Sample records for mainframe computer systems

  1. The role of the host in a cooperating mainframe and workstation environment, volumes 1 and 2

    NASA Technical Reports Server (NTRS)

    Kusmanoff, Antone; Martin, Nancy L.

    1989-01-01

    In recent years, advancements made in computer systems have prompted a move from centralized computing based on timesharing a large mainframe computer to distributed computing based on a connected set of engineering workstations. A major factor in this advancement is the increased performance and lower cost of engineering workstations. The shift to distributed computing from centralized computing has led to challenges associated with the residency of application programs within the system. In a combined system of multiple engineering workstations attached to a mainframe host, the question arises as to how a system designer should assign applications between the larger mainframe host and the smaller, yet powerful, workstation. The concepts related to real-time data processing are analyzed, and systems are presented which use a host mainframe and a number of engineering workstations interconnected by a local area network. In most cases, distributed systems can be classified as having a single function or multiple functions and as executing programs in real time or non-real time. In a system of multiple computers, the degree of autonomy of the computers is important; a system with one master control computer generally differs in reliability, performance, and complexity from a system in which all computers share the control. This research is concerned with generating general criteria for software residency decisions (host or workstation) for a diverse yet coupled group of users (the clustered workstations) which may need the use of a shared resource (the mainframe) to perform their functions.

  2. Teach or No Teach: Is Large System Education Resurging?

    ERIC Educational Resources Information Center

    Sharma, Aditya; Murphy, Marianne C.

    2011-01-01

    Legacy or not, mainframe education is being taught at many U.S. universities. Some computer science programs have always had some large system content, but there does appear to be a resurgence of mainframe-related content in business programs such as Management Information Systems (MIS) and Computer Information Systems (CIS). Many companies such as…

  3. The Future SBSS: Migration of SBSS Functions from a Mainframe Environment to a Distributed PC-Based LAN Environment

    DTIC Science & Technology

    1993-09-01

    ...replace SBLC mainframes (1:A-8). RPC and SBLC computers are, in general, UNISYS mainframes (1:A-6). In 1997, the UNISYS mainframe contract will...expire, and RPC systems will move to open systems architectures (1:A-8). At this time, the UNISYS mainframe platforms may be replaced with other platforms

  4. DFLOW USER'S MANUAL

    EPA Science Inventory

    DFLOW is a computer program for estimating design stream flows for use in water quality studies. The manual describes the use of the program on both the EPA's IBM mainframe system and on a personal computer (PC). The mainframe version of DFLOW can extract a river's daily flow rec...

  5. 45 CFR Appendix C to Part 1355 - Electronic Data Transmission Format

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... mainframe-to-mainframe data exchange system using the Sterling Software data transfer package called “SUPERTRACS.” This package will allow data exchange between most computer platforms (both mini and mainframe...

  6. 45 CFR Appendix C to Part 1355 - Electronic Data Transmission Format

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... mainframe-to-mainframe data exchange system using the Sterling Software data transfer package called “SUPERTRACS.” This package will allow data exchange between most computer platforms (both mini and mainframe...

  7. Computer-generated mineral commodity deposit maps

    USGS Publications Warehouse

    Schruben, Paul G.; Hanley, J. Thomas

    1983-01-01

    This report describes an automated method of generating deposit maps of mineral commodity information. In addition, it serves as a user's manual for the authors' mapping system. Procedures were developed which allow commodity specialists to enter deposit information, retrieve selected data, and plot deposit symbols in any geographic area within the conterminous United States. The mapping system uses both micro- and mainframe computers. The microcomputer is used to input and retrieve information, thus minimizing computing charges. The mainframe computer is used to generate map plots which are printed by a Calcomp plotter. Selector V data base system is employed for input and retrieval on the microcomputer. A general mapping program (Genmap) was written in FORTRAN for use on the mainframe computer. Genmap can plot fifteen symbol types (for point locations) in three sizes. The user can assign symbol types to data items interactively. Individual map symbols can be labeled with a number or the deposit name. Genmap also provides several geographic boundary file and window options.
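    The interactive symbol assignment described above maps each data item to one of Genmap's fifteen symbol types and three sizes. The sketch below is a purely illustrative re-creation of that idea in Python (the original Genmap is a FORTRAN mainframe program); the symbol names, sizes, and sample deposit records are hypothetical.

```python
# Illustrative sketch only; Genmap itself is a FORTRAN mainframe program.
# Symbol names, sizes, and the sample deposit records are hypothetical.

SYMBOLS = [f"symbol_{i:02d}" for i in range(1, 16)]   # 15 plottable symbol types
SIZES = ("small", "medium", "large")                  # 3 symbol sizes

deposits = [
    {"name": "Deposit A", "commodity": "copper", "lat": 39.5, "lon": -110.2},
    {"name": "Deposit B", "commodity": "gold",   "lat": 44.1, "lon": -115.8},
]

def assign_symbols(records, commodity_to_symbol, default=(SYMBOLS[0], SIZES[1])):
    """Attach a (symbol, size) pair to each deposit record, keyed by commodity."""
    for rec in records:
        rec["symbol"], rec["size"] = commodity_to_symbol.get(rec["commodity"], default)
    return records

# The specialist's "interactive" choice is represented here by a simple dict.
mapping = {"copper": ("symbol_03", "large"), "gold": ("symbol_07", "small")}
for rec in assign_symbols(deposits, mapping):
    print(f'{rec["name"]}: {rec["symbol"]} ({rec["size"]}) at {rec["lat"]}, {rec["lon"]}')
```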

  8. Networking the Home and University: How Families Can Be Integrated into Proximate/Distant Computer Systems.

    ERIC Educational Resources Information Center

    Watson, J. Allen; And Others

    1989-01-01

    Describes study that was conducted to determine the feasibility of networking home microcomputers with a university mainframe system in order to investigate a new family process research paradigm, as well as the design and function of the microcomputer/mainframe system. Test instrumentation is described and systems' reliability and validity are…

  9. CD-ROM source data uploaded to the operating and storage devices of an IBM 3090 mainframe through a PC terminal.

    PubMed

    Boros, L G; Lepow, C; Ruland, F; Starbuck, V; Jones, S; Flancbaum, L; Townsend, M C

    1992-07-01

    A powerful method of processing MEDLINE and CINAHL source data uploaded to the IBM 3090 mainframe computer through an IBM/PC is described. Data are first downloaded from the CD-ROM's PC devices to floppy disks. These disks are then uploaded to the mainframe computer through an IBM/PC equipped with the WordPerfect text editor and a computer network connection (SONNGATE). Before downloading, keywords specifying the information to be accessed are typed at the FIND prompt of the CD-ROM station. The resulting abstracts are downloaded into a file called DOWNLOAD.DOC. The floppy disks containing the information are simply carried to an IBM/PC which has a terminal emulation (TELNET) connection to the university-wide computer network (SONNET) at the Ohio State University Academic Computing Services (OSU ACS). WordPerfect (5.1) processes and saves the text in DOS format. Using the File Transfer Protocol (FTP, 130,000 bytes/s) of SONNET, the entire text containing the information obtained through the MEDLINE and CINAHL search is transferred to the remote mainframe computer for further processing. At this point, abstracts in the specified area are ready for immediate access and multiple retrieval by any PC having a network switch or dial-in connection after the USER ID, PASSWORD and ACCOUNT NUMBER are specified by the user. The system provides the user with an on-line, very powerful and quick method of searching for words specifying: diseases, agents, experimental methods, animals, authors, and journals in the research area downloaded. The user can also copy the TItles, AUthors and SOurce with optional parts of abstracts into papers being edited. This arrangement serves the special demands of a research laboratory by handling MEDLINE and CINAHL source data resulting after a search is performed with keywords specified for ongoing projects. Since the Ohio State University has a centrally funded mainframe system, the data upload, storage and mainframe operations are free.
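    For readers who want a feel for the two automated steps in this workflow, the sketch below is a hedged, modern re-creation in Python: an FTP upload of the DOWNLOAD.DOC text followed by a simple keyword search over the uploaded abstracts. The host name, credentials, and record delimiter are assumptions; the original procedure used WordPerfect, TELNET/SONNET, and IBM 3090 facilities rather than this code.

```python
# Hedged re-creation of the two automated steps the abstract describes:
# (1) transfer the DOWNLOAD.DOC text to a remote host over FTP,
# (2) search the abstracts by keyword.
# Host name, credentials, file names, and the blank-line record delimiter
# are hypothetical; this is not the original 1992 workflow software.
from ftplib import FTP

def upload_abstracts(host, user, password, local_path="DOWNLOAD.DOC",
                     remote_path="download.doc"):
    """Send the DOS-format abstract file to the remote system over FTP."""
    with FTP(host) as ftp, open(local_path, "rb") as fh:
        ftp.login(user=user, passwd=password)
        ftp.storbinary(f"STOR {remote_path}", fh)

def search_abstracts(path, keywords):
    """Return abstracts (assumed blank-line separated) mentioning every keyword."""
    with open(path, encoding="latin-1") as fh:
        records = fh.read().split("\n\n")
    wanted = [kw.lower() for kw in keywords]
    return [rec for rec in records if all(kw in rec.lower() for kw in wanted)]

# Example usage (hypothetical host and credentials):
# upload_abstracts("mainframe.example.edu", "userid", "password")
# hits = search_abstracts("DOWNLOAD.DOC", ["trauma", "outcome"])
```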

  10. Alive and Kicking: Making the Case for Mainframe Education

    ERIC Educational Resources Information Center

    Murphy, Marianne C.; Sharma, Aditya; Seay, Cameron; McClelland, Marilyn K.

    2010-01-01

    As universities continually update and assess their curriculums, mainframe computing is quite often overlooked, as it is often thought of as a "legacy" computer. Mainframe computing appears to be either uninteresting or is thought of as a computer past its prime. However, both assumptions are leading to a shortage of IS professionals in the…

  11. Performance Comparison of Mainframe, Workstations, Clusters, and Desktop Computers

    NASA Technical Reports Server (NTRS)

    Farley, Douglas L.

    2005-01-01

    A performance evaluation of a variety of computers frequently found in a scientific or engineering research environment was conducted using synthetic and application program benchmarks. From a performance perspective, emerging commodity processors have superior performance relative to legacy mainframe computers. In many cases, the PC clusters exhibited comparable performance with traditional mainframe hardware when 8-12 processors were used. The main advantage of the PC clusters was related to their cost. Regardless of whether the clusters were built from new computers or created from retired computers, their performance-to-cost ratio was superior to that of the legacy mainframe computers. Finally, the typical annual maintenance cost of legacy mainframe computers is several times the cost of new equipment such as multiprocessor PC workstations. The savings from eliminating the annual maintenance fee on legacy hardware can result in a yearly increase in total computational capability for an organization.

  12. SPIRES Tailored to a Special Library: A Mainframe Answer for a Small Online Catalog.

    ERIC Educational Resources Information Center

    Newton, Mary

    1989-01-01

    Describes the design and functions of a technical library database maintained on a mainframe computer and supported by the SPIRES database management system. The topics covered include record structures, vocabulary control, input procedures, searching features, time considerations, and cost effectiveness. (three references) (CLB)

  13. Mass Storage Systems.

    ERIC Educational Resources Information Center

    Ranade, Sanjay; Schraeder, Jeff

    1991-01-01

    Presents an overview of the mass storage market and discusses mass storage systems as part of computer networks. Systems for personal computers, workstations, minicomputers, and mainframe computers are described; file servers are explained; system integration issues are raised; and future possibilities are suggested. (LRW)

  14. A Low Cost Micro-Computer Based Local Area Network for Medical Office and Medical Center Automation

    PubMed Central

    Epstein, Mel H.; Epstein, Lynn H.; Emerson, Ron G.

    1984-01-01

    A low-cost microcomputer-based local area network for medical office automation is described which makes use of an array of multiple and different personal computers interconnected by a local area network. Each computer on the network functions as a fully capable workstation for data entry and report generation. The network allows each workstation complete access to the entire database. Additionally, designated computers may serve as access ports for remote terminals. Through “Gateways” the network may serve as a front end for a large mainframe, or may interface with another network. The system provides for the medical office environment the expandability and flexibility of a multi-terminal mainframe system at a far lower cost without sacrifice of performance.

  15. Rotordynamics on the PC: Transient Analysis With ARDS

    NASA Technical Reports Server (NTRS)

    Fleming, David P.

    1997-01-01

    Personal computers can now do many jobs that formerly required a large mainframe computer. An example is NASA Lewis Research Center's program Analysis of RotorDynamic Systems (ARDS), which uses the component mode synthesis method to analyze the dynamic motion of up to five rotating shafts. As originally written in the early 1980's, this program was considered large for the mainframe computers of the time. ARDS, which was written in Fortran 77, has been successfully ported to a 486 personal computer. Plots appear on the computer monitor via calls programmed for the original CALCOMP plotter; plots can also be output on a standard laser printer. The executable code, which uses the full array sizes of the mainframe version, easily fits on a high-density floppy disk. The program runs under DOS with an extended memory manager. In addition to transient analysis of blade loss, step turns, and base acceleration, with simulation of squeeze-film dampers and rubs, ARDS calculates natural frequencies and unbalance response.

  16. Hospital mainframe computer documentation of pharmacist interventions.

    PubMed

    Schumock, G T; Guenette, A J; Clark, T; McBride, J M

    1993-07-01

    The hospital mainframe computer pharmacist intervention documentation system described has successfully facilitated the recording, communication, analysis, and reporting of interventions at our hospital. It has proven to be time efficient, accessible, and user-friendly from the standpoint of both the pharmacist and administrator. The advantages of this system greatly outweigh those of manual documentation and justify the initial time investment in its design and development. In the future, it is hoped that the system can have even broader impact. Documented interventions and recommendations can be made accessible to medical and nursing staff and, as such, further increase interdepartmental communication. As pharmacists embrace the pharmaceutical care mandate, documenting interventions in patient care will continue to grow in importance. Complete documentation is essential if pharmacists are to assume responsibility for patient outcomes. With time at an ever-increasing premium, and with economic and human resources dwindling, an efficient and effective means of recording and tracking pharmacist interventions will become imperative for survival in the fiscally challenged health care arena. Documentation of pharmacist intervention using a hospital mainframe computer at UIH has proven both efficient and effective.

  17. ARDS User Manual

    NASA Technical Reports Server (NTRS)

    Fleming, David P.

    2001-01-01

    Personal computers (PCs) are now used extensively for engineering analysis. Their capability exceeds that of mainframe computers of only a few years ago. Programs originally written for mainframes have been ported to PCs to make their use easier. One of these programs is ARDS (Analysis of Rotor Dynamic Systems), which was developed at Arizona State University (ASU) by Nelson et al. to quickly and accurately analyze rotor steady-state and transient response using the method of component mode synthesis. The original ARDS program was ported to the PC in 1995. Several extensions were made at ASU to increase the capability of mainframe ARDS. These extensions have also been incorporated into the PC version of ARDS. Each mainframe extension had its own user manual, generally covering only that extension. Thus, exploiting the full capability of ARDS required a large set of user manuals. Moreover, necessary changes and enhancements for PC ARDS were undocumented. The present document is intended to remedy those problems by combining all pertinent information needed for the use of PC ARDS into one volume.

  18. Advanced On-the-Job Training System: System Specification

    DTIC Science & Technology

    1990-05-01

    3.1.5.2.10 Evaluation Subsystem for the Training Development and Delivery Subsystem...3.1.5.2.11 Training Development and Delivery Subsystem...e. Alsys Ada compiler; f. Ethernet Local Area Network reference manual(s); g. Infotron 992 network reference manual(s); h. Computer Program Source...1989: a. Daily check of mainframe components, including all elements critical to support the terminal network; b. Restoration of mainframe equipment

  19. A data acquisition and storage system for the ion auxiliary propulsion system cyclic thruster test

    NASA Technical Reports Server (NTRS)

    Hamley, John A.

    1989-01-01

    A nine-track tape drive interfaced to a standard personal computer was used to transport data from a remote test site to the NASA Lewis mainframe computer for analysis. The Cyclic Ground Test of the Ion Auxiliary Propulsion System (IAPS), which successfully achieved its goal of 2557 cycles and 7057 hr of thrusting beam-on time, generated several megabytes of test data over many months of continuous testing. A flight-like controller and power supply were used to control the thruster and acquire data. Thruster data were converted to RS232 format and transmitted to a personal computer, which stored the raw digital data on the nine-track tape. The tape format was such that, with minor modifications, mainframe flight data analysis software could be used to analyze the Cyclic Ground Test data. The personal computer also converted the digital data to engineering units and displayed real-time thruster parameters. Hardcopy data was printed at a rate dependent on thruster operating conditions. The tape drive provided a convenient means to transport the data to the mainframe for analysis, and avoided a development effort for new data analysis software for the Cyclic test. This paper describes the data system, interfacing and software requirements.
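    One step the abstract describes is the PC converting raw digital data to engineering units for real-time display. The sketch below illustrates that kind of linear counts-to-units conversion in Python; it is not the original ground-test software, and the channel names, scale factors, and offsets are hypothetical.

```python
# Illustrative sketch only (not the original IAPS ground-test software):
# converting raw telemetry counts to engineering units with a linear
# calibration, the kind of step the abstract says the PC performed for
# real-time display. Channel names, scales, and offsets are hypothetical.

CALIBRATION = {
    # channel: (scale, offset) so that value = counts * scale + offset
    "beam_current_mA": (0.25,  0.0),
    "beam_voltage_V":  (0.50, -5.0),
    "discharge_V":     (0.02,  0.0),
}

def to_engineering_units(raw_frame):
    """Convert a dict of raw integer counts into engineering units."""
    converted = {}
    for channel, counts in raw_frame.items():
        scale, offset = CALIBRATION[channel]
        converted[channel] = counts * scale + offset
    return converted

frame = {"beam_current_mA": 290, "beam_voltage_V": 2400, "discharge_V": 1850}
print(to_engineering_units(frame))
```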

  20. From micro to mainframe. A practical approach to perinatal data processing.

    PubMed

    Yeh, S Y; Lincoln, T

    1985-04-01

    A new, practical approach to perinatal data processing for a large obstetric population is described. This was done with a microcomputer for data entry and a mainframe computer for data reduction. The Screen Oriented Data Access (SODA) program was used to generate the data entry form and to input data into the Apple II Plus computer. Data were stored on diskettes and transmitted through a modem and telephone line to the IBM 370/168 computer. The Statistical Analysis System (SAS) program was used for statistical analyses and report generation. This approach was found to be most practical, flexible, and economical.

  1. Unix becoming healthcare's standard operating system.

    PubMed

    Gardner, E

    1991-02-11

    An unfamiliar buzzword is making its way into healthcare executives' vocabulary, as well as their computer systems. Unix is being touted by many industry observers as the most likely candidate to be a standard operating system for minicomputers, mainframes and computer networks.

  2. Specification of Computer Systems by Objectives.

    ERIC Educational Resources Information Center

    Eltoft, Douglas

    1989-01-01

    Discusses the evolution of mainframe and personal computers, and presents a case study of a network developed at the University of Iowa called the Iowa Computer-Aided Engineering Network (ICAEN) that combines Macintosh personal computers with Apollo workstations. Functional objectives are stressed as the best measure of system performance. (LRW)

  3. Cost-effective use of minicomputers to solve structural problems

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Foster, E. P.

    1978-01-01

    Minicomputers are receiving increased use throughout the aerospace industry. Until recently, their use focused primarily on process control and numerically controlled tooling applications, while their exposure to and the opportunity for structural calculations have been limited. With the increased availability of this computer hardware, the question arises as to the feasibility and practicality of carrying out comprehensive structural analysis on a minicomputer. This paper presents results on the potential for using minicomputers for structural analysis by (1) selecting a comprehensive, finite-element structural analysis system in use on large mainframe computers; (2) implementing the system on a minicomputer; and (3) comparing the performance of the minicomputers with that of a large mainframe computer for the solution to a wide range of finite element structural analysis problems.

  4. The transition of GTDS to the Unix workstation environment

    NASA Technical Reports Server (NTRS)

    Carter, D.; Metzinger, R.; Proulx, R.; Cefola, P.

    1995-01-01

    Future Flight Dynamics systems should take advantage of the possibilities provided by current and future generations of low-cost, high performance workstation computing environments with Graphical User Interface. The port of the existing mainframe Flight Dynamics systems to the workstation environment offers an economic approach for combining the tremendous engineering heritage that has been encapsulated in these systems with the advantages of the new computing environments. This paper will describe the successful transition of the Draper Laboratory R&D version of GTDS (Goddard Trajectory Determination System) from the IBM Mainframe to the Unix workstation environment. The approach will be a mix of historical timeline notes, descriptions of the technical problems overcome, and descriptions of associated SQA (software quality assurance) issues.

  5. An evaluation of superminicomputers for thermal analysis

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Vidal, J. B.; Jones, G. K.

    1962-01-01

    The feasibility and cost effectiveness of solving thermal analysis problems on superminicomputers is demonstrated. Conventional thermal analysis and the changing computer environment, computer hardware and software used, six thermal analysis test problems, performance of superminicomputers (CPU time, accuracy, turnaround, and cost) and comparison with large computers are considered. Although the CPU times for superminicomputers were 15 to 30 times greater than the fastest mainframe computer, the minimum cost to obtain the solutions on superminicomputers was from 11 percent to 59 percent of the cost of mainframe solutions. The turnaround (elapsed) time is highly dependent on the computer load, but for large problems, superminicomputers produced results in less elapsed time than a typically loaded mainframe computer.
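    The cost argument can be made concrete with a small worked example: even at 15 to 30 times the CPU time, a sufficiently lower charge rate yields a total job cost well below the mainframe's. The Python sketch below uses hypothetical hourly rates chosen only to land inside the 11 to 59 percent range reported; it is not data from the study.

```python
# Worked example of the cost comparison the abstract reports, with made-up
# charge rates: even when a superminicomputer needs 15-30x more CPU time,
# a much lower rate per CPU-hour can make the total job cost a fraction of
# the mainframe cost. The rates below are hypothetical.

def job_cost(cpu_hours, rate_per_cpu_hour):
    return cpu_hours * rate_per_cpu_hour

mainframe_cpu_hours = 1.0
mainframe_rate = 600.0        # hypothetical $/CPU-hour
supermini_cpu_hours = 20.0    # ~20x slower, within the 15-30x range quoted
supermini_rate = 10.0         # hypothetical $/CPU-hour

mainframe_cost = job_cost(mainframe_cpu_hours, mainframe_rate)
supermini_cost = job_cost(supermini_cpu_hours, supermini_rate)
print(f"supermini cost is {100 * supermini_cost / mainframe_cost:.0f}% "
      f"of the mainframe cost")   # -> 33% with these assumed rates
```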

  6. COMPUTER MODELS/EPANET

    EPA Science Inventory

    Pipe network flow analysis was among the first civil engineering applications programmed for solution on the early commercial mainframe computers in the 1960s. Since that time, advancements in analytical techniques and computing power have enabled us to solve systems with tens o...

  7. 36 CFR 1236.2 - What definitions apply to this part?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... users but does not retain any transmission data), data systems used to collect and process data that have been organized into data files or data bases on either personal computers or mainframe computers...

  8. 36 CFR 1236.2 - What definitions apply to this part?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... users but does not retain any transmission data), data systems used to collect and process data that have been organized into data files or data bases on either personal computers or mainframe computers...

  9. 36 CFR 1236.2 - What definitions apply to this part?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... users but does not retain any transmission data), data systems used to collect and process data that have been organized into data files or data bases on either personal computers or mainframe computers...

  10. 36 CFR 1236.2 - What definitions apply to this part?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... users but does not retain any transmission data), data systems used to collect and process data that have been organized into data files or data bases on either personal computers or mainframe computers...

  11. Wireless Computers: Radio and Light Communications May Bring New Freedom to Computing.

    ERIC Educational Resources Information Center

    Hartmann, Thom

    1984-01-01

    Describes systems which use wireless terminals to communicate with mainframe computers or minicomputers via radio band, discusses their limitations, and gives examples of networks using such systems. The use of communications satellites to increase their range and the possibility of using light beams to transmit data are also discussed. (MBR)

  12. 36 CFR § 1236.2 - What definitions apply to this part?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... users but does not retain any transmission data), data systems used to collect and process data that have been organized into data files or data bases on either personal computers or mainframe computers...

  13. Using a Cray Y-MP as an array processor for a RISC Workstation

    NASA Technical Reports Server (NTRS)

    Lamaster, Hugh; Rogallo, Sarah J.

    1992-01-01

    As microprocessors increase in power, the economics of centralized computing has changed dramatically. At the beginning of the 1980's, mainframes and supercomputers were often considered to be cost-effective machines for scalar computing. Today, microprocessor-based RISC (reduced-instruction-set computer) systems have displaced many uses of mainframes and supercomputers. Supercomputers are still cost competitive when processing jobs that require both large memory size and high memory bandwidth. One such application is array processing. Certain numerical operations are appropriate to use in a Remote Procedure Call (RPC)-based environment. Matrix multiplication is an example of an operation that can have a sufficient number of arithmetic operations to amortize the cost of an RPC call. An experiment is described which demonstrates that matrix multiplication can be executed remotely on a large system to speed execution beyond that experienced on a workstation.
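    The amortization argument rests on the fact that an n x n matrix multiply performs on the order of 2n^3 arithmetic operations while moving only about 3n^2 values, so the remote machine's arithmetic advantage eventually dwarfs the fixed RPC and transfer costs. The Python sketch below models that trade-off with entirely hypothetical machine parameters; it is not drawn from the experiment described.

```python
# Back-of-the-envelope model of the amortization argument: a matrix multiply
# costs ~2*n**3 floating-point operations but only ~3*n**2 values of data
# movement, so for large enough n the remote machine's arithmetic advantage
# outweighs RPC and transfer overhead. All parameters are hypothetical.

def local_time(n, local_flops=5e6):
    """Seconds to multiply two n x n matrices on the workstation."""
    return 2 * n**3 / local_flops

def remote_time(n, remote_flops=200e6, bandwidth_bytes=1e6, rpc_overhead=0.05):
    """Seconds including remote compute, data transfer, and a fixed RPC cost."""
    compute = 2 * n**3 / remote_flops
    transfer = 3 * n**2 * 8 / bandwidth_bytes   # two inputs + result, 8-byte reals
    return compute + transfer + rpc_overhead

for n in (64, 128, 256, 512, 1024):
    speedup = local_time(n) / remote_time(n)
    print(f"n={n:5d}  remote speedup ~{speedup:4.1f}x")
```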

  14. Experience with a UNIX based batch computing facility for H1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhards, R.; Kruener-Marquis, U.; Szkutnik, Z.

    1994-12-31

    A UNIX based batch computing facility for the H1 experiment at DESY is described. The ultimate goal is to replace the DESY IBM mainframe by a multiprocessor SGI Challenge series computer, using the UNIX operating system, for most of the computing tasks in H1.

  15. Real-time data system: Incorporating new technology in mission critical environments

    NASA Technical Reports Server (NTRS)

    Muratore, John F.; Heindel, Troy A.

    1990-01-01

    If the Space Station Freedom is to remain viable over its 30-year life span, it must be able to incorporate new information systems technologies. These technologies are necessary to enhance mission effectiveness and to enable new NASA missions, such as supporting the Lunar-Mars Initiative. High-definition television (HDTV), neural nets, model-based reasoning, advanced languages, CPU designs, and computer networking standards are areas which have been forecast to make major strides in the next 30 years. A major challenge to NASA is to bring these technologies online without compromising mission safety. In past programs, NASA managers have been understandably reluctant to rely on new technologies for mission critical activities until they are proven in noncritical areas. NASA must develop strategies to allow inflight confidence building and migration of technologies into the trusted tool base. NASA has successfully met this challenge and developed a winning strategy in the Space Shuttle Mission Control Center. This facility, which is clearly among NASA's most critical, is based on 1970's mainframe architecture. Changes to the mainframe are very expensive due to the extensive testing required to prove that changes do not have unanticipated impact on critical processes. Systematic improvement efforts in this facility have been delayed due to this 'risk to change.' In the real-time data system (RTDS) we have introduced a network of engineering computer workstations which run in parallel to the mainframe system. These workstations are located next to flight controller operating positions in mission control and, in some cases, the display units are mounted in the traditional mainframe consoles. This system incorporates several major improvements over the mainframe consoles, including automated fault detection by real-time expert systems and color graphic animated schematics of subsystems driven by real-time telemetry. The workstations have the capability of recording telemetry data and providing 'instant replay' for flight controllers. RTDS also provides unique graphics animated by real-time telemetry, such as workstation emulation of the shuttle's flight instruments and displays of the remote manipulator system (RMS) position. These systems have been used successfully as prime operational tools since STS-26 and have supported seven shuttle missions.

  16. (Some) Computer Futures: Mainframes.

    ERIC Educational Resources Information Center

    Joseph, Earl C.

    Possible futures for the world of mainframe computers can be forecast through studies identifying forces of change and their impact on current trends. Some new prospects for the future have been generated by advances in information technology; for example, recent United States successes in applied artificial intelligence (AI) have created new…

  17. The Ghost of Computers Past, Present, and Future: Computer Use for Preservice/Inservice Reading Programs.

    ERIC Educational Resources Information Center

    Prince, Amber T.

    Computer assisted instruction, and especially computer simulations, can help to ensure that preservice and inservice teachers learn from the right experiences. In the past, colleges of education used large mainframe computer systems to store student registration, provide simulation lessons on diagnosing reading difficulties, construct informal…

  18. In-House Automation of a Small Library Using a Mainframe Computer.

    ERIC Educational Resources Information Center

    Waranius, Frances B.; Tellier, Stephen H.

    1986-01-01

    An automated library routine management system was developed in-house to create a system unique to the Library and Information Center, Lunar and Planetary Institute, Houston, Texas. A modular approach was used to allow continuity in operations and services as the system was implemented. Acronyms, computer accounts, and file names are appended.…

  19. Officer Computer Utilization Report

    DTIC Science & Technology

    1992-03-01

    Shipboard Non-tactical ADP Program (SNAP), Navy Intelligence Processing System (NIPS), Retail Operation Management (ROM)). Mainframe - An extremely...ADP Program (SNAP), Navy Intelligence Processing System (NIPS), Retail Operation Management (ROM), etc.) 7. Technical/tactical systems (e.g

  20. Integrated Computer-Aided Drafting Instruction (ICADI).

    ERIC Educational Resources Information Center

    Chen, C. Y.; McCampbell, David H.

    Until recently, computer-aided drafting and design (CAD) systems were almost exclusively operated on mainframes or minicomputers and their cost prohibited many schools from offering CAD instruction. Today, many powerful personal computers are capable of performing the high-speed calculation and analysis required by the CAD application; however,…

  1. The Use of Microcomputers in Distance Teaching Systems. ZIFF Papiere 70.

    ERIC Educational Resources Information Center

    Rumble, Greville

    Microcomputers have revolutionized distance education in virtually every area. Used alone, personal computers provide students with a wide range of utilities, including word processing, graphics packages, and spreadsheets. When linked to a mainframe computer or connected to other personal computers in local area networks, microcomputers can…

  2. NASA Advanced Supercomputing (NAS) User Services Group

    NASA Technical Reports Server (NTRS)

    Pandori, John; Hamilton, Chris; Niggley, C. E.; Parks, John W. (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides an overview of NAS (NASA Advanced Supercomputing), its goals, and its mainframe computer assets. Also covered are its functions, including systems monitoring and technical support.

  3. The Changing Environment of Management Information Systems.

    ERIC Educational Resources Information Center

    Tagawa, Ken

    1982-01-01

    The promise of mainframe computers in the 1970s for management information systems (MIS) is largely unfulfilled, and newer office automation systems and data communication systems are designed to be responsive to MIS needs. The status of these innovations is briefly outlined. (MSE)

  4. AI tools in computer based problem solving

    NASA Technical Reports Server (NTRS)

    Beane, Arthur J.

    1988-01-01

    The use of computers to solve value-oriented, deterministic, algorithmic problems has evolved a structured life-cycle model of the software process. The symbolic processing techniques used, primarily in research, for solving nondeterministic problems, and those for which an algorithmic solution is unknown, have evolved a different model, much less structured. Traditionally, the two approaches have been used completely independently. With the advent of low-cost, high-performance 32-bit workstations executing the same software as large minicomputers and mainframes, it became possible to begin to merge both models into a single extended model of computer problem solving. The implementation of such an extended model on a VAX family of micro/mini/mainframe systems is described. Examples in both development and deployment of applications involving a blending of AI and traditional techniques are given.

  5. Report on the Acceptance Test of the CRI Y-MP 8128, 10 February - 12 March 1990

    NASA Technical Reports Server (NTRS)

    Carter, Russell; Kutler, Paul (Technical Monitor)

    1998-01-01

    The NAS Numerical Aerodynamic Simulation Facility's HSP 2 computer system, a CRI Y-MP 832 SN #1002, underwent a major hardware upgrade in February of 1990. The 32 MWord, 6.3 ns mainframe component of the system was replaced with a 128 MWord, 6.0 ns CRI Y-MP 8128 mainframe, SN #1030. A 30 day Acceptance Test of the computer system was performed by the NAS RND HSP group from 08:00 February 10, 1990 to 08:00 March 12, 1990. Overall responsibility for the RND HSP Acceptance Test was assumed by Duane Carbon. The terms of the contract required that the SN #1030 achieve an effectiveness level of greater than or equal to ninety (90) percent for 30 consecutive days within a 60 day time frame. After the first thirty days, the effectiveness level of SN #1030 was 94.4 percent, hence the acceptance test was passed.

  6. Development of 3-Year Roadmap to Transform the Discipline of Systems Engineering

    DTIC Science & Technology

    2010-03-31

    quickly humans could physically construct them. Indeed, magnetic core memory was entirely constructed by human hands until it was superseded by...For their mainframe computers, IBM develops the applications, operating system, computer hardware and microprocessors (off the shelf standard memory ...processor developers work on potential computational and memory pipelines to support the required performance capabilities and use the available transistors

  7. SIPP ACCESS: Information Tools Improve Access to National Longitudinal Panel Surveys.

    ERIC Educational Resources Information Center

    Robbin, Alice; David, Martin

    1988-01-01

    A computer-based, integrated information system incorporating data and information about the data, SIPP ACCESS systematically links technologies of laser disk, mainframe computer, microcomputer, and electronic networks, and applies relational technology to provide access to information about complex statistical data collections. Examples are given…

  8. Electronic Campus Meets Today's Education Mission.

    ERIC Educational Resources Information Center

    Swalec, John J.; And Others

    Waubonsee Community College (WCC) employs electronic technology to meet the needs of its students and community in virtually every phase of campus operations. WCC's Information System Center, housing three mainframe computers, drives an online registration system, a computerized self-registration system that can be accessed by telephone from…

  9. Don't Gamble with Y2K Compliance.

    ERIC Educational Resources Information Center

    Sturgeon, Julie

    1999-01-01

    Examines one school district's (Clark County, Nevada) response to the Y2K computer problem and provides tips on time-saving Y2K preventive measures other school districts can use. Explains how the district debugged its computer system, including mainframe considerations and client-server applications. Highlights office equipment and teaching…

  10. PLATO and the English Curriculum.

    ERIC Educational Resources Information Center

    Macgregor, William B.

    PLATO differs from other computer assisted instruction in that it is truly a system, employing a powerful mainframe computer and connecting its users to each other and to the people running it. The earliest PLATO materials in English were drill and practice programs, an improvement over written texts, but a small one. Unfortunately, game lessons,…

  11. Distributed Processing with a Mainframe-Based Hospital Information System: A Generalized Solution

    PubMed Central

    Kirby, J. David; Pickett, Michael P.; Boyarsky, M. William; Stead, William W.

    1987-01-01

    Over the last two years the Medical Center Information Systems Department at Duke University Medical Center has been developing a systematic approach to distributing the processing and data involved in computerized applications at DUMC. The resulting system has been named MAPS, the Micro-ADS Processing System. A key characteristic of MAPS is that it makes it easy to execute any existing mainframe ADS application with a request from a PC. This extends the functionality of the mainframe application set to the PC without compromising the maintainability of the PC or mainframe systems.

  12. 1986 Petroleum Software Directory. [800 mini, micro and mainframe computer software packages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1985-01-01

    Pennwell's 1986 Petroleum Software Directory is a complete listing of software created specifically for the petroleum industry. Details are provided on over 800 mini, micro and mainframe computer software packages from more than 250 different companies. An accountant can locate programs to automate bookkeeping functions in large oil and gas production firms. A pipeline engineer will find programs designed to calculate line flow and wellbore pressure drop.

  13. The Computerized Reference Department: Buying the Future.

    ERIC Educational Resources Information Center

    Kriz, Harry M.; Kok, Victoria T.

    1985-01-01

    Basis for systematic computerization of academic research library's reference, collection development, and collection management functions emphasizes productivity enhancement for librarians and support staff. Use of microcomputer and university's mainframe computer to develop applications of database management systems, electronic spreadsheets,…

  14. Air traffic control : good progress on interim replacement for outage-plagued system, but risks can be further reduced

    DOT National Transportation Integrated Search

    1996-10-01

    Certain air traffic control (ATC) centers experienced a series of major outages, some of which were caused by the Display Channel Complex or DCC, a mainframe computer system that processes radar and other data into displayable images on controlle...

  15. Colleges' Effort To Prepare for Y2K May Yield Benefits for Many Years.

    ERIC Educational Resources Information Center

    Olsen, Florence

    2000-01-01

    Suggests that the money spent ($100 billion) to fix the Y2K bug in the United States resulted in improved campus computer systems. Reports from campuses around the country indicate that both mainframe and desktop systems experienced fewer problems than expected. (DB)

  16. Micro and Mainframe Computer Models for Improved Planning in Awarding Financial Aid to Disadvantaged Students.

    ERIC Educational Resources Information Center

    Attinasi, Louis C., Jr.; Fenske, Robert H.

    1988-01-01

    Two computer models used at Arizona State University recognize the tendency of students from low-income and minority backgrounds to apply for assistance late in the funding cycle. They permit administrators to project the amount of aid needed by such students. The Financial Aid Computerized Tracking System is described. (Author/MLW)

  17. Report of the Defense Science Board Task Force on Military Applications of New-Generation Computing Technologies.

    DTIC Science & Technology

    1984-12-01

    During the 1980's we are seeing enhancement of breadth, power, and accessibility of computers in many dimensions: (1) powerful, costly, fragile mainframes for...MEMORANDUM FOR THE CHAIRMAN, DEFENSE SCIENCE BOARD...SUBJECT: Defense Science Board Task Force on Supercomputer Applications. You are requested to

  18. Macroeconomic Activity Module - NEMS Documentation

    EIA Publications

    2016-01-01

    Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Macroeconomic Activity Module (MAM) used to develop the Annual Energy Outlook for 2016 (AEO2016). The report catalogues and describes the module assumptions, computations, methodology, parameter estimation techniques, and mainframe source code

  19. Front End Software for Online Database Searching Part 1: Definitions, System Features, and Evaluation.

    ERIC Educational Resources Information Center

    Hawkins, Donald T.; Levy, Louise R.

    1985-01-01

    This initial article in series of three discusses barriers inhibiting use of current online retrieval systems by novice users and notes reasons for front end and gateway online retrieval systems. Definitions, front end features, user interface, location (personal computer, host mainframe), evaluation, and strengths and weaknesses are covered. (16…

  20. 21. SITE BUILDING 002 SCANNER BUILDING LOOKING AT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    21. SITE BUILDING 002 - SCANNER BUILDING - LOOKING AT DISC STORAGE SYSTEMS A AND B (A OR B ARE REDUNDANT SYSTEMS), ONE MAINFRAME COMPUTER ON LINE, ONE ON STANDBY WITH STORAGE TAPE, ONE ON STANDBY WITHOUT TAPE INSTALLED. - Cape Cod Air Station, Technical Facility-Scanner Building & Power Plant, Massachusetts Military Reservation, Sandwich, Barnstable County, MA

  1. An Evaluation of the Availability and Application of Microcomputer Software Programs for Use in Air Force Ground Transportation Squadrons

    DTIC Science & Technology

    1988-09-01

    software programs capable of being used on a microcomputer will be considered for analysis. No software intended for use on a miniframe or mainframe...Dial-A-Log consists of a program written in a computer language called L-10 that is run on a DEC-20 miniframe . The combination of the specific...proliferation of software dealing with microcomputers. Instead, they were geared more towards managing the use of miniframe or mainframe computer

  2. Information resources assessment of a healthcare integrated delivery system.

    PubMed Central

    Gadd, C. S.; Friedman, C. P.; Douglas, G.; Miller, D. J.

    1999-01-01

    While clinical healthcare systems may have lagged behind computer applications in other fields in the shift from mainframes to client-server architectures, the rapid deployment of newer applications is closing that gap. Organizations considering the transition to client-server must identify and position themselves to provide the resources necessary to implement and support the infrastructure requirements of client-server architectures and to manage the accelerated complexity at the desktop, including hardware and software deployment, training, and maintenance needs. This paper describes an information resources assessment of the recently aligned Pennsylvania regional Veterans Administration Stars and Stripes Health Network (VISN4), in anticipation of the shift from a predominantly mainframe to a client-server information systems architecture in its well-established VistA clinical information system. The multimethod assessment study is described here to demonstrate this approach and its value to regional healthcare networks undergoing organizational integration and/or significant information technology transformations. PMID:10566414

  3. Interfacing the VAX 11/780 Using Berkeley Unix 4.2.BSD and Ethernet Based Xerox Network Systems. Volume 1.

    DTIC Science & Technology

    1984-12-01

    Table-of-contents excerpts: 3Com Corporation (A-18); Ethernet Controller Support (A-19); Host Systems Support (A-20); Personal Computers Support...(A-23); VAX EtherSeries Software (A-23); Network Research Corporation (A-24); File Transfer Service (A-25); Virtual Terminal Service...Control office is planning to acquire a Digital Equipment Corporation VAX 11/780 mainframe computer with the Unix Berkeley 4.2BSD operating system. They

  4. The Design of an Interactive Computer Based System for the Training of Signal Corps Officers in Communications Network Management

    DTIC Science & Technology

    1985-08-01

    from the mainframe to the terminals is approximately 56k bits per second (21:3). Score: 8. Expandability. The number of terminals available to the...the systems controllers may access any files. For modem link-up, a callback system is to be implemented to prevent unauthorized off-post access (10:2

  5. A Study of Organizational Downsizing and Information Management Strategies.

    DTIC Science & Technology

    1992-09-01

    Projected $100,000 per month savings from using miniframes at plants. Networked; standardized "best practices"; 3 mainframes and applications; whatever worked...The projected $1.2 million savings realized from going from mainframes to miniframes is to avoid having to reduce the budget by that amount in other...network hardware. Turned in mainframe and replaced it with two miniframes; networked new minis with systems at plants. Networked mainframes and PCs. Acquired

  6. Practical applications of remote sensing technology

    NASA Technical Reports Server (NTRS)

    Whitmore, Roy A., Jr.

    1990-01-01

    Land managers increasingly are becoming dependent upon remote sensing and automated analysis techniques for information gathering and synthesis. Remote sensing and geographic information system (GIS) techniques provide quick and economical information gathering for large areas. The outputs of remote sensing classification and analysis are most effective when combined with a total natural resources data base within the capabilities of a computerized GIS. Some examples are presented of the successes, as well as the problems, in integrating remote sensing and geographic information systems. The need to exploit remotely sensed data and the potential that geographic information systems offer for managing and analyzing such data continues to grow. New microcomputers with vastly enlarged memory, multi-fold increases in operating speed, and storage capacity previously available only on mainframe computers are now a reality. Improved raster GIS software systems have been developed for these high performance microcomputers. Vector GIS systems previously reserved for mini and mainframe systems are available to operate on these enhanced microcomputers. One of the more exciting areas that is beginning to emerge is the integration of both raster and vector formats on a single computer screen. This technology will allow satellite imagery or digital aerial photography to be presented as a background to a vector display.

  7. Fiber Optics and Library Technology.

    ERIC Educational Resources Information Center

    Koenig, Michael

    1984-01-01

    This article examines fiber optic technology, explains some of the key terminology, and speculates about the way fiber optics will change our world. Applications of fiber optics to library systems in three major areas--linkage of a number of mainframe computers, local area networks, and main trunk communications--are highlighted. (EJS)

  8. Internal controls over computer-processed financial data at Boeing Petroleum Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-02-14

    The Strategic Petroleum Reserve (SPR) is responsible for purchasing and storing crude oil to mitigate the potential adverse impact of any future disruptions in crude oil imports. Boeing Petroleum Services, Inc. (BPS) operates the SPR under a US Department of Energy (DOE) management and operating contract. BPS receives support for various information systems and other information processing needs from a mainframe computer center. The objective of the audit was to determine if the internal controls implemented by BPS for computer systems were adequate to assure processing reliability.

  9. Interconnection requirements in avionic systems

    NASA Astrophysics Data System (ADS)

    Vergnolle, Claude; Houssay, Bruno

    1991-04-01

    The future aircraft generation will have thousands of smart electromagnetic sensors distributed all over. Each sensor is connected with fiber links to the mainframe computer in charge of real-time signal correlation. Such a computer must be compactly built and massively parallel: it needs the use of 3-D optical free-space interconnect between neighbouring boards and reconfigurable interconnects via a holographic backplane. The optical interconnect facilities will also be used to build a fault-tolerant computer through large redundancy.

  10. The Effects of Word Processing Software on User Satisfaction: An Empirical Study of Micro, Mini, and Mainframe Computers Using an Interactive Artificial Intelligence Expert-System.

    ERIC Educational Resources Information Center

    Rushinek, Avi; Rushinek, Sara

    1984-01-01

    Describes results of a system rating study in which users responded to WPS (word processing software) questions. Study objectives were data collection and evaluation of variables; statistical quantification of WPS's contribution (along with other variables) to user satisfaction; design of an expert system to evaluate WPS; and database update and…

  11. High performance network and channel-based storage

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.

    1991-01-01

    In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.

  12. Rotordynamics on the PC: Further Capabilities of ARDS

    NASA Technical Reports Server (NTRS)

    Fleming, David P.

    1997-01-01

    Rotordynamics codes for personal computers are now becoming available. One of the most capable codes is Analysis of RotorDynamic Systems (ARDS), which uses the component mode synthesis method to analyze a system of up to 5 rotating shafts. ARDS was originally written for a mainframe computer but has been successfully ported to a PC; its basic capabilities for steady-state and transient analysis were reported in an earlier paper. Additional functions have now been added to the PC version of ARDS. These functions include: 1) Estimation of the peak response following blade loss without resorting to a full transient analysis; 2) Calculation of response sensitivity to input parameters; 3) Formulation of optimum rotor and damper designs to place critical speeds in desirable ranges or minimize bearing loads; 4) Production of Poincaré plots so the presence of chaotic motion can be ascertained. ARDS produces printed and plotted output. The executable code uses the full array sizes of the mainframe version and fits on a high density floppy disc. Examples of all program capabilities are presented and discussed.

  13. Computing Services and Assured Computing

    DTIC Science & Technology

    2006-05-01

    ...fighters’ ability to execute the mission.” We run IT systems that provide medical care, pay the warfighters, manage maintenance...users; 1,400 applications; 18 facilities; 180 software vendors; 18,000+ copies of executive software products; virtually every type of mainframe and...

  14. NASTRAN migration to UNIX

    NASA Technical Reports Server (NTRS)

    Chan, Gordon C.; Turner, Horace Q.

    1990-01-01

    COSMIC/NASTRAN, as it is supported and maintained by COSMIC, runs on four mainframe computers - CDC, VAX, IBM and UNIVAC. COSMIC/NASTRAN on other computers, such as CRAY, AMDAHL, PRIME, CONVEX, etc., is available commercially from a number of third party organizations. All these computers, with their own one-of-a-kind operating systems, make NASTRAN machine dependent. The job control language (JCL), the file management, and the program execution procedure of these computers are vastly different, although 95 percent of NASTRAN source code was written in standard ANSI FORTRAN 77. The advantage of the UNIX operating system is that it has no machine boundary. UNIX is becoming widely used in many workstations, minis, super-PCs, and even some mainframe computers. NASTRAN for the UNIX operating system is definitely the way to go in the future, and makes NASTRAN available to a host of computers, big and small. Since 1985, many NASTRAN improvements and enhancements were made to conform to the ANSI FORTRAN 77 standards. A major UNIX migration effort was incorporated into the COSMIC NASTRAN 1990 release. As pioneering work for the UNIX environment, a version of COSMIC 89 NASTRAN was officially released in October 1989 for the DEC ULTRIX VAXstation 3100 (with VMS extensions). A COSMIC 90 NASTRAN version for the DEC ULTRIX DECstation 3100 (with RISC) is planned for April 1990 release. Both workstations are UNIX based computers. The COSMIC 90 NASTRAN will be made available on a TK50 tape for the DEC ULTRIX workstations. Previously in 1988, an 88 NASTRAN version was tested successfully on a SiliconGraphics workstation.

  15. PERSONAL COMPUTERS AND ENVIRONMENTAL ENGINEERING

    EPA Science Inventory

    This article discusses how personal computers can be applied to environmental engineering. After explaining some of the differences between mainframe and personal computers, we will review the development of personal computers and describe the areas of data management, interactive...

  16. Conversion of Mass Storage Hierarchy in an IBM Computer Network

    DTIC Science & Technology

    1989-03-01

    storage devices; GUIDE: IBM users' group for DOS operating systems; IBM: International Business Machines; IBM 370/145: CPU introduced in 1970; IBM 370/168: CPU...February 12, 1985, Information Systems Group, International Business Machines Corporation. "IBM 3090 Processor Complex" and "Mass Storage System," Mainframe Journal, pp. 15-26, 64-65, Dallas, Texas, September-October 1987. 3. International Business Machines Corporation, Introduction to IBM 3880 Storage

  17. System analysis for the Huntsville Operation Support Center distributed computer system

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.

    1986-01-01

    A simulation model of the NASA Huntsville Operational Support Center (HOSC) was developed. This simulation model emulates the HYPERchannel Local Area Network (LAN) that ties together the various computers of HOSC. The HOSC system is a large installation of mainframe computers such as the Perkin Elmer 3200 series and the DEC VAX series. A series of six simulation exercises of the HOSC model is described using data sets provided by NASA. An analytical treatment of the ETHERNET LAN and the video terminal (VT) distribution system is presented. An interface analysis of the smart terminal network model, which allows the data flow requirements due to VTs on the ETHERNET LAN to be estimated, is also presented.

  18. Access control and privacy in large distributed systems

    NASA Technical Reports Server (NTRS)

    Leiner, B. M.; Bishop, M.

    1986-01-01

    Large scale distributed systems consist of workstations, mainframe computers, supercomputers and other types of servers, all connected by a computer network. These systems are being used in a variety of applications including the support of collaborative scientific research. In such an environment, issues of access control and privacy arise. Access control is required for several reasons, including the protection of sensitive resources and cost control. Privacy is also required for similar reasons, including the protection of a researcher's proprietary results. A possible architecture for integrating available computer and communications security technologies into a system that meets these requirements is described. This architecture is meant as a starting point for discussion, rather than as the final answer.

  19. Transferring data from an oscilloscope to an IBM using an Apple II+

    NASA Technical Reports Server (NTRS)

    Miller, D. L.; Frenklach, M. Y.; Laughlin, P. J.; Clary, D. W.

    1984-01-01

    A set of PASCAL programs permitting the use of a laboratory microcomputer to facilitate and control the transfer of data from a digital oscilloscope (used with photomultipliers in experiments on soot formation in hydrocarbon combustion) to a mainframe computer and the subsequent mainframe processing of these data is presented. Advantages of this approach include the possibility of on-line computations, transmission flexibility, automatic transfer and selection, increased capacity and analysis options (such as smoothing, averaging, Fourier transformation, and high-quality plotting), and more rapid availability of results. The hardware and software are briefly characterized, the programs are discussed, and printouts of the listings are provided.

  20. An Implemented Strategy for Campus Connectivity and Cooperative Computing.

    ERIC Educational Resources Information Center

    Halaris, Antony S.; Sloan, Lynda W.

    1989-01-01

    ConnectPac, a software package developed at Iona College to allow a computer user to access all services from a single personal computer, is described. ConnectPac uses mainframe computing to support a campus computing network, integrating personal and centralized computing into a menu-driven user environment. (Author/MLW)

  1. Dynamic gas temperature measurements using a personal computer for data acquisition and reduction

    NASA Technical Reports Server (NTRS)

    Fralick, Gustave C.; Oberle, Lawrence G.; Greer, Lawrence C., III

    1993-01-01

    This report describes a dynamic gas temperature measurement system. It has frequency response to 1000 Hz, and can be used to measure temperatures in hot, high pressure, high velocity flows. A personal computer is used for collecting and processing data, which results in a much shorter wait for results than previously. The data collection process and the user interface are described in detail. The changes made in transporting the software from a mainframe to a personal computer are described in appendices, as is the overall theory of operation.

  2. Operationally Efficient Propulsion System Study (OEPSS) Data Book. Volume 8; Integrated Booster Propulsion Module (BPM) Engine Start Dynamics

    NASA Technical Reports Server (NTRS)

    Kemp, Victoria R.

    1992-01-01

    A fluid-dynamic, digital-transient computer model of an integrated, parallel propulsion system was developed for the CDC mainframe and the SUN workstation computers. Since all STME component designs were used for the integrated system, computer subroutines were written characterizing the performance and geometry of all the components used in the system, including the manifolds. Three transient analysis reports were completed. The first report evaluated the feasibility of integrated engine systems with regard to start and cutoff transient behavior. The second report evaluated turbopump-out and combined thrust chamber/turbopump-out conditions. The third report presented sensitivity study results for staggered gas generator spin start and for pump performance characteristics.

  3. A computer-aided design system geared toward conceptual design in a research environment. [for hypersonic vehicles

    NASA Technical Reports Server (NTRS)

    Stack, S. H.

    1981-01-01

    A computer-aided design system has recently been developed specifically for the small research group environment. The system is implemented on a Prime 400 minicomputer linked with a CDC 6600 computer. The goal was to assign the minicomputer specific tasks, such as data input and graphics, thereby reserving the large mainframe computer for time-consuming analysis codes. The basic structure of the design system consists of GEMPAK, a computer code that generates detailed configuration geometry from a minimum of input; interface programs that reformat GEMPAK geometry for input to the analysis codes; and utility programs that simplify computer access and data interpretation. The working system has had a large positive impact on the quantity and quality of research performed by the originating group. This paper describes the system, the major factors that contributed to its particular form, and presents examples of its application.

  4. Modified Laser and Thermos cell calculations on microcomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.; Huria, H.C.

    1987-01-01

    In the course of designing and operating nuclear reactors, many fuel pin cell calculations are required to obtain homogenized cell cross sections as a function of burnup. In the interest of convenience and cost, it would be very desirable to be able to make such calculations on microcomputers. In addition, such a microcomputer code would be very helpful for educational course work in reactor computations. To establish the feasibility of making detailed cell calculations on a microcomputer, a mainframe cell code was compiled and run on a microcomputer. The computer code Laser, originally written in Fortran IV for the IBM-7090 class of mainframe computers, is a cylindrical, one-dimensional, multigroup lattice cell program that includes burnup. It is based on the MUFT code for epithermal and fast group calculations, and Thermos for the thermal calculations. There are 50 fast and epithermal groups and 35 thermal groups. Resonances are calculated assuming a homogeneous system and then corrected for self-shielding, Dancoff, and Doppler by self-shielding factors. The Laser code was converted to run on a microcomputer. In addition, the Thermos portion of Laser was extracted and compiled separately to have available a stand-alone thermal code.

  5. Organizational Strategies for End-User Computing Support.

    ERIC Educational Resources Information Center

    Blackmun, Robert R.; And Others

    1988-01-01

    Effective support for end users of computers has been an important issue in higher education from the first applications of general purpose mainframe computers through minicomputers, microcomputers, and supercomputers. The development of end user support is reviewed and organizational models are examined. (Author/MLW)

  6. Supervisors with Micros: Trends and Training Needs.

    ERIC Educational Resources Information Center

    Bryan, Leslie A., Jr.

    1986-01-01

    Results of a study conducted by Purdue University concerning the use of computers by supervisors in manufacturing firms are presented and discussed. Examines access to computers, minicomputers versus mainframes, training time on computers, replacement of staff, creation of personnel problems, and training methods. (CT)

  7. Automated mainframe data collection in a network environment

    NASA Technical Reports Server (NTRS)

    Gross, David L.

    1994-01-01

    The progress and direction of the computer industry have resulted in widespread use of dissimilar and incompatible mainframe data systems. Data collection from these multiple systems is a labor intensive task. In the past, data collection had been restricted to the efforts of personnel specially trained on each system. Information is one of the most important resources an organization has. Any improvement in an organization's ability to access and manage that information provides a competitive advantage. This problem of data collection is compounded at NASA sites by multi-center and contractor operations. The Centralized Automated Data Retrieval System (CADRS) is designed to provide a common interface that would permit data access, query, and retrieval from multiple contractor and NASA systems. The methods developed for CADRS have a strong commercial potential in that they would be applicable for any industry that needs inter-department, inter-company, or inter-agency data communications. The widespread use of multi-system data networks, that combine older legacy systems with newer decentralized networks, has made data retrieval a critical problem for information dependent industries. Implementing the technology discussed in this paper would reduce operational expense and improve data collection on these composite data systems.

  8. Networked Microcomputers--The Next Generation in College Computing.

    ERIC Educational Resources Information Center

    Harris, Albert L.

    The evolution of computer hardware for college computing has mirrored the industry's growth. When computers were introduced into the educational environment, they had limited capacity and served one user at a time. Then came large mainframes with many terminals sharing the resource. Next, the use of computers in office automation emerged. As…

  9. LaRC local area networks to support distributed computing

    NASA Technical Reports Server (NTRS)

    Riddle, E. P.

    1984-01-01

    The Langley Research Center's (LaRC) Local Area Network (LAN) effort is discussed. LaRC initiated the development of a LAN to support a growing distributed computing environment at the Center. The purpose of the network is to provide an improved capability (over interactive and RJE terminal access) for sharing multivendor computer resources. Specifically, the network will provide a data highway for the transfer of files between mainframe computers, minicomputers, workstations, and personal computers. An important influence on the overall network design was the vital need of LaRC researchers to efficiently utilize the large CDC mainframe computers in the central scientific computing facility. Although there was a steady migration from a centralized to a distributed computing environment at LaRC in recent years, the work load on the central resources increased. Major emphasis in the network design was on communication with the central resources within the distributed environment. The network to be implemented will allow researchers to utilize the central resources, distributed minicomputers, workstations, and personal computers to obtain the proper level of computing power to efficiently perform their jobs.

  10. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    NASA Technical Reports Server (NTRS)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.

  11. AUTOCASK (AUTOmatic Generation of 3-D CASK models). A microcomputer based system for shipping cask design review analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard, M.A.; Sommer, S.C.

    1995-04-01

    AUTOCASK (AUTOmatic Generation of 3-D CASK models) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for the structural analysis of shipping casks for radioactive material. Model specification is performed on the microcomputer, and the analyses are performed on an engineering workstation or mainframe computer. AUTOCASK is based on 80386/80486 compatible microcomputers. The system is composed of a series of menus, input programs, display programs, a mesh generation program, and archive programs. All data is entered through fill-in-the-blank input screens that contain descriptive data requests.

  12. Working with Computers: Computer Orientation for Foreign Students.

    ERIC Educational Resources Information Center

    Barlow, Michael

    Designed as a resource for foreign students, this book includes instructions not only on how to use computers, but also on how to use them to complete academic work more efficiently. Part I introduces the basic operations of mainframes and microcomputers and the major areas of computing, i.e., file management, editing, communications, databases,…

  13. Computers, Remote Teleprocessing and Mass Communication.

    ERIC Educational Resources Information Center

    Cropley, A. J.

    Recent developments in computer technology are reducing the limitations of computers as mass communication devices. The growth of remote teleprocessing is one important step. Computers can now interact with users via terminals which may be hundreds of miles from the actual mainframe machine. Many terminals can be in operation at once, so that many…

  14. A note on the computation of antenna-blocking shadows

    NASA Technical Reports Server (NTRS)

    Levy, R.

    1993-01-01

    A simple and readily applied method is provided to compute the shadow on the main reflector of a Cassegrain antenna, when cast by the subreflector and the subreflector supports. The method entails some convenient minor approximations that will produce results similar to results obtained with a lengthier, mainframe computer program.

  15. Computer Yearbook 72.

    ERIC Educational Resources Information Center

    1972

    Recent and expected developments in the computer industry are discussed in this 628-page yearbook, successor to "The Punched Card Annual." The first section of the report is an overview of current computer hardware and software and includes articles about future applications of mainframes, an analysis of the software industry, and a summary of the…

  16. CICS Region Virtualization for Cost Effective Application Development

    ERIC Educational Resources Information Center

    Khan, Kamal Waris

    2012-01-01

    Mainframe is used for hosting large commercial databases, transaction servers and applications that require a greater degree of reliability, scalability and security. Customer Information Control System (CICS) is a mainframe software framework for implementing transaction services. It is designed for rapid, high-volume online processing. In order…

  17. The DART dispersion analysis research tool: A mechanistic model for predicting fission-product-induced swelling of aluminum dispersion fuels. User`s guide for mainframe, workstation, and personal computer applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rest, J.

    1995-08-01

    This report describes the primary physical models that form the basis of the DART mechanistic computer model for calculating fission-product-induced swelling of aluminum dispersion fuels; the calculated results are compared with test data. In addition, DART calculates irradiation-induced changes in the thermal conductivity of the dispersion fuel, as well as fuel restructuring due to aluminum fuel reaction, amorphization, and recrystallization. Input instructions for execution on mainframe, workstation, and personal computers are provided, as is a description of DART output. The theory of fission gas behavior and its effect on fuel swelling is discussed. The behavior of these fission products in both crystalline and amorphous fuel and in the presence of irradiation-induced recrystallization and crystalline-to-amorphous-phase change phenomena is presented, as are models for these irradiation-induced processes.

  18. Closely Spaced Independent Parallel Runway Simulation.

    DTIC Science & Technology

    1984-10-01

    facility consists of the Central Computer Facility, the Controller Laboratory, and the Simulator Pilot Complex. CENTRAL COMPUTER FACILITY. The Central... Computer Facility consists of a group of mainframes, minicomputers, and associated peripherals which host the operational and data acquisition...in the Controller Laboratory and convert their verbal directives into a keyboard entry which is transmitted to the Central Computer Complex, where

  19. Using Microcomputers for Communication. Summary Report: Sociology 110 Distance Education Pilot Project.

    ERIC Educational Resources Information Center

    Misanchuk, Earl R.

    A pilot project involved off-campus (distance education) students creating their assignments on Macintosh computers and "mailing" them electronically to a campus mainframe computer. The goal of the project was to determine what is necessary to implement and to evaluate the potential of computer communications for university-level…

  20. Using Microcomputers Simulations in the Classroom: Examples from Undergraduate and Faculty Computer Literacy Courses.

    ERIC Educational Resources Information Center

    Hart, Jeffrey A.

    1985-01-01

    Presents a discussion of how computer simulations are used in two undergraduate social science courses and a faculty computer literacy course on simulations and artificial intelligence. Includes a list of 60 simulations for use on mainframes and microcomputers. Entries include type of hardware required, publisher's address, and cost. Sample…

  1. Technology in the College Classroom.

    ERIC Educational Resources Information Center

    Earl, Archie W., Sr.

    An analysis was made of the use of computing tools at the graduate and undergraduate levels in colleges and universities in the United States. Topics ranged from hand-held calculators to the use of main-frame computers and the assessment of the SPSSX, SPSS, LINDO, and MINITAB computer software packages. Hand-held calculators are being increasingly…

  2. Microcomputers: Communication Software. Evaluation Guides. Guide Number 13.

    ERIC Educational Resources Information Center

    Gray, Peter J.

    This guide discusses four types of microcomputer-based communication programs that could prove useful to evaluators: (1) the direct communication of information generated by one computer to another computer; (2) using the microcomputer as a terminal to a mainframe computer to input, direct the analysis of, and/or output data using a statistical…

  3. An Automated Approach to Departmental Grant Management.

    ERIC Educational Resources Information Center

    Kressly, Gaby; Kanov, Arnold L.

    1986-01-01

    Installation of a small computer and the use of specially designed programs has proven a cost-effective solution to the data processing needs of a university medical center's ophthalmology department, providing immediate access to grants accounting information and avoiding dependence on the institution's mainframe computer. (MSE)

  4. Advanced manned space flight simulation and training: An investigation of simulation host computer system concepts

    NASA Technical Reports Server (NTRS)

    Montag, Bruce C.; Bishop, Alfred M.; Redfield, Joe B.

    1989-01-01

    The findings of a preliminary investigation by Southwest Research Institute (SwRI) into simulation host computer concepts are presented. The investigation is designed to aid NASA in evaluating simulation technologies for use in spaceflight training. The focus of the investigation is on the next generation of space simulation systems that will be utilized in training personnel for Space Station Freedom operations. SwRI concludes that NASA should pursue a distributed simulation host computer system architecture for the Space Station Training Facility (SSTF) rather than a centralized mainframe based arrangement. A distributed system offers many advantages and is seen by SwRI as the only architecture that will allow NASA to achieve established functional goals and operational objectives over the life of the Space Station Freedom program. Several distributed, parallel computing systems are available today that offer real-time capabilities for time critical, man-in-the-loop simulation. These systems are flexible in terms of connectivity and configurability, and are easily scaled to meet increasing demands for more computing power.

  5. Construction of In-house Databases in a Corporation

    NASA Astrophysics Data System (ADS)

    Sano, Hikomaro

    This report outlines the “Repoir” (Report information retrieval) system of Toyota Central R & D Laboratories, Inc. as an example of an in-house information retrieval system. The online system was designed to process in-house technical reports with the aid of a mainframe computer and has been in operation since 1979. Its features are multiple use of the information for technical and managerial purposes and simplicity in indexing and data input. The total number of descriptors, specially selected for the system, was minimized for ease of indexing. The report also describes the input items, processing flow and typical outputs in kanji letters.

  6. PLATO Based Computer Assisted Instruction: An Exploration.

    ERIC Educational Resources Information Center

    Wise, Richard L.

    This study focuses on student response to computer-assisted instruction (CAI) after it was introduced into a college level physical geography course, "Introduction to Weather and Climate." PLATO, a University of Illinois mainframe network developed in the 1960s, was selected for its user friendliness, its large supply of courseware, its…

  7. Economic Comparison of Processes Using Spreadsheet Programs

    NASA Technical Reports Server (NTRS)

    Ferrall, J. F.; Pappano, A. W.; Jennings, C. N.

    1986-01-01

    Inexpensive approach aids plant-design decisions. Commercially available electronic spreadsheet programs aid economic comparison of different processes for producing particular end products. Facilitates plant-design decisions without requiring large expenditures for powerful mainframe computers.

  8. An Assessment of Security Vulnerabilities Comprehension of Cloud Computing Environments: A Quantitative Study Using the Unified Theory of Acceptance and Use

    ERIC Educational Resources Information Center

    Venkatesh, Vijay P.

    2013-01-01

    The current computing landscape owes its roots to the birth of hardware and software technologies from the 1940s and 1950s. Since then, the advent of mainframes, miniaturized computing, and internetworking has given rise to the now prevalent cloud computing era. In the past few months just after 2010, cloud computing adoption has picked up pace…

  9. Fast methods to numerically integrate the Reynolds equation for gas fluid films

    NASA Technical Reports Server (NTRS)

    Dimofte, Florin

    1992-01-01

    The alternating direction implicit (ADI) method is adopted, modified, and applied to the Reynolds equation for thin, gas fluid films. An efficient code is developed to predict both the steady-state and dynamic performance of an aerodynamic journal bearing. An alternative approach is shown for hybrid journal gas bearings by using Liebmann's iterative solution (LIS) for elliptic partial differential equations. The results are compared with known design criteria from experimental data. The developed methods show good accuracy and very short computer running time in comparison with methods based on inverting a matrix. The computer codes need a small amount of memory and can be run on either personal computers or mainframe systems.
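    The paper does not reproduce its solver. As a rough, hedged illustration of the alternating direction implicit idea only, the sketch below applies one Peaceman-Rachford ADI step to a model two-dimensional diffusion equation (u_t = u_xx + u_yy) on a uniform grid with fixed boundary values, using a scalar Thomas (tridiagonal) solver. The model equation, grid handling, and all names are assumptions and are far simpler than the gas-film Reynolds equation treated in the paper.

        import numpy as np

        def thomas(a, b, c, d):
            """Solve a tridiagonal system (a: sub-diagonal, b: diagonal, c: super-diagonal, d: RHS)."""
            n = len(d)
            cp = np.empty(n); dp = np.empty(n)
            cp[0] = c[0] / b[0]
            dp[0] = d[0] / b[0]
            for i in range(1, n):
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        def adi_step(u, r):
            """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy, with r = dt / h**2.
            Boundary rows and columns of the 2-D array u hold fixed (Dirichlet) values."""
            ny, nx = u.shape
            half = u.copy()
            # Sweep 1: implicit in x (row by row), explicit in y.
            for j in range(1, ny - 1):
                n = nx - 2
                a = np.full(n, -r / 2); b = np.full(n, 1 + r); c = np.full(n, -r / 2)
                d = (r / 2) * u[j - 1, 1:-1] + (1 - r) * u[j, 1:-1] + (r / 2) * u[j + 1, 1:-1]
                d[0] += (r / 2) * u[j, 0]        # fold known boundary values into the RHS
                d[-1] += (r / 2) * u[j, -1]
                half[j, 1:-1] = thomas(a, b, c, d)
            new = half.copy()
            # Sweep 2: implicit in y (column by column), explicit in x.
            for i in range(1, nx - 1):
                n = ny - 2
                a = np.full(n, -r / 2); b = np.full(n, 1 + r); c = np.full(n, -r / 2)
                d = (r / 2) * half[1:-1, i - 1] + (1 - r) * half[1:-1, i] + (r / 2) * half[1:-1, i + 1]
                d[0] += (r / 2) * u[0, i]
                d[-1] += (r / 2) * u[-1, i]
                new[1:-1, i] = thomas(a, b, c, d)
            return new

    Repeated calls advance the solution in time; when successive updates stop changing, a steady state has been reached. Because each sweep solves only tridiagonal systems, no full matrix is ever inverted, which is the source of the short running times noted in the abstract.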

  10. Cloud Computing and the Power to Choose

    ERIC Educational Resources Information Center

    Bristow, Rob; Dodds, Ted; Northam, Richard; Plugge, Leo

    2010-01-01

    Some of the most significant changes in information technology are those that have given the individual user greater power to choose. The first of these changes was the development of the personal computer. The PC liberated the individual user from the limitations of the mainframe and minicomputers and from the rules and regulations of centralized…

  11. The Value Proposition in Institutional Repositories

    ERIC Educational Resources Information Center

    Blythe, Erv; Chachra, Vinod

    2005-01-01

    In the education and research arena of the late 1970s and early 1980s, a struggle developed between those who advocated centralized, mainframe-based computing and those who advocated distributed computing. Ultimately, the debate reduced to whether economies of scale or economies of scope are more important to the effectiveness and efficiency of…

  12. How to Get from Cupertino to Boca Raton.

    ERIC Educational Resources Information Center

    Troxel, Duane K.; Chiavacci, Jim

    1985-01-01

    Describes seven methods to transfer data from Apple computer disks to IBM computer disks and vice versa: print out data and retype; use a commercial software package, optical-character reader, homemade cable, or modem to pass or transfer data directly; pay commercial data-transfer service; or store files on mainframe and download. (MBR)

  13. International Futures (IFs): A Global Issues Simulation for Teaching and Research.

    ERIC Educational Resources Information Center

    Hughes, Barry B.

    This paper describes the International Futures (IFs) computer assisted simulation game for use with undergraduates. Written in Standard Fortran IV, the model currently runs on mainframe or mini computers, but has not been adapted for micros. It has been successfully installed on Harris, Burroughs, Telefunken, CDC, Univac, IBM, and Prime machines.…

  14. Organizational Communication: Theoretical Implications of Communication Technology Applications.

    ERIC Educational Resources Information Center

    Danowski, James A.

    Communication technology (CT), which involves the use of computers in private and group communication, has had a major impact on theory and research in organizational communication over the past 30 years. From the 1950s to the early 1970s, mainframe computers were seen as managerial tools in creating more centralized organizational structures.…

  15. The "big bang" implementation: not for the faint of heart.

    PubMed

    Anderson, Linda K; Stafford, Cynthia J

    2002-01-01

    Replacing a hospital's obsolete mainframe computer system with a modern integrated clinical and administrative information system presents multiple challenges. When the new system is activated in one weekend, in "big bang" fashion, the challenges are magnified. Careful planning is essential to ensure that all hospital staff are fully prepared for this transition, knowing this conversion will involve system downtime, procedural changes, and the resulting stress that naturally accompanies change. Implementation concerns include staff preparation and training, process changes, continuity of patient care, and technical and administrative support. This article outlines how the University of Missouri Health Care addressed these operational concerns during this dramatic information system conversion.

  16. Integrated computer-aided design using minicomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.

    1980-01-01

    Computer-Aided Design/Computer-Aided Manufacturing (CAD/CAM) software, which is highly interactive, has been implemented on minicomputers at the NASA Langley Research Center. CAD/CAM software integrates many formerly fragmented programs and procedures into one cohesive system; it also includes finite element modeling and analysis, and has been interfaced via a computer network to a relational data base management system and offline plotting devices on mainframe computers. The CAD/CAM software system requires interactive graphics terminals operating at a minimum of 4800 bits/sec transfer rate to a computer. The system is portable and introduces 'interactive graphics', which permits the creation and modification of models interactively. The CAD/CAM system has already produced designs for a large area space platform, a national transonic facility fan blade, and a laminar flow control wind tunnel model. Besides the design/drafting and finite element analysis capability, CAD/CAM provides options to produce an automatic program tooling code to drive a numerically controlled (N/C) machine. Reductions in time for design, engineering, drawing, finite element modeling, and N/C machining will benefit productivity through reduced costs, fewer errors, and a wider range of configurations.

  17. System Documentation for the U.S. Army Ambulatory Care Data Base (ACDB) Study: Mainframe, Personal Computer and Optical Scanner File Structure

    DTIC Science & Technology

    1988-11-01

    Leesburg Pike, Falls Church, VA 22041-3203 (1); HQ HSC (HSCL-A), Fort Sam Houston, TX 78234-6000 (1); Dir, The Army Library, ATTN: ANR-AL-RS (Army Studies) ... [the remainder of the excerpt consists of garbled BASIC source fragments defining clinic-code file structures]

  18. Evaluation of a patient centered e-nursing and caring system.

    PubMed

    Tsai, Lai-Yin; Shan, Huang; Mei-Bei, Lin

    2006-01-01

    This study aims to develop an electronic nursing and caring system to manage patients' information and provide patients with safe and efficient services. By transmitting data among wireless cards, an optical network, and a mainframe computer, nursing care will be delivered more systematically and patient-safety-centered care will be delivered more efficiently and effectively. With this system, manual record-keeping time was cut down, and relevant nursing and caring information was linked up. With the development of an electronic nursing system, nurses were able to make the best use of Internet resources, integrate information management systematically, and improve the quality of nursing and caring service.

  19. STATLIB: NSWC Library of Statistical Programs and Subroutines

    DTIC Science & Technology

    1989-08-01

    Uncorrelated Weighted Polynomial Regression 41; WEPORC Correlated Weighted Polynomial Regression 45; MROP Multiple Regression Using Orthogonal Polynomials ...could not and should not be converted to the new general purpose computer (the current CDC 995). Some were designed to compute...personal computers. They are referred to as SPSSPC+, BMDPC, and SASPC and in general are less comprehensive than their mainframe counterparts. The basic

  20. Extending the Human Mind: Computers in Education. Program and Proceedings of the Annual Summer Computer Conference (6th, Eugene, Oregon, August 6-9, 1987).

    ERIC Educational Resources Information Center

    Oregon Univ., Eugene. Center for Advanced Technology in Education.

    Presented in this program and proceedings are the following 27 papers describing a variety of educational uses of computers: "Learner Based Tools for Special Populations" (Barbara Allen); "Micros and Mainframes: Practical Administrative Applications at the Building Level" (Jeannine Bertrand and Eric Schiff); "Logo and Logowriter in the Curriculum"…

  1. HNET - A National Computerized Health Network

    PubMed Central

    Casey, Mark; Hamilton, Richard

    1988-01-01

    The HNET system demonstrated conceptually and technically a national text (and limited bit-mapped graphics) computer network for use between innovative members of the health care industry. The HNET configuration of a leased high-speed national packet switching network connecting any number of mainframe, mini, and micro computers was unique in its relatively low capital costs and freedom from obsolescence. With multiple simultaneous conferences, databases, bulletin boards, calendars, and advanced electronic mail and surveys, it is marketable to innovative hospitals, clinics, physicians, health care associations and societies, nurses, multisite research projects, libraries, etc. Electronic publishing and education capabilities along with integrated voice and video transmission are identified as future enhancements.

  2. Computer ray tracing speeds.

    PubMed

    Robb, P; Pawlowski, B

    1990-05-01

    The results of measuring the ray trace speed and compilation speed of thirty-nine computers in fifty-seven configurations, ranging from personal computers to supercomputers, are described. A correlation of ray trace speed has been made with the LINPACK benchmark, which allows the ray trace speed to be estimated using LINPACK performance data. The results indicate that the latest generation of workstations, using CPUs based on RISC (Reduced Instruction Set Computer) technology, are as fast or faster than mainframe computers in compute-bound situations.
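    As a hedged sketch of the estimation idea described (fitting a correlation between LINPACK ratings and measured ray-trace speeds, then applying it to an unmeasured machine), one might do something like the following. The power-law form and the function name are assumptions for illustration; no benchmark numbers from the paper are embedded.

        import numpy as np

        def fit_linpack_correlation(linpack_mflops, rays_per_second):
            """Fit rays/s ~ A * MFLOPS**k in log-log space from measured (MFLOPS, rays/s)
            pairs, and return an estimator for machines with a known LINPACK rating."""
            k, log_a = np.polyfit(np.log(linpack_mflops), np.log(rays_per_second), 1)
            return lambda mflops: np.exp(log_a) * np.asarray(mflops, dtype=float) ** k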

  3. APPLEPIPS /Apple Personal Image Processing System/ - An interactive digital image processing system for the Apple II microcomputer

    NASA Technical Reports Server (NTRS)

    Masuoka, E.; Rose, J.; Quattromani, M.

    1981-01-01

    Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be obtained. The systems support Basic, Fortran, Pascal, and assembler language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.

  4. Ps and Cs of PCs.

    ERIC Educational Resources Information Center

    Raitt, David I.

    1987-01-01

    Considers pros and cons of using personal computers or microcomputers in a library and information setting. Highlights include discussions about the physical environment, security, effects on users, costs in terms of time and money, micro-mainframe links, and standardization considerations. (Author/LRW)

  5. An Introduction To PC-TRIM.

    Treesearch

    John R. Mills

    1989-01-01

    The timber resource inventory model (TRIM) has been adapted to run on personal computers. The personal computer version of TRIM (PC-TRIM) is more widely used than its mainframe parent. Errors that existed in previous versions of TRIM have been corrected. Information is presented to help users with program input and output management in the DOS environment, to...

  6. Designing Programs for Multiple Configurations: "You Mean Everyone Doesn't Have a Pentium or Better!"

    ERIC Educational Resources Information Center

    Conkright, Thomas D.; Joliat, Judy

    1996-01-01

    Discusses the challenges, solutions, and compromises involved in creating computer-delivered training courseware for Apollo Travel Services, a company whose 50,000 agents must access a mainframe from many different computing configurations. Initial difficulties came in trying to manage random access memory and quicken response time, but the future…

  7. 24 CFR 15.110 - What fees will HUD charge?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... duplicating machinery. The computer run time includes the cost of operating a central processing unit for that... Applies. (6) Computer run time (includes only mainframe search time not printing) The direct cost of... estimated fee is more than $250.00 or you have a history of failing to pay FOIA fees to HUD in a timely...

  8. Supercomputing resources empowering superstack with interactive and integrated systems

    NASA Astrophysics Data System (ADS)

    Rückemann, Claus-Peter

    2012-09-01

    This paper presents results from the development and implementation of Superstack algorithms to be used dynamically with integrated systems and supercomputing resources. Processing of geophysical data (geoprocessing) is an essential part of the analysis of geoscientific data. The theory of Superstack algorithms and their practical application on modern computing architectures were inspired by developments in seismic data processing that began on mainframes and have led in recent years to high-end scientific computing applications. Several stacking algorithms are known, but when seismic data have a low signal-to-noise ratio, iterative algorithms like the Superstack can support analysis and interpretation. The new Superstack algorithms are used, together with wave theory and optical phenomena, on high-performance computing resources for very large data sets and for sophisticated application scenarios in geosciences and archaeology.
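    The article does not list the Superstack algorithm itself. The sketch below is a generic, hedged example of iterative, coherence-weighted stacking, the broad family the abstract refers to: traces that agree with the current stack estimate are up-weighted on each pass, which suppresses incoherent noise when the signal-to-noise ratio is low. The weighting rule, names, and iteration count are illustrative assumptions, not the authors' Superstack.

        import numpy as np

        def iterative_weighted_stack(traces, n_iter=5, eps=1e-12):
            """traces: 2-D array (n_traces, n_samples). Start from the plain mean stack,
            then re-weight each trace by its normalized correlation with the current
            stack estimate and repeat."""
            traces = np.asarray(traces, dtype=float)
            stack = traces.mean(axis=0)
            for _ in range(n_iter):
                num = traces @ stack
                den = np.linalg.norm(traces, axis=1) * np.linalg.norm(stack) + eps
                w = np.clip(num / den, 0.0, None)   # suppress anti-correlated traces
                if w.sum() < eps:
                    break                           # nothing coherent; keep the current stack
                stack = (w[:, None] * traces).sum(axis=0) / w.sum()
            return stack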

  9. A Big RISC

    DTIC Science & Technology

    1983-07-18

    architecture. The design, performance, and cost of BRISC are presented. Performance is shown to be better than high-end mainframes such as the IBM 3081 and Amdahl 470V/8 on integer benchmarks written in C, Pascal and LISP. The cost, conservatively estimated to be $132,400, is about the same as a high-end minicomputer such as the VAX-11/780. BRISC has a CPU cycle time of 46 ns, providing a RISC I instruction execution rate of greater than 15 MIPS. BRISC is designed with a Structured Computer Aided Logic Design System (SCALD) by Valid Logic Systems. An evaluation of the utility of

  10. Solving subsurface structural problems using a computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witte, D.M.

    1987-02-01

    Until recently, the solution of subsurface structural problems has required a combination of graphical construction, trigonometry, time, and patience. Recent advances in software available for both mainframe and microcomputers now reduce the time and potential error of these calculations by an order of magnitude. Software for analysis of deviated wells, three point problems, apparent dip, apparent thickness, and the intersection of two planes, as well as the plotting and interpretation of these data can be used to allow timely and accurate exploration or operational decisions. The available computer software provides a set of utilities, or tools, rather than a comprehensive, intelligent system. The burden for selection of appropriate techniques, computation methods, and interpretations still lies with the explorationist user.
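    As a hedged illustration of two of the utilities mentioned, the three-point problem and apparent dip, a minimal sketch follows. The coordinate convention (east, north, elevation), the right-hand-rule strike, and all names are assumptions for illustration, not the commercial software described in the article.

        import numpy as np

        def strike_and_dip(p1, p2, p3):
            """Three-point problem: given three (east, north, elevation) points on a planar
            horizon, return (strike azimuth, dip angle, dip azimuth) in degrees, with
            azimuths measured clockwise from north and strike by the right-hand rule."""
            p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
            n = np.cross(p2 - p1, p3 - p1)          # normal to the plane
            if n[2] < 0:
                n = -n                              # keep the normal pointing upward
            n_e, n_n, n_u = n
            dip = np.degrees(np.arctan2(np.hypot(n_e, n_n), n_u))
            dip_azimuth = np.degrees(np.arctan2(n_e, n_n)) % 360.0   # down-dip direction
            strike = (dip_azimuth - 90.0) % 360.0
            return strike, dip, dip_azimuth

        def apparent_dip(true_dip_deg, strike_deg, section_azimuth_deg):
            """Apparent dip in a vertical section: tan(alpha) = tan(delta) * |sin(beta)|,
            where beta is the angle between the section line and the strike."""
            beta = np.radians(section_azimuth_deg - strike_deg)
            return np.degrees(np.arctan(np.tan(np.radians(true_dip_deg)) * abs(np.sin(beta))))

    For example, a plane striking due north and dipping 30 degrees east shows its full 30-degree dip in an east-west section and zero apparent dip in a section taken along strike.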

  11. Instructional image processing on a university mainframe: The Kansas system

    NASA Technical Reports Server (NTRS)

    Williams, T. H. L.; Siebert, J.; Gunn, C.

    1981-01-01

    An interactive digital image processing program package was developed that runs on the University of Kansas central computer, a Honeywell Level 66 multi-processor system. The modular form of the package allows easy and rapid upgrades and extensions of the system; it is used in remote sensing courses in the Department of Geography, in regional five-day short courses for academics and professionals, and also in remote sensing projects and research. The package comprises three self-contained modules of processing functions: subimage extraction and rectification; image enhancement, preprocessing, and data reduction; and classification. Its use in a typical course setting is described. Availability and costs are considered.

  12. Lessons learned in transitioning to an open systems environment

    NASA Technical Reports Server (NTRS)

    Boland, Dillard E.; Green, David S.; Steger, Warren L.

    1994-01-01

    Software development organizations, both commercial and governmental, are undergoing rapid change spurred by developments in the computing industry. To stay competitive, these organizations must adopt new technologies, skills, and practices quickly. Yet even for an organization with a well-developed set of software engineering models and processes, transitioning to a new technology can be expensive and risky. Current industry trends are leading away from traditional mainframe environments and toward the workstation-based, open systems world. This paper presents the experiences of software engineers on three recent projects that pioneered open systems development for NASA's Flight Dynamics Division of the Goddard Space Flight Center (GSFC).

  13. Computing Legacy Software Behavior to Understand Functionality and Security Properties: An IBM/370 Demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linger, Richard C; Pleszkoch, Mark G; Prowell, Stacy J

    Organizations maintaining mainframe legacy software can benefit from code modernization and incorporation of security capabilities to address the current threat environment. Oak Ridge National Laboratory is developing the Hyperion system to compute the behavior of software as a means to gain understanding of software functionality and security properties. Computation of functionality is critical to revealing security attributes, which are in fact specialized functional behaviors of software. Oak Ridge is collaborating with MITRE Corporation to conduct a demonstration project to compute behavior of legacy IBM Assembly Language code for a federal agency. The ultimate goal is to understand functionality and security vulnerabilities as a basis for code modernization. This paper reports on the first phase, to define functional semantics for IBM Assembly instructions and conduct behavior computation experiments.

  14. The microcomputer workstation - An alternate hardware architecture for remotely sensed image analysis

    NASA Technical Reports Server (NTRS)

    Erickson, W. K.; Hofman, L. B.; Donovan, W. E.

    1984-01-01

    Difficulties regarding the digital image analysis of remotely sensed imagery can arise in connection with the extensive calculations required. In the past, an expensive large to medium mainframe computer system was needed for performing these calculations. For image-processing applications smaller minicomputer-based systems are now used by many organizations. The costs for such systems are still in the range from $100K to $300K. Recently, as a result of new developments, the use of low-cost microcomputers for image processing and display systems appeared to have become feasible. These developments are related to the advent of the 16-bit microprocessor and the concept of the microcomputer workstation. Earlier 8-bit microcomputer-based image processing systems are briefly examined, and a computer workstation architecture is discussed. Attention is given to a microcomputer workstation developed by Stanford University, and the design and implementation of a workstation network.

  15. Close to real life. [solving for transonic flow about lifting airfoils using supercomputers

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Bailey, F. Ron

    1988-01-01

    NASA's Numerical Aerodynamic Simulation (NAS) facility for CFD modeling of highly complex aerodynamic flows employs as its basic hardware two Cray-2s, an ETA-10 Model Q, an Amdahl 5880 mainframe computer that furnishes both support processing and access to 300 Gbytes of disk storage, several minicomputers and superminicomputers, and a Thinking Machines 16,000-device 'connection machine' processor. NAS, which was the first supercomputer facility to standardize operating-system and communication software on all processors, has done important Space Shuttle aerodynamics simulations and will be critical to the configurational refinement of the National Aerospace Plane and its integrated powerplant, which will involve complex, high temperature reactive gasdynamic computations.

  16. Testing and validating the CERES-wheat (Crop Estimation through Resource and Environment Synthesis-wheat) model in diverse environments

    NASA Technical Reports Server (NTRS)

    Otter-Nacke, S.; Godwin, D. C.; Ritchie, J. T.

    1986-01-01

    CERES-Wheat is a computer simulation model of the growth, development, and yield of spring and winter wheat. It was designed to be used in any location throughout the world where wheat can be grown. The model is written in Fortran 77, operates on a daily time step, and runs on a range of computer systems from microcomputers to mainframes. Two versions of the model were developed: one, CERES-Wheat, assumes nitrogen to be nonlimiting; in the other, CERES-Wheat-N, the effects of nitrogen deficiency are simulated. The report provides the comparisons of simulations and measurements of about 350 wheat data sets collected from throughout the world.

  17. Recipe for Regional Development.

    ERIC Educational Resources Information Center

    Baldwin, Fred D.

    1994-01-01

    The Ceramics Corridor has created new jobs in New York's Appalachian region by fostering ceramics research and product development by small private companies. Corridor business incubators offer tenants low overhead costs, fiber-optic connections to Alfred University's mainframe computer, rental of lab space, and use of equipment small companies…

  18. Hardware Development for a Mobile Educational Robot.

    ERIC Educational Resources Information Center

    Mannaa, A. M.; And Others

    1987-01-01

    Describes the development of a robot whose mainframe is essentially transparent and walks on four legs. Discusses various gaits in four-legged motion. Reports on initial trials of a full-sized model without computer-control, including smoothness of motion and actual obstacle crossing features. (CW)

  19. Providing Information Services in Videotex.

    ERIC Educational Resources Information Center

    Harris, Gary L.

    1986-01-01

    The provision of information through videotex in West Germany is described. Information programs and services of the Gesellschaft fur Information und Dokumentation (GID) and its cooperative partners are reviewed to illustrate program contents, a marketing strategy, and the application of gateway technology with mainframe and personal computers.…

  20. Evolutionary Development of the Simulation by Logical Modeling System (SIBYL)

    NASA Technical Reports Server (NTRS)

    Wu, Helen

    1995-01-01

    Through the evolutionary development of the Simulation by Logical Modeling System (SIBYL), we have re-engineered the expensive and complex IBM mainframe based Long-term Hardware Projection Model (LHPM) into a robust, cost-effective computer-based model that is easy to use. We achieved significant cost reductions and improved productivity in preparing long-term forecasts of Space Shuttle Main Engine (SSME) hardware. The LHPM for the SSME is a stochastic simulation model that projects the hardware requirements over 10 years. SIBYL is now the primary modeling tool for developing SSME logistics proposals and the Program Operating Plan (POP) for NASA and divisional marketing studies.

  1. Cost/Schedule Control Systems Criteria: A Reference Guide to C/SCSC information

    DTIC Science & Technology

    1992-09-01

    Smith, Larry A. "Mainframe ARTEMIS: More than a Project Management Tool -- Earned Value Analysis (PEVA)," Project Management Journal, 19:23-28 (April 1988). ... 17. Trufant, Thomas M. and Robert

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radtke, M.A.

    This paper will chronicle the activity at Wisconsin Public Service Corporation (WPSC) that resulted in the complete migration of a traditional, late 1970's vintage, Energy Management System (EMS). The new environment includes networked microcomputers, minicomputers, and the corporate mainframe, and provides on-line access to employees outside the energy control center and some WPSC customers. In the late 1980's, WPSC was forecasting an EMS computer upgrade or replacement to address both capacity and technology needs. Reasoning that access to diverse computing resources would best position the company to accommodate the uncertain needs of the energy industry in the 90's, WPSC chose to investigate an in-place migration to a network of computers, able to support heterogeneous hardware and operating systems. The system was developed in a modular fashion, with individual modules being deployed as soon as they were completed. The functional and technical specification was continuously enhanced as operating experience was gained from each operational module. With the migration off the original EMS computers complete, the networked system called DEMAXX (Distributed Energy Management Architecture with eXtensive eXpandability) has exceeded expectations in the areas of cost, performance, flexibility, and reliability.

  4. TOOLS FOR PRESENTING SPATIAL AND TEMPORAL PATTERNS OF ENVIRONMENTAL MONITORING DATA

    EPA Science Inventory

    The EPA Health Effects Research Laboratory has developed this data presentation tool for use with a variety of types of data which may contain spatial and temporal patterns of interest. The technology links mainframe computing power to the new generation of "desktop publishing" ha...

  5. Definitions of database files and fields of the Personal Computer-Based Water Data Sources Directory

    USGS Publications Warehouse

    Green, J. Wayne

    1991-01-01

    This report describes the data-base files and fields of the personal computer-based Water Data Sources Directory (WDSD). The personal computer-based WDSD was derived from the U.S. Geological Survey (USGS) mainframe computer version. The mainframe version of the WDSD is a hierarchical data-base design. The personal computer-based WDSD is a relational data-base design. This report describes the data-base files and fields of the relational data-base design in dBASE IV (the use of brand names in this abstract is for identification purposes only and does not constitute endorsement by the U.S. Geological Survey) for the personal computer. The WDSD contains information on (1) the type of organization, (2) the major orientation of water-data activities conducted by each organization, (3) the names, addresses, and telephone numbers of offices within each organization from which water data may be obtained, (4) the types of data held by each organization and the geographic locations within which these data have been collected, (5) alternative sources of an organization's data, (6) the designation of liaison personnel in matters related to water-data acquisition and indexing, (7) the volume of water data indexed for the organization, and (8) information about other types of data and services available from the organization that are pertinent to water-resources activities.

  6. Intelligent buildings.

    PubMed

    Williams, W E

    1987-01-01

    The maturing of technologies in computer capabilities, particularly direct digital signals, has provided an exciting variety of new communication and facility control opportunities. These include telecommunications, energy management systems, security systems, office automation systems, local area networks, and video conferencing. New applications are developing continuously. The so-called "intelligent" or "smart" building concept evolves from the development of this advanced technology in building environments. Automation has had a dramatic effect on facility planning. For decades, communications were limited to the telephone, the typewritten message, and copy machines. The office itself and its functions had been essentially unchanged for decades. Office automation systems began to surface during the energy crisis and, although their newer technology was timely, they were, for the most part, designed separately from other new building systems. For example, most mainframe computer systems were originally stand-alone, as were word processing installations. In the last five years, the advances in distributive systems, networking, and personal computer capabilities have provided opportunities to make such dramatic improvements in productivity that the Selectric typewriter has gone from being the most advanced piece of office equipment to nearly total obsolescence.

  7. System analysis in rotorcraft design: The past decade

    NASA Technical Reports Server (NTRS)

    Galloway, Thomas L.

    1988-01-01

    Rapid advances in the technology of electronic digital computers and the need for an integrated synthesis approach in developing future rotorcraft programs have led to increased emphasis on system analysis techniques in rotorcraft design. The task in systems analysis is to deal with complex, interdependent, and conflicting requirements in a structured manner so rational and objective decisions can be made. Whether the results are wisdom or rubbish depends upon the validity and, sometimes more importantly, the consistency of the inputs, the correctness of the analysis, and a sensible choice of measures of effectiveness to draw conclusions. In rotorcraft design this means combining design requirements, technology assessment, sensitivity analysis, and review techniques currently in use by NASA and Army organizations in developing research programs and vehicle specifications for rotorcraft. These procedures span simple graphical approaches to comprehensive analysis on large mainframe computers. Examples of recent applications to military and civil missions are highlighted.

  8. Workshop on Office Automation and Telecommunication: Applying the Technology.

    ERIC Educational Resources Information Center

    Mitchell, Bill

    This document contains 12 outlines that forecast the office of the future. The outlines cover the following topics: (1) office automation definition and objectives; (2) functional categories of office automation software packages for mini and mainframe computers; (3) office automation-related software for microcomputers; (4) office automation…

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The model is designed to enable decision makers to compare the economics of geothermal projects with the economics of alternative energy systems at an early stage in the decision process. The geothermal engineering and economic feasibility computer model (GEEF) is written in FORTRAN IV language and can be run on a mainframe or a mini-computer system. An abbreviated version of the model is being developed for usage in conjunction with a programmable desk calculator. The GEEF model has two main segments, namely (i) the engineering design/cost segment and (ii) the economic analysis segment. In the engineering segment, the model determines the numbers of production and injection wells, heat exchanger design, operating parameters for the system, requirement of supplementary system (to augment the working fluid temperature if the resource temperature is not sufficiently high), and the fluid flow rates. The model can handle single stage systems as well as two stage cascaded systems in which the second stage may involve a space heating application after a process heat application in the first stage.

  10. Rotary engine performance computer program (RCEMAP and RCEMAPPC): User's guide

    NASA Technical Reports Server (NTRS)

    Bartrand, Timothy A.; Willis, Edward A.

    1993-01-01

    This report is a user's guide for a computer code that simulates the performance of several rotary combustion engine configurations. It is intended to assist prospective users in getting started with RCEMAP and/or RCEMAPPC. RCEMAP (Rotary Combustion Engine performance MAP generating code) is the mainframe version, while RCEMAPPC is a simplified subset designed for the personal computer, or PC, environment. Both versions are based on an open, zero-dimensional combustion system model for the prediction of instantaneous pressures, temperature, chemical composition and other in-chamber thermodynamic properties. Both versions predict overall engine performance and thermal characteristics, including bmep, bsfc, exhaust gas temperature, average material temperatures, and turbocharger operating conditions. Required inputs include engine geometry, materials, constants for use in the combustion heat release model, and turbomachinery maps. Illustrative examples and sample input files for both versions are included.

  11. A data-management system for detailed areal interpretive data

    USGS Publications Warehouse

    Ferrigno, C.F.

    1986-01-01

    A data storage and retrieval system has been developed to organize and preserve areal interpretive data. This system can be used by any study where there is a need to store areal interpretive data that generally is presented in map form. This system provides the capability to grid areal interpretive data for input to groundwater flow models at any spacing and orientation. The data storage and retrieval system is designed to be used for studies that cover small areas such as counties. The system is built around a hierarchically structured data base consisting of related latitude-longitude blocks. The information in the data base can be stored at different levels of detail, with the finest detail being a block of 6 sec of latitude by 6 sec of longitude (approximately 0.01 sq mi). This system was implemented on a mainframe computer using a hierarchical data base management system. The computer programs are written in Fortran IV and PL/1. The design and capabilities of the data storage and retrieval system, and the computer programs that are used to implement the system are described. Supplemental sections contain the data dictionary, user documentation of the data-system software, changes that would need to be made to use this system for other studies, and information on the computer software tape. (Lantz-PTT)
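
    As an aside on the block structure described above, the following is a minimal, hypothetical sketch (in Python, not the study's Fortran IV or PL/1) of how a latitude-longitude point could be mapped to the identifier of its enclosing 6-second-by-6-second block; the function and variable names are illustrative assumptions only.

        # Hypothetical sketch: map a point to a 6-arc-second block index.
        # The system described above is hierarchical; only the finest
        # level of detail (6 sec by 6 sec) is illustrated here.
        BLOCK_SEC = 6  # block size in arc-seconds

        def block_key(lat_deg, lon_deg, block_sec=BLOCK_SEC):
            """Return (row, col) indices of the block containing the point."""
            lat_sec = lat_deg * 3600.0
            lon_sec = lon_deg * 3600.0
            row = int(lat_sec // block_sec)  # floor to the block boundary
            col = int(lon_sec // block_sec)
            return row, col

        # Two nearby points fall in the same 6-second block.
        print(block_key(38.9001, -77.0001))
        print(block_key(38.9005, -77.0005))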

  12. Developing Computer Software for Use in the Speech/Communications Classroom.

    ERIC Educational Resources Information Center

    Krauss, Beatrice J.

    Appropriate software can turn the microcomputer from a dumb box into a teaching tool. One resource for finding appropriate software is the organization Edunet. It allows the user to access the mainframes of 18 major universities and has developed a communications network with 130 colleges. It also handles billing, does periodic software…

  13. The Rise of the CISO

    ERIC Educational Resources Information Center

    Gale, Doug

    2007-01-01

    The late 1980s was an exciting time to be a CIO in higher education. Computing was being decentralized as microcomputers replaced mainframes, networking was emerging, and the National Science Foundation Network (NSFNET) was introducing the concept of an "internet" to hundreds of thousands of new users. Security wasn't much of an issue;…

  14. Microcomputers, Software and Foreign Languages for Special Purposes: An Analysis of TXTPRO.

    ERIC Educational Resources Information Center

    Tang, Michael S.

    TXTPRO, a computer program developed as a graduate-level research tool for descriptive linguistic analysis, produces simple alphabetic and word frequency lists, analyzes word combinations, and develops concordances. With modifications, a teacher could enter the program into a mainframe or a microcomputer and use it for text analyses to develop…

  15. An Analysis of Graduate Nursing Students' Innovation-Decision Process

    PubMed Central

    Kacynski, Kathryn A.; Roy, Katrina D.

    1984-01-01

    This study's purpose was to examine the innovation-decision process used by graduate nursing students when deciding to use computer applications. Graduate nursing students enrolled in a mandatory research class were surveyed before and after their use of a mainframe computer for beginning data analysis about their general attitudes towards computers, individual characteristics such as “cosmopoliteness”, and their desire to learn more about a computer application. It was expected that an experimental intervention, a videotaped demonstration of interactive video instruction of cardiopulmonary resuscitation (CPR); previous computer experience; and the subject's “cosmopoliteness” would influence attitudes towards computers and the desire to learn more about a computer application.

  16. CLARET user's manual: Mainframe Logs. Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frobose, R.H.

    1984-11-12

    CLARET (Computer Logging and RETrieval) is a stand-alone PDP 11/23 system that can support 16 terminals. It provides a forms-oriented front end by which operators enter online activity logs for the Lawrence Livermore National Laboratory's OCTOPUS computer network. The logs are stored on the PDP 11/23 disks for later retrieval, and hardcopy reports are generated both automatically and upon request. Online viewing of the current logs is provided to management. As each day's logs are completed, the information is automatically sent to a CRAY and included in an online database system. The terminal used for the CLARET system is a dual-port Hewlett Packard 2626 terminal that can be used as either the CLARET logging station or as an independent OCTOPUS terminal. Because this is a stand-alone system, it does not depend on the availability of the OCTOPUS network to run and, in the event of a power failure, can be brought up independently.

  17. [A survey of the best bibliographic searching system in occupational medicine and discussion of its implementation].

    PubMed

    Inoue, J

    1991-12-01

    When occupational health personnel, especially occupational physicians, search bibliographies, they usually have to do so by themselves. Also, if a library is not available because of the location of their workplace, they may have to rely on online databases. Although there are many commercial databases in the world, people who seldom use them will have problems with online searching, such as the user-computer interface, keywords, and so on. The present study surveyed the best bibliographic searching system in the field of occupational medicine by questionnaire, using DIALOG OnDisc MEDLINE as a commercial database. In order to ascertain the problems involved in determining the best bibliographic searching system, a prototype bibliographic searching system was constructed and then evaluated. Finally, solutions to the problems were discussed. These led to the following conclusions: to construct the best bibliographic searching system at the present time, 1) the concept of micro-to-mainframe links (MML) is needed for the computer hardware network; 2) multilingual font standards and an excellent common user-computer interface are needed for the computer software; and 3) a short course and education in database management systems, and support for personal information processing of retrieved data, are necessary for practical use of the system.

  18. The third level trigger and output event unit of the UA1 data-acquisition system

    NASA Astrophysics Data System (ADS)

    Cittolin, S.; Demoulin, M.; Fucci, A.; Haynes, W.; Martin, B.; Porte, J. P.; Sphicas, P.

    1989-12-01

    The upgraded UA1 experiment utilizes twelve 3081/E emulators for its third-level trigger system. The system is interfaced to VME, and is controlled by 68000 microprocessor VME boards on the input and output. The output controller communicates with an IBM 9375 mainframe via the CERN-IBM developed VICI interface. The events selected by the emulators are output on IBM-3480 cassettes. The user interface to this system is based on a series of Macintosh personal computers connected to the VME bus. These Macs are also used for developing software for the emulators and for monitoring the entire system. The same configuration has also been used for offline event reconstruction. A description of the system, together with details of both the online and offline modes of operation and an evaluation of its performance, is presented.

  19. The changing nature of spacecraft operations: From the Vikings of the 1970's to the great observatories of the 1990's and beyond

    NASA Technical Reports Server (NTRS)

    Ledbetter, Kenneth W.

    1992-01-01

    Four trends in spacecraft flight operations are discussed which will reduce overall program costs. These trends are the use of high-speed, highly reliable data communications systems for distributing operations functions to more convenient and cost-effective sites; the improved capability for remote operation of sensors; a continued rapid increase in memory and processing speed of flight qualified computer chips; and increasingly capable ground-based hardware and software systems, notably those augmented by artificial intelligence functions. Changes reflected by these trends are reviewed starting from the NASA Viking missions of the early 70s, when mission control was conducted at one location using expensive and cumbersome mainframe computers and communications equipment. In the 1980s, powerful desktop computers and modems enabled the Magellan project team to operate the spacecraft remotely. In the 1990s, the Hubble Space Telescope project uses multiple color screens and automated sequencing software on small computers. Given a projection of current capabilities, future control centers will be even more cost-effective.

  20. 48 CFR 2452.204-70 - Preservation of, and access to, contract records (tangible and electronically stored information...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... or data storage). ESI devices and media include, but are not limited to: (1) Computers (mainframe...) Personal data assistants (PDAs); (5) External data storage devices including portable devices (e.g., flash drive); and (6) Data storage media (magnetic, e.g., tape; optical, e.g., compact disc, microfilm, etc...

  1. 48 CFR 2452.204-70 - Preservation of, and access to, contract records (tangible and electronically stored information...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... or data storage). ESI devices and media include, but are not limited to: (1) Computers (mainframe...) Personal data assistants (PDAs); (5) External data storage devices including portable devices (e.g., flash drive); and (6) Data storage media (magnetic, e.g., tape; optical, e.g., compact disc, microfilm, etc...

  2. Developing a Telecommunications Curriculum for Students with Physical Disabilities.

    ERIC Educational Resources Information Center

    Gandell, Terry S.; Laufer, Dorothy

    1993-01-01

    A telecommunications curriculum was developed for students (ages 15-21) with physical disabilities. Curriculum content included an internal mailbox program (Mailbox), interactive communication system (Blisscom), bulletin board system (Arctel), and a mainframe system (Compuserv). (JDD)

  3. Proceedings of the NASTRAN (Tradename) Users’ Colloquium (18th) Held in Portland, Oregon on 23-27 April 1990

    DTIC Science & Technology

    1990-04-01

    … Maxwell (Texas A&M University) … "Accuracy of the QUAD4 Thick Shell Element" by William R. Case, Tiffany D. Bowles, and Alia K. Croft … "Computer Literacy: Mainframe Monsters and Pacman," Symposium on Advances and Trends in Structures and Dynamics, Washington, D.C., October 1984 … Wilson, E.L., and M. Holt, "CAL-80: Computer Assisted Learning of Structural Engineering," Symposium on Advances and Trends in …

  4. Video Information Communication and Retrieval/Image Based Information System (VICAR/IBIS)

    NASA Technical Reports Server (NTRS)

    Wherry, D. B.

    1981-01-01

    The acquisition, operation, and planning stages of installing a VICAR/IBIS system are described. The system operates in an IBM mainframe environment, and provides image processing of raster data. System support problems with software and documentation are discussed.

  5. Transferring ecosystem simulation codes to supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1995-01-01

    Many ecosystem simulation computer codes have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Supercomputing platforms (both parallel and distributed systems) have been largely unused, however, because of the perceived difficulty in accessing and using the machines. Also, significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers must be considered. We have transferred a grassland simulation model (developed on a VAX) to a Cray Y-MP/C90. We describe porting the model to the Cray and the changes we made to exploit the parallelism in the application and improve code execution. The Cray executed the model 30 times faster than the VAX and 10 times faster than a Unix workstation. We achieved an additional speedup of 30 percent by using the compiler's vectoring and 'in-line' capabilities. The code runs at only about 5 percent of the Cray's peak speed because it ineffectively uses the vector and parallel processing capabilities of the Cray. We expect that by restructuring the code, it could execute an additional six to ten times faster.

  6. Security in Full-Force

    NASA Technical Reports Server (NTRS)

    2002-01-01

    When fully developed for NASA, Vanguard Enforcer(TM) software-which emulates the activities of highly technical security system programmers, auditors, and administrators-was among the first intrusion detection programs to restrict human errors from affecting security, and to ensure the integrity of a computer's operating systems, as well as the protection of mission critical resources. Vanguard Enforcer was delivered in 1991 to Johnson Space Center and has been protecting systems and critical data there ever since. In August of 1999, NASA granted Vanguard exclusive rights to commercialize the Enforcer system for the private sector. In return, Vanguard continues to supply NASA with ongoing research, development, and support of Enforcer. The Vanguard Enforcer 4.2 is one of several surveillance technologies that make up the Vanguard Security Solutions line of products. Using a mainframe environment, Enforcer 4.2 achieves previously unattainable levels of automated security management.

  7. SIMOS feasibility report, task 4 : sign inventory management and ordering system

    DOT National Transportation Integrated Search

    1997-12-01

    The Sign Inventory Management and Ordering System (SIMOS) design merges existing, manually maintained information management systems with PennDOT's GIS and department-wide mainframe database to form a logical connection for enhanced sign...

  8. Interactive Forecasting with the National Weather Service River Forecast System

    NASA Technical Reports Server (NTRS)

    Smith, George F.; Page, Donna

    1993-01-01

    The National Weather Service River Forecast System (NWSRFS) consists of several major hydrometeorologic subcomponents to model the physics of the flow of water through the hydrologic cycle. The entire NWSRFS currently runs in both mainframe and minicomputer environments, using command oriented text input to control the system computations. As computationally powerful and graphically sophisticated scientific workstations became available, the National Weather Service (NWS) recognized that a graphically based, interactive environment would enhance the accuracy and timeliness of NWS river and flood forecasts. Consequently, the operational forecasting portion of the NWSRFS has been ported to run under a UNIX operating system, with X windows as the display environment on a system of networked scientific workstations. In addition, the NWSRFS Interactive Forecast Program was developed to provide a graphical user interface to allow the forecaster to control NWSRFS program flow and to make adjustments to forecasts as necessary. The potential market for water resources forecasting is immense and largely untapped. Any private company able to market the river forecasting technologies currently developed by the NWS Office of Hydrology could provide benefits to many information users and profit from providing these services.

  9. Reanalysis, compatibility and correlation in analysis of modified antenna structures

    NASA Technical Reports Server (NTRS)

    Levy, R.

    1989-01-01

    A simple computational procedure is synthesized to process changes in the microwave-antenna pathlength-error measure when there are changes in the antenna structure model. The procedure employs structural modification reanalysis methods combined with new extensions of correlation analysis to provide the revised rms pathlength error. Mainframe finite-element-method processing of the structure model is required only for the initial unmodified structure, and elementary postprocessor computations develop and deal with the effects of the changes. Several illustrative computational examples are included. The procedure adapts readily to processing spectra of changes for parameter studies or sensitivity analyses.

  10. Computer Supported Indexing: A History and Evaluation of NASA's MAI System

    NASA Technical Reports Server (NTRS)

    Silvester, June P.

    1997-01-01

    Computer supported or machine aided indexing (MAI) can be categorized in multiple ways. The system used by the National Aeronautics and Space Administration's (NASA's) Center for AeroSpace Information (CASI) is described as semantic and computational. It's based on the co-occurrence of domain-specific terminology in parts of a sentence, and the probability that an indexer will assign a particular index term when a given word or phrase is encountered in text. The NASA CASI system is run on demand by the indexer and responds in 3 to 9 seconds with a list of suggested, authorized terms. The system was originally based on a syntactic system used in the late 1970's by the Defense Technical Information Center (DTIC). The NASA mainframe-supported system consists of three components: two programs and a knowledge base (KB). The evolution of the system is described and flow charts illustrate the MAI procedures. Tests used to evaluate NASA's MAI system were limited to those that would not slow production. A very early test indicated that MAI saved about 3 minutes and provided several additional terms for each document indexed. It also was determined that time and other resources spent in careful construction of the KB pay off with high-quality output and indexer acceptance of MAI results.

  11. Measurement of Loneliness Among Clients Representing Four Stages of Cancer: An Exploratory Study.

    DTIC Science & Technology

    1985-03-01

    … status, and membership in organizations for each client were entered into an SPSS program on a mainframe computer. The means and a one-way analysis of …

  12. Environmental Gradient Analysis, Ordination, and Classification in Environmental Impact Assessments.

    DTIC Science & Technology

    1987-09-01

    … agglomerative clustering algorithms for mainframe computers: (1) the unweighted pair-group method that uses arithmetic averages (UPGMA), (2) the … hierarchical agglomerative unweighted pair-group method using arithmetic averages (UPGMA), which is also called average linkage clustering. This method was … dendrograms produced by weighted clustering (93). Sneath and Sokal (94), Romesburg (84), and Seber (90) also strongly recommend the UPGMA. A dendrogram …
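
    To make the method named in this excerpt concrete, the sketch below runs a generic UPGMA (average-linkage) agglomerative clustering on toy data using SciPy; it is not code from the report, and the sample matrix and parameters are invented for illustration.

        # Illustrative UPGMA (average-linkage) clustering on toy data.
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import pdist

        # Toy site-by-variable matrix: 4 samples, 3 environmental variables.
        X = np.array([[1.0, 2.0, 0.5],
                      [1.1, 2.1, 0.4],
                      [5.0, 0.2, 3.0],
                      [5.2, 0.1, 3.1]])

        # UPGMA merges, at each step, the two clusters whose average pairwise
        # inter-cluster distance is smallest ('average' linkage).
        Z = linkage(pdist(X, metric="euclidean"), method="average")
        print(Z)                                       # merge history (dendrogram table)
        print(fcluster(Z, t=2, criterion="maxclust"))  # cut into two clusters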

  13. Conversion and Retrievability of Hard Copy and Digital Documents on Optical Disks

    DTIC Science & Technology

    1992-03-01

    … B. Current Thesis Preparation Tools … 1. Thesis Preparation using G-Thesis … 2. Thesis Preparation using Framemaker … school mainframe. Computer Science department students can use a software package called Framemaker, available on Sun workstations in their … by most thesis typists and students. For this reason, the discussion of thesis preparation tools will be limited to G-Thesis, Framemaker and …

  14. The RANDOM computer program: A linear congruential random number generator

    NASA Technical Reports Server (NTRS)

    Miles, R. F., Jr.

    1986-01-01

    The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) The RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) The RANCYCLE and the ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
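
    Since the abstract leaves the generator form implicit, a minimal sketch follows (in Python rather than the report's Microsoft FORTRAN); the multiplier, increment, and modulus shown are common illustrative values, not the parameters selected in the RANDOM report.

        # Minimal linear congruential generator (LCG) sketch:
        #   x_{n+1} = (a * x_n + c) mod m
        # The constants below are illustrative only.
        def lcg(seed, a=1103515245, c=12345, m=2**31, n=5):
            """Yield n successive LCG states starting from seed."""
            x = seed
            for _ in range(n):
                x = (a * x + c) % m
                yield x

        # Print a few raw states and the corresponding uniform(0,1) variates.
        for x in lcg(seed=42):
            print(x, x / 2**31)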

  15. A PC-based multispectral scanner data evaluation workstation: Application to Daedalus scanners

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary J.; James, Mark W.; Smith, Matthew R.; Atkinson, Robert J.

    1991-01-01

    In late 1989, a personal computer (PC)-based data evaluation workstation was developed to support post flight processing of Multispectral Atmospheric Mapping Sensor (MAMS) data. The MAMS Quick View System (QVS) is an image analysis and display system designed to provide the capability to evaluate Daedalus scanner data immediately after an aircraft flight. Even in its original form, the QVS offered the portability of a personal computer with the advanced analysis and display features of a mainframe image analysis system. It was recognized, however, that the original QVS had its limitations, both in speed and processing of MAMS data. Recent efforts are presented that focus on overcoming earlier limitations and adapting the system to a new data tape structure. In doing so, the enhanced Quick View System (QVS2) will accommodate data from any of the four spectrometers used with the Daedalus scanner on the NASA ER2 platform. The QVS2 is designed around the AST 486/33 MHz CPU personal computer and comes with 10 EISA expansion slots, keyboard, and 4.0 mbytes of memory. Specialized PC-McIDAS software provides the main image analysis and display capability for the system. Image analysis and display of the digital scanner data is accomplished with PC-McIDAS software.

  16. Networked Instructional Chemistry: Using Technology To Teach Chemistry

    NASA Astrophysics Data System (ADS)

    Smith, Stanley; Stovall, Iris

    1996-10-01

    Networked multimedia microcomputers provide new ways to help students learn chemistry and to help instructors manage the learning environment. This technology is used to replace some traditional laboratory work, collect on-line experimental data, enhance lectures and quiz sections with multimedia presentations, provide prelaboratory training for beginning nonchemistry-major organic laboratory, provide electronic homework for organic chemistry students, give graduate students access to real NMR data for analysis, and provide access to molecular modeling tools. The integration of all of these activities into an active learning environment is made possible by a client-server network of hundreds of computers. This requires not only instructional software but also classroom and course management software, computers, networking, and room management. Combining computer-based work with traditional course material is made possible with software management tools that allow the instructor to monitor the progress of each student and make available an on-line gradebook so students can see their grades and class standing. This client-server based system extends the capabilities of the earlier mainframe-based PLATO system, which was used for instructional computing. This paper outlines the components of a technology center used to support over 5,000 students per semester.

  17. An overview of the NASA electronic components information management system

    NASA Technical Reports Server (NTRS)

    Kramer, G.; Waterbury, S.

    1991-01-01

    The NASA Parts Project Office (NPPO) comprehensive data system to support all NASA Electric, Electronic, and Electromechanical (EEE) parts management and technical data requirements is described. A phase delivery approach is adopted, comprising four principal phases. Phases 1 and 2 support Space Station Freedom (SSF) and use a centralized architecture with all data and processing kept on a mainframe computer. Phases 3 and 4 support all NASA centers and projects and implement a distributed system architecture, in which data and processing are shared among networked database servers. The Phase 1 system, which became operational in February of 1990, implements a core set of functions. Phase 2, scheduled for release in 1991, adds functions to the Phase 1 system. Phase 3, to be prototyped beginning in 1991 and delivered in 1992, introduces a distributed system, separate from the Phase 1 and 2 system, with a refined semantic data model. Phase 4 extends the data model and functionality of the Phase 3 system to provide support for the NASA design community, including integration with Computer Aided Design (CAD) environments. Phase 4 is scheduled for prototyping in 1992 to 93 and delivery in 1994.

  18. Desktop Computing Integration Project

    NASA Technical Reports Server (NTRS)

    Tureman, Robert L., Jr.

    1992-01-01

    The Desktop Computing Integration Project for the Human Resources Management Division (HRMD) of LaRC was designed to help division personnel use personal computing resources to perform job tasks. The three goals of the project were to involve HRMD personnel in desktop computing, link mainframe data to desktop capabilities, and to estimate training needs for the division. The project resulted in increased usage of personal computers by Awards specialists, an increased awareness of LaRC resources to help perform tasks, and personal computer output that was used in presentation of information to center personnel. In addition, the necessary skills for HRMD personal computer users were identified. The Awards Office was chosen for the project because of the consistency of their data requests and the desire of employees in that area to use the personal computer.

  19. Man-machine interfaces in LACIE/ERIPS

    NASA Technical Reports Server (NTRS)

    Duprey, B. B. (Principal Investigator)

    1979-01-01

    One of the most important aspects of the interactive portion of the LACIE/ERIPS software system is the way in which the analysis and decision-making capabilities of a human being are integrated with the speed and accuracy of a computer to produce a powerful analysis system. The three major man-machine interfaces in the system are (1) the use of menus for communications between the software and the interactive user; (2) the checkpoint/restart facility to recreate in one job the internal environment achieved in an earlier one; and (3) the error recovery capability for handling errors that would normally cause job termination. This interactive system, which executes on an IBM 360/75 mainframe, was adapted for use in noninteractive (batch) mode. A case study is presented to show how the interfaces work in practice by defining some fields based on an image screen display, noting the field definitions, and obtaining a film product of the classification map.

  20. ASTEC: Controls analysis for personal computers

    NASA Technical Reports Server (NTRS)

    Downing, John P.; Bauer, Frank H.; Thorpe, Christopher J.

    1989-01-01

    The ASTEC (Analysis and Simulation Tools for Engineering Controls) software is under development at Goddard Space Flight Center (GSFC). The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. The project is a follow-on to the INCA (INteractive Controls Analysis) program that has been developed at GSFC over the past five years. While ASTEC makes use of the algorithms and expertise developed for the INCA program, the user interface was redesigned to take advantage of the capabilities of the personal computer. The design philosophy and the current capabilities of the ASTEC software are described.

  1. An evaluation of superminicomputers for thermal analysis

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Vidal, J. B.; Jones, G. K.

    1982-01-01

    The use of superminicomputers for solving a series of increasingly complex thermal analysis problems is investigated. The approach involved (1) installation and verification of the SPAR thermal analyzer software on superminicomputers at Langley Research Center and Goddard Space Flight Center, (2) solution of six increasingly complex thermal problems on this equipment, and (3) comparison of solutions (accuracy, CPU time, turnaround time, and cost) with solutions on large mainframe computers.

  2. Health care informatics research implementation of the VA-DHCP Spanish version for Latin America.

    PubMed Central

    Samper, R.; Marin, C. J.; Ospina, J. A.; Varela, C. A.

    1992-01-01

    The VA DHCP hospital computer program represents an integral solution to the complex clinical and administrative functions of any hospital worldwide. Developed by the Veterans Administration, it has until recently run exclusively on mainframe platforms. Its recent implementation on PCs opens the opportunity for use in Latin America. A detailed description is given of the strategy for the Spanish-language, local implementation in Colombia. PMID:1482994

  3. Health care informatics research implementation of the VA-DHCP Spanish version for Latin America.

    PubMed

    Samper, R; Marin, C J; Ospina, J A; Varela, C A

    1992-01-01

    The VA DHCP hospital computer program represents an integral solution to the complex clinical and administrative functions of any hospital worldwide. Developed by the Veterans Administration, it has until recently run exclusively on mainframe platforms. Its recent implementation on PCs opens the opportunity for use in Latin America. A detailed description is given of the strategy for the Spanish-language, local implementation in Colombia.

  4. A DNA sequence analysis package for the IBM personal computer.

    PubMed Central

    Lagrimini, L M; Brentano, S T; Donelson, J E

    1984-01-01

    We present here a collection of DNA sequence analysis programs, called "PC Sequence" (PCS), which are designed to run on the IBM Personal Computer (PC). These programs are written in IBM PC compiled BASIC and take full advantage of the IBM PC's speed, error handling, and graphics capabilities. For a modest initial expense in hardware any laboratory can use these programs to quickly perform computer analysis on DNA sequences. They are written with the novice user in mind and require very little training or previous experience with computers. Also provided are a text editing program for creating and modifying DNA sequence files and a communications program which enables the PC to communicate with and collect information from mainframe computers and DNA sequence databases. PMID:6546433

  5. Web-Enabled Systems for Student Access.

    ERIC Educational Resources Information Center

    Harris, Chad S.; Herring, Tom

    1999-01-01

    California State University, Fullerton is developing a suite of server-based, Web-enabled applications that distribute the functionality of its student information system software to external customers without modifying the mainframe applications or databases. The cost-effective, secure, and rapidly deployable business solution involves using the…

  6. Functional requirements document for NASA/MSFC Earth Science and Applications Division: Data and information system (ESAD-DIS). Interoperability, 1992

    NASA Technical Reports Server (NTRS)

    Stephens, J. Briscoe; Grider, Gary W.

    1992-01-01

    These Earth Science and Applications Division-Data and Information System (ESAD-DIS) interoperability requirements are designed to quantify the Earth Science and Applications Division's hardware and software requirements in terms of communications among personal computers, visualization workstations, and mainframe computers. The electronic mail requirements and local area network (LAN) requirements are addressed. These interoperability requirements are top-level requirements framed around defining the existing ESAD-DIS interoperability and projecting known near-term requirements for both operational support and management planning. Detailed requirements will be submitted on a case-by-case basis. This document is also intended as an overview of ESAD-DIS interoperability for newcomers and management not familiar with these activities. It is intended as background documentation to support requests for resources and support requirements.

  7. Impact of workstations on criticality analyses at ABB combustion engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarko, L.B.; Freeman, R.S.; O'Donnell, P.F.

    1993-01-01

    During 1991, ABB Combustion Engineering (ABB C-E) made the transition from a CDC Cyber 990 mainframe for nuclear criticality safety analyses to Hewlett Packard (HP)/Apollo workstations. The primary motivations for this change were the improved economics of the workstation and maintaining state-of-the-art technology. The Cyber 990 utilized the NOS operating system with a 60-bit word size. The CPU memory size was limited to 131,100 words of directly addressable memory, with an extended 250,000 words available. The Apollo workstation environment at ABB consists of HP/Apollo-9000/400 series desktop units used by most application engineers, networked with HP/Apollo DN10000 platforms that use a 32-bit word size and function as the computer servers and network administrative CPUs, providing a virtual memory system.

  8. Arterial signal timing optimization using PASSER II-87

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, E.C.P.; Messer, C.J.; Garza, R.U.

    1988-11-01

    PASSER is the acronym for the Progression Analysis and Signal System Evaluation Routine. PASSER II was originally developed by the Texas Transportation Institute (TTI) for the Dallas Corridor Project. The Texas State Department of Highways and Public Transportation (SDHPT) has sponsored the subsequent program development on both mainframe computers and microcomputers. The theory, model structure, methodology, and logic of PASSER II have been evaluated and well documented. PASSER II is widely used because of its ability to easily select multiple-phase sequences by adjusting the background cycle length and progression speeds to find the optimal timing plans, such as cycle, green split, phase sequence, and offsets, that can efficiently maximize the two-way progression bands.

  9. Disciplines, models, and computers: the path to computational quantum chemistry.

    PubMed

    Lenhard, Johannes

    2014-12-01

    Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion and this market is much bigger than the number of quantum theory experts. These claims will be substantiated by an investigation of the so-called density functional theory (DFT), the arguably pivotal theory in the turn to computational quantum chemistry around 1990.

  10. Avoid Disaster: Use Firewalls for Inter-Intranet Security.

    ERIC Educational Resources Information Center

    Charnetski, J. R.

    1998-01-01

    Discusses the use of firewalls for library intranets, highlighting the move from mainframes to PCs, security issues and firewall architecture, and operating systems. Provides a glossary of basic networking terms and a bibliography of suggested reading. (PEN)

  11. Benchmarked analyses of gamma skyshine using MORSE-CGA-PC and the DABL69 cross-section set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reichert, P.T.; Golshani, M.

    1991-01-01

    Design for gamma-ray skyshine is a common consideration for a variety of nuclear and accelerator facilities. Many of these designs can benefit from a more accurate and complete treatment than can be provided by simple skyshine analysis tools. Those methods typically require a number of conservative, simplifying assumptions in modeling the radiation source and shielding geometry. This paper considers the benchmarking of one analytical option. The MORSE-CGA Monte Carlo radiation transport code system provides the capability for detailed treatment of virtually any source and shielding geometry. Unfortunately, the mainframe computer costs of MORSE-CGA analyses can prevent cost-effective application to small projects. For this reason, the MORSE-CGA system was converted to run on IBM personal computer (PC)-compatible computers using the Intel 80386 or 80486 microprocessors. The DLC-130/DABL69 cross-section set (46n,23g) was chosen as the most suitable, readily available, broad-group library. The most important reason is the relatively high (P5) Legendre order of expansion for angular distribution. This is likely to be beneficial in the deep-penetration conditions modeled in some skyshine problems.

  12. The challenge of a data storage hierarchy

    NASA Technical Reports Server (NTRS)

    Ruderman, Michael

    1992-01-01

    A discussion of Mesa Archival Systems' data archiving system is presented. This data archiving system is strictly a software system that is implemented on a mainframe and manages the data into permanent file storage. Emphasis is placed on the fact that any kind of client system on the network can be connected through the Unix interface of the data archiving system.

  13. TDRSS-user orbit determination using batch least-squares and sequential methods

    NASA Astrophysics Data System (ADS)

    Oza, D. H.; Jones, T. L.; Hakimi, M.; Samii, Mina V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.

    1993-02-01

    The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) commissioned Applied Technology Associates, Incorporated, to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system on a Disk Operating System (DOS)-based personal computer (PC) as a prototype system for sequential orbit determination of spacecraft. This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft, Landsat-4, obtained using RTOD/E, operating on a PC, with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the January 17-23, 1991, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances and the first measurement residuals for the sequential case) of solutions produced by the batch and sequential methods. The forward-filtered RTOD/E orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were less than 40 meters after the filter had reached steady state.
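
    The batch-versus-sequential distinction at the heart of this comparison can be illustrated with a deliberately tiny example (not the GTDS or RTOD/E algorithms): estimating a single constant from noisy measurements, once as a batch least-squares fit over the whole data span and once as a sequential filter that updates after every measurement.

        # Toy contrast between batch least-squares and sequential estimation.
        # This is only a structural illustration, not an orbit determination.
        import random

        random.seed(1)
        truth = 10.0
        meas = [truth + random.gauss(0.0, 0.5) for _ in range(50)]

        # Batch least-squares: process the entire data span at once.
        batch_estimate = sum(meas) / len(meas)

        # Sequential estimate: update the state after each new measurement.
        est, n = 0.0, 0
        for z in meas:
            n += 1
            est += (z - est) / n   # recursive mean, the simplest filter update

        print(f"batch: {batch_estimate:.4f}   sequential: {est:.4f}")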

  14. TDRSS-user orbit determination using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Hakimi, M.; Samii, Mina V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.

    1993-01-01

    The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) commissioned Applied Technology Associates, Incorporated, to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system on a Disk Operating System (DOS)-based personal computer (PC) as a prototype system for sequential orbit determination of spacecraft. This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft, Landsat-4, obtained using RTOD/E, operating on a PC, with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the January 17-23, 1991, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances and the first measurement residuals for the sequential case) of solutions produced by the batch and sequential methods. The forward-filtered RTOD/E orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were less than 40 meters after the filter had reached steady state.

  15. Multi-crop area estimation and mapping on a microprocessor/mainframe network

    NASA Technical Reports Server (NTRS)

    Sheffner, E.

    1985-01-01

    The data processing system is outlined for a 1985 test aimed at determining the performance characteristics of area estimation and mapping procedures connected with the California Cooperative Remote Sensing Project. The project is a joint effort of the USDA Statistical Reporting Service-Remote Sensing Branch, the California Department of Water Resources, NASA-Ames Research Center, and the University of California Remote Sensing Research Program. One objective of the program was to study performance when data processing is done on a microprocessor/mainframe network under operational conditions. The 1985 test covered the hardware, software, and network specifications and the integration of these three components. Plans for the year - including planned completion of PEDITOR software, testing of software on MIDAS, and accomplishment of data processing on the MIDAS-VAX-CRAY network - are discussed briefly.

  16. Comparison of ERBS orbit determination accuracy using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Fabien, S. M.; Mistretta, G. D.; Hart, R. C.; Doll, C. E.

    1991-01-01

    The Flight Dynamics Div. (FDD) at NASA-Goddard commissioned a study to develop the Real Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination of spacecraft on a DOS based personal computer (PC). An overview is presented of RTOD/E capabilities and the results are presented of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E on a PC with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. RTOD/E was used to perform sequential orbit determination for the Earth Radiation Budget Satellite (ERBS), and the Goddard Trajectory Determination System (GTDS) was used to perform the batch least squares orbit determination. The estimated ERBS ephemerides were obtained for the Aug. 16 to 22, 1989, timeframe, during which intensive TDRSS tracking data for ERBS were available. Independent assessments were made to examine the consistencies of results obtained by the batch and sequential methods. Comparisons were made between the forward filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for ERBS; the solution differences were less than 40 meters after the filter had reached steady state.

  17. Comparison of ERBS orbit determination accuracy using batch least-squares and sequential methods

    NASA Astrophysics Data System (ADS)

    Oza, D. H.; Jones, T. L.; Fabien, S. M.; Mistretta, G. D.; Hart, R. C.; Doll, C. E.

    1991-10-01

    The Flight Dynamics Div. (FDD) at NASA-Goddard commissioned a study to develop the Real Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination of spacecraft on a DOS based personal computer (PC). An overview is presented of RTOD/E capabilities and the results are presented of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E on a PC with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. RTOD/E was used to perform sequential orbit determination for the Earth Radiation Budget Satellite (ERBS), and the Goddard Trajectory Determination System (GTDS) was used to perform the batch least squares orbit determination. The estimated ERBS ephemerides were obtained for the Aug. 16 to 22, 1989, timeframe, during which intensive TDRSS tracking data for ERBS were available. Independent assessments were made to examine the consistencies of results obtained by the batch and sequential methods. Comparisons were made between the forward filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for ERBS; the solution differences were less than 40 meters after the filter had reached steady state.

  18. Market research for Idaho Transportation Department linear referencing system.

    DOT National Transportation Integrated Search

    2009-09-02

    For over 30 years, the Idaho Transportation Department (ITD) has had an LRS called MACS (MilePoint And Coded Segment), which is being implemented on a mainframe using a COBOL/CICS platform. As ITD began embracing newer technologies and moving tow...

  19. A SLAM II simulation model for analyzing space station mission processing requirements

    NASA Technical Reports Server (NTRS)

    Linton, D. G.

    1985-01-01

    Space station mission processing is modeled via the SLAM 2 simulation language on an IBM 4381 mainframe and an IBM PC microcomputer with 620K RAM, two double-sided disk drives and an 8087 coprocessor chip. Using a time phased mission (payload) schedule and parameters associated with the mission, orbiter (space shuttle) and ground facility databases, estimates for ground facility utilization are computed. Simulation output associated with the science and applications database is used to assess alternative mission schedules.

  20. Beyond Information Retrieval: Ways To Provide Content in Context.

    ERIC Educational Resources Information Center

    Wiley, Deborah Lynne

    1998-01-01

    Provides an overview of information retrieval from mainframe systems to Web search engines; discusses collaborative filtering, data extraction, data visualization, agent technology, pattern recognition, classification and clustering, and virtual communities. Argues that rather than huge data-storage centers and proprietary software, we need…

  1. Installing an Integrated System and a Fourth-Generation Language.

    ERIC Educational Resources Information Center

    Ridenour, David; Ferguson, Linda

    1987-01-01

    In the spring of 1986, Indiana State University converted to the Series Z software of Information Associates, an IBM mainframe, and Information Builders' FOCUS fourth-generation language. The process from the beginning of the planning stage through product selection, training, and implementation is described. (Author/MLW)

  2. What Lies Beyond the Online Catalog?

    ERIC Educational Resources Information Center

    Matthews, Joseph R.; And Others

    1985-01-01

    Five prominent consultants project technological advancements that, in some cases, will enhance current library systems, and in many cases will cause them to become obsolete. Major trends include advances in mainframe and microcomputing technology, development of inexpensive local area networks and telecommunications gateways, and the advent of…

  3. Kodak Optical Disk and Microfilm Technologies Carve Niches in Specific Applications.

    ERIC Educational Resources Information Center

    Gallenberger, John; Batterton, John

    1989-01-01

    Describes the Eastman Kodak Company's microfilm and optical disk technologies and their applications. Topics discussed include WORM technology; retrieval needs and cost effective archival storage needs; engineering applications; jukeboxes; optical storage options; systems for use with mainframes and microcomputers; and possible future…

  4. The SEL Adapts to Meet Changing Times

    NASA Technical Reports Server (NTRS)

    Pajerski, Rose S.; Basili, Victor R.

    1997-01-01

    Since 1976, the Software Engineering Laboratory (SEL) has been dedicated to understanding and improving the way in which one NASA organization, the Flight Dynamics Division (FDD) at Goddard Space Flight Center, develops, maintains, and manages complex flight dynamics systems. It has done this by developing and refining a continual process improvement approach that allows an organization such as the FDD to fine-tune its process for its particular domain. Experimental software engineering and measurement play a significant role in this approach. The SEL is a partnership of NASA Goddard, its major software contractor, Computer Sciences Corporation (CSC), and the University of Maryland's (UM) Department of Computer Science. The FDD primarily builds software systems that provide ground-based flight dynamics support for scientific satellites. They fall into two sets: ground systems and simulators. Ground systems are midsize systems that average around 250 thousand source lines of code (KSLOC). Ground system development projects typically last 1 - 2 years. Recent systems have been rehosted to workstations from IBM mainframes, and also contain significant new subsystems written in C and C++. The simulators are smaller systems averaging around 60 KSLOC that provide the test data for the ground systems. Simulator development lasts up to 1 year. Most of the simulators have been built in Ada on workstations. The SEL is responsible for the management and continual improvement of the software engineering processes used on these FDD projects.

  5. An implementation of the distributed programming structural synthesis system (PROSSS)

    NASA Technical Reports Server (NTRS)

    Rogers, J. L., Jr.

    1981-01-01

    A method is described for implementing a flexible software system that combines large, complex programs with small, user-supplied, problem-dependent programs and that distributes their execution between a mainframe and a minicomputer. The Programming Structural Synthesis System (PROSSS) was the specific software system considered. The results of such distributed implementation are flexibility of the optimization procedure organization and versatility of the formulation of constraints and design variables.

  6. Workstations take over conceptual design

    NASA Technical Reports Server (NTRS)

    Kidwell, George H.

    1987-01-01

    Workstations provide sufficient computing memory and speed for early evaluations of aircraft design alternatives to identify those worthy of further study. It is recommended that the programming of such machines permit integrated calculations of the configuration and performance analysis of new concepts, along with the capability of changing up to 100 variables at a time and swiftly viewing the results. Computations can be augmented through links to mainframes and supercomputers. Programming, particularly debugging operations, are enhanced by the capability of working with one program line at a time and having available on-screen error indices. Workstation networks permit on-line communication among users and with persons and computers outside the facility. Application of the capabilities is illustrated through a description of NASA-Ames design efforts for an oblique wing for a jet performed on a MicroVAX network.

  7. Assessment of the information content of patterns: an algorithm

    NASA Astrophysics Data System (ADS)

    Daemi, M. Farhang; Beurle, R. L.

    1991-12-01

    A preliminary investigation confirmed the possibility of assessing the translational and rotational information content of simple artificial images. The calculation is tedious, and for more realistic patterns it is essential to implement the method on a computer. This paper describes an algorithm developed for this purpose which confirms the results of the preliminary investigation. Use of the algorithm facilitates much more comprehensive analysis of the combined effect of continuous rotation and fine translation, and paves the way for analysis of more realistic patterns. Owing to the volume of calculation involved in these algorithms, extensive computing facilities were necessary. The major part of the work was carried out using an ICL 3900 series mainframe computer as well as other powerful workstations such as a RISC architecture MIPS machine.

  8. JANE, A new information retrieval system for the Radiation Shielding Information Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trubey, D.K.

    A new information storage and retrieval system has been developed for the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory to replace mainframe systems that have become obsolete. The database contains citations and abstracts of literature which were selected by RSIC analysts and indexed with terms from a controlled vocabulary. The database, begun in 1963, has been maintained continuously since that time. The new system, called JANE, incorporates automatic indexing techniques and on-line retrieval using the RSIC Data General Eclipse MV/4000 minicomputer. Automatic indexing and retrieval techniques based on fuzzy-set theory allow the presentation of results in order of Retrieval Status Value. The fuzzy-set membership function depends on term frequency in the titles and abstracts and on Term Discrimination Values which indicate the resolving power of the individual terms. These values are determined by the Cover Coefficient method. The use of a commercial database to store and retrieve the indexing information permits rapid retrieval of the stored documents. Comparisons of the new and presently-used systems for actual searches of the literature indicate that it is practical to replace the mainframe systems with a minicomputer system similar to the present version of JANE. 18 refs., 10 figs.
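
    The ranking idea in the abstract (a retrieval status value built from term frequencies weighted by term discrimination values) can be sketched with a deliberately simplified scoring function; the weights, documents, and clipping rule below are assumptions for illustration and do not reproduce the Cover Coefficient method used by JANE.

        # Hypothetical, simplified retrieval-status-value (RSV) ranking:
        # term frequency weighted by an assumed term-discrimination value,
        # clipped to [0, 1] as a crude fuzzy membership.
        def rsv(doc_tokens, query_terms, discrimination):
            score = 0.0
            for term in query_terms:
                tf = doc_tokens.count(term) / max(len(doc_tokens), 1)
                score += tf * discrimination.get(term, 0.0)
            return min(score, 1.0)

        docs = {
            "d1": "gamma ray shielding for reactor facilities".split(),
            "d2": "neutron transport code benchmark".split(),
        }
        weights = {"shielding": 0.9, "gamma": 0.7, "neutron": 0.6}  # assumed values
        query = ["gamma", "shielding"]

        ranked = sorted(docs, key=lambda d: rsv(docs[d], query, weights),
                        reverse=True)
        print(ranked)  # documents in decreasing retrieval status value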

  9. Software Testing and Verification in Climate Model Development

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.; Rood, RIchard B.

    2011-01-01

    Over the past 30 years most climate models have grown from relatively simple representations of a few atmospheric processes to a complex multi-disciplinary system. Computer infrastructure over that period has gone from punch card mainframes to modern parallel clusters. Model implementations have become complex, brittle, and increasingly difficult to extend and maintain. Existing verification processes for model implementations rely almost exclusively upon some combination of detailed analysis of output from full climate simulations and system-level regression tests. In addition to being quite costly in terms of developer time and computing resources, these testing methodologies are limited in terms of the types of defects that can be detected, isolated and diagnosed. Mitigating these weaknesses of coarse-grained testing with finer-grained "unit" tests has been perceived as cumbersome and counter-productive. In the commercial software sector, recent advances in tools and methodology have led to a renaissance for systematic fine-grained testing. We discuss the availability of analogous tools for scientific software and examine benefits that similar testing methodologies could bring to climate modeling software. We describe the unique challenges faced when testing complex numerical algorithms and suggest techniques to minimize and/or eliminate the difficulties.
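
    The fine-grained "unit" testing the authors advocate for numerical components can be illustrated with a minimal example; the kernel (a centered finite difference) and the tolerance below are hypothetical and are not drawn from any particular climate model.

    # Minimal sketch of a fine-grained unit test for a numerical kernel, of the
    # kind the paper argues should complement full-simulation regression tests.
    import unittest
    import numpy as np

    def centered_gradient(field, dx):
        """Second-order centered difference of a 1-D periodic field."""
        return (np.roll(field, -1) - np.roll(field, 1)) / (2.0 * dx)

    class TestCenteredGradient(unittest.TestCase):
        def test_sine_wave_derivative(self):
            n, L = 128, 2.0 * np.pi
            x = np.linspace(0.0, L, n, endpoint=False)
            dx = L / n
            got = centered_gradient(np.sin(x), dx)
            # d/dx sin(x) = cos(x); a second-order scheme should err by O(dx**2)
            self.assertLess(np.max(np.abs(got - np.cos(x))), dx**2)

    if __name__ == "__main__":
        unittest.main()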

  10. User's manual for the Macintosh version of PASCO

    NASA Technical Reports Server (NTRS)

    Lucas, S. H.; Davis, Randall C.

    1991-01-01

    A user's manual for Macintosh PASCO is presented. Macintosh PASCO is an Apple Macintosh version of PASCO, an existing computer code for structural analysis and optimization of longitudinally stiffened composite panels. PASCO combines a rigorous buckling analysis program with a nonlinear mathematical optimization routine to minimize panel mass. Macintosh PASCO accepts the same input as mainframe versions of PASCO. As output, Macintosh PASCO produces a text file and mode shape plots in the form of Apple Macintosh PICT files. Only the user interface for Macintosh is discussed here.

  11. Computing at h1 - Experience and Future

    NASA Astrophysics Data System (ADS)

    Eckerlin, G.; Gerhards, R.; Kleinwort, C.; Krüner-Marquis, U.; Egli, S.; Niebergall, F.

    The H1 experiment has now been successfully operating at the electron proton collider HERA at DESY for three years. During this time the computing environment has gradually shifted from a mainframe oriented environment to the distributed server/client Unix world. This transition is now almost complete. Computing needs are largely determined by the present amount of 1.5 TB of reconstructed data per year (1994), corresponding to 1.2 × 10^7 accepted events. All data are centrally available at DESY. In addition to data analysis, which is done in all collaborating institutes, most of the centrally organized Monte Carlo production is performed outside of DESY. New software tools to cope with offline computing needs include CENTIPEDE, a tool for the use of distributed batch and interactive resources for Monte Carlo production, and H1 UNIX, a software package for automatic updates of H1 software on all UNIX platforms.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Chao

    Sparx, a new environment for Cryo-EM image processing; Cryo-EM, Single particle reconstruction, principal component analysis; Hardware Req.: PC, MAC, Supercomputer, Mainframe, Multiplatform, Workstation. Software Req.: operating system is Unix; Compiler C++; type of files: source code, object library, executable modules, compilation instructions; sample problem input data. Location/transmission: http://sparx-em.org; User manual & paper: http://sparx-em.org;

  13. Some Problems and Solutions in Transferring Ecosystem Simulation Codes to Supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1994-01-01

    Many computer codes for the simulation of ecological systems have been developed in the last twenty-five years. This development took place initially on mainframe computers, then minicomputers, and more recently, microcomputers and workstations. Recent recognition of ecosystem science as a High Performance Computing and Communications Program Grand Challenge area emphasizes supercomputers (both parallel and distributed systems) as the next set of tools for ecological simulation. Transferring ecosystem simulation codes to such systems is not a matter of simply compiling and executing existing code on the supercomputer since there are significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers. To more appropriately match the application to the architecture (necessary to achieve reasonable performance), the parallelism (if it exists) of the original application must be exploited. We discuss our work in transferring a general grassland simulation model (developed on a VAX in the FORTRAN computer programming language) to a Cray Y-MP. We describe the Cray's shared-memory vector architecture and discuss our rationale for selecting the Cray. We describe porting the model to the Cray and executing and verifying a baseline version, and we discuss the changes we made to exploit the parallelism in the application and to improve code execution. As a result, the Cray executed the model 30 times faster than the VAX 11/785 and 10 times faster than a Sun 4 workstation. We achieved an additional speed-up of approximately 30 percent over the original Cray run by using the compiler's vectorizing capabilities and the machine's ability to put subroutines and functions "in-line" in the code. With the modifications, the code still runs at only about 5% of the Cray's peak speed because it makes ineffective use of the vector processing capabilities of the Cray. We conclude with a discussion and future plans.
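
    The restructuring needed to exploit vector hardware can be illustrated, by analogy, with array-at-a-time code; the sketch below uses NumPy arrays as a stand-in for the Cray's vector processing, and the "growth" expression is a made-up placeholder rather than the grassland model's equations.

    # Illustrative only: the kind of loop restructuring that lets a vector machine
    # (or NumPy, here) process whole arrays at once. The growth expression is a
    # hypothetical placeholder, not the actual grassland model.
    import numpy as np

    def update_scalar(biomass, rate, capacity):
        out = np.empty_like(biomass)
        for i in range(biomass.size):            # scalar code: one cell at a time
            out[i] = biomass[i] + rate[i] * biomass[i] * (1.0 - biomass[i] / capacity[i])
        return out

    def update_vector(biomass, rate, capacity):
        # the same arithmetic expressed over whole arrays, amenable to vector hardware
        return biomass + rate * biomass * (1.0 - biomass / capacity)

    rng = np.random.default_rng(0)
    b, r, k = rng.random(10_000), rng.random(10_000) * 0.1, rng.random(10_000) + 1.0
    assert np.allclose(update_scalar(b, r, k), update_vector(b, r, k))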

  14. Evaluation of Landsat-4 orbit determination accuracy using batch least-squares and sequential methods

    NASA Astrophysics Data System (ADS)

    Oza, D. H.; Jones, T. L.; Feiertag, R.; Samii, M. V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.

    The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) commissioned Applied Technology Associates, Incorporated, to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system on a Disk Operating System (DOS)-based personal computer (PC) as a prototype system for sequential orbit determination of spacecraft. This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite (TDRS) System (TDRSS) user spacecraft, Landsat-4, obtained using RTOD/E, operating on a PC, with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the May 18-24, 1992, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. During this period, there were two separate orbit-adjust maneuvers on one of the TDRSS spacecraft (TDRS-East) and one small orbit-adjust maneuver for Landsat-4. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances and the first measurement residuals for the sequential case) of solutions produced by the batch and sequential methods. The forward-filtered RTOD/E orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were generally less than 30 meters after the filter had reached steady state.

  15. Evaluation of Landsat-4 orbit determination accuracy using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Feiertag, R.; Samii, M. V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.

    1993-01-01

    The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) commissioned Applied Technology Associates, Incorporated, to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system on a Disk Operating System (DOS)-based personal computer (PC) as a prototype system for sequential orbit determination of spacecraft. This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite (TDRS) System (TDRSS) user spacecraft, Landsat-4, obtained using RTOD/E, operating on a PC, with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the May 18-24, 1992, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. During this period, there were two separate orbit-adjust maneuvers on one of the TDRSS spacecraft (TDRS-East) and one small orbit-adjust maneuver for Landsat-4. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances and the first measurement residuals for the sequential case) of solutions produced by the batch and sequential methods. The forward-filtered RTOD/E orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were generally less than 30 meters after the filter had reached steady state.
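
    The batch-versus-sequential comparison in the two records above can be illustrated with a toy one-dimensional example; the state, noise level, and prior below are invented for illustration and have nothing to do with the actual GTDS or RTOD/E implementations.

    # Toy 1-D illustration (not GTDS or RTOD/E): estimate a constant state from
    # noisy measurements by (a) batch least squares over the whole arc and
    # (b) a sequential (Kalman-style) filter processing one measurement at a time.
    import numpy as np

    rng = np.random.default_rng(1)
    truth, sigma, n = 7000.0, 0.03, 200            # hypothetical values
    z = truth + sigma * rng.standard_normal(n)     # simulated measurements

    # (a) batch least squares: for a constant state this reduces to the mean
    x_batch = z.mean()

    # (b) sequential filter with static dynamics and a diffuse prior
    x_hat, P = 0.0, 1e6
    for zk in z:
        K = P / (P + sigma**2)                     # gain
        x_hat = x_hat + K * (zk - x_hat)           # measurement update
        P = (1.0 - K) * P
    print(abs(x_batch - x_hat))                    # the two estimates agree closely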

  16. Design and implementation of scalable tape archiver

    NASA Technical Reports Server (NTRS)

    Nemoto, Toshihiro; Kitsuregawa, Masaru; Takagi, Mikio

    1996-01-01

    In order to reduce costs, computer manufacturers try to use commodity parts as much as possible. Mainframes using proprietary processors are being replaced by high performance RISC microprocessor-based workstations, which are further being replaced by the commodity microprocessor used in personal computers. Highly reliable disks for mainframes are also being replaced by disk arrays, which are complexes of disk drives. In this paper we try to clarify the feasibility of a large scale tertiary storage system composed of 8-mm tape archivers utilizing robotics. In the near future, the 8-mm tape archiver will be widely used and become a commodity part, since recent rapid growth of multimedia applications requires much larger storage than disk drives can provide. We designed a scalable tape archiver which connects as many 8-mm tape archivers (element archivers) as possible. In the scalable archiver, robotics can exchange a cassette tape between two adjacent element archivers mechanically. Thus, we can build a large scalable archiver inexpensively. In addition, a sophisticated migration mechanism distributes frequently accessed tapes (hot tapes) evenly among all of the element archivers, which improves the throughput considerably. Even with the failures of some tape drives, the system dynamically redistributes hot tapes to the other element archivers which have live tape drives. Several kinds of specially tailored huge archivers are on the market; however, the 8-mm tape scalable archiver could replace them. To maintain high performance in spite of high access locality when a large number of archivers are attached to the scalable archiver, it is necessary to scatter frequently accessed cassettes among the element archivers and to use the tape drives efficiently. For this purpose, we introduce two cassette migration algorithms, foreground migration and background migration. Background migration transfers cassettes between element archivers to redistribute frequently accessed cassettes, thus balancing the load of each archiver. Background migration occurs when the robotics are idle. Both migration algorithms are based on the access frequency and space utilization of each element archiver. Because these parameters are normalized by the number of drives in each element archiver, it is possible to maintain high performance even if some tape drives fail. We found that foreground migration is efficient at reducing access response time. Besides foreground migration, background migration makes it possible to track the transition of spatial access locality quickly.
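
    A background-migration decision of the kind described could look roughly like the following sketch; the load metric (accesses per drive plus a space-utilization term) and the imbalance threshold are assumptions, not the authors' algorithm.

    # Rough sketch of a background-migration decision (not the authors' code):
    # when the robots are idle, move a frequently accessed ("hot") cassette from
    # the most loaded element archiver to the least loaded one.
    def load(archiver):
        return (archiver["accesses"] / max(archiver["drives"], 1)
                + archiver["used_slots"] / archiver["total_slots"])

    def pick_background_migration(archivers, robots_idle):
        if not robots_idle or len(archivers) < 2:
            return None
        ranked = sorted(archivers, key=load)
        coldest, hottest = ranked[0], ranked[-1]
        if load(hottest) - load(coldest) < 0.5:        # assumed imbalance threshold
            return None
        cassette = max(hottest["cassettes"], key=lambda c: c["accesses"])
        return (cassette["id"], hottest["id"], coldest["id"])

    archivers = [
        {"id": "A", "drives": 4, "accesses": 120, "used_slots": 90, "total_slots": 100,
         "cassettes": [{"id": "t1", "accesses": 60}, {"id": "t2", "accesses": 5}]},
        {"id": "B", "drives": 4, "accesses": 10, "used_slots": 20, "total_slots": 100,
         "cassettes": [{"id": "t3", "accesses": 3}]},
    ]
    print(pick_background_migration(archivers, robots_idle=True))   # ('t1', 'A', 'B')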

  17. Composite Cores

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Spang & Company's new configuration of converter transformer cores is a composite of gapped and ungapped cores assembled together in concentric relationship. The net effect of the composite design is to combine the protection from saturation offered by the gapped core with the lower magnetizing requirement of the ungapped core. The uncut core functions under normal operating conditions and the cut core takes over during abnormal operation to prevent power surges and their potentially destructive effect on transistors. Principal customers are aerospace and defense manufacturers. Cores also have applicability in commercial products where precise power regulation is required, as in the power supplies for large mainframe computers.

  18. Verification of a national water data base using a geographic information system

    USGS Publications Warehouse

    Harrison, H.E.

    1994-01-01

    The National Water Data Exchange (NAWDEX) was developed to assist users of water-resource data in the identification, location, and acquisition of data. The Master Water Data Index (MWDI) of NAWDEX currently indexes the data collected by 423 organizations from nearly 500,000 sites throughout the United States. The utilization of new computer technologies permits the distribution of the MWDI to the public on compact disc. In addition, geographic information systems (GIS) are now available that can store and analyze these data in a spatial format. These recent innovations could increase access and add new capabilities to the MWDI. Before either of these technologies could be employed, however, a quality-assurance check of the MWDI needed to be performed. The MWDI resides on a mainframe computer in a tabular format. It was copied onto a workstation and converted to a GIS format. The GIS was used to identify errors in the MWDI and produce reports that summarized these errors. The summary reports were sent to the responsible contributing agencies along with instructions for submitting their corrections to the NAWDEX Program Office. The MWDI administrator received reports that summarized all of the errors identified. Of the 494,997 sites checked, 93,440 sites had at least one error (18.9 percent error rate).

  19. Jackson State University's Center for Spatial Data Research and Applications: New facilities and new paradigms

    NASA Technical Reports Server (NTRS)

    Davis, Bruce E.; Elliot, Gregory

    1989-01-01

    Jackson State University recently established the Center for Spatial Data Research and Applications, a Geographical Information System (GIS) and remote sensing laboratory. Taking advantage of new technologies and new directions in the spatial (geographic) sciences, JSU is building a Center of Excellence in Spatial Data Management. New opportunities for research, applications, and employment are emerging. GIS requires fundamental shifts and new demands in traditional computer science and geographic training. The Center is not merely another computer lab but is one setting the pace in a new applied frontier. GIS and its associated technologies are discussed. The Center's facilities are described. An ARC/INFO GIS runs on a VAX mainframe, with numerous workstations. Image processing packages include ELAS, LIPS, VICAR, and ERDAS. A host of hardware and software peripherals is used in support. Numerous projects are underway, such as the construction of a Gulf of Mexico environmental data base, development of AI in image processing, a land use dynamics study of metropolitan Jackson, and others. A new academic interdisciplinary program in Spatial Data Management is under development, combining courses in Geography and Computer Science. The broad range of JSU's GIS and remote sensing activities is addressed. The impacts on changing paradigms in the university and in the professional world conclude the discussion.

  20. METRRA Signature - Radar Cross Section Measurements. Final Report/ Instruction Manual

    DTIC Science & Technology

    1978-12-01

    The available DTIC excerpt is limited to table-of-contents and specification fragments: condensed system parameters for the transmitter and receiver, descriptions of the transmitter, receiver, and antennas, a transmitter mainframe identified as an Applied Microwave Laboratory model, and Tchebychev filter designs built for Cubic Defense by Addington Laboratories to provide the steepest skirts for given numbers of reactive components.

  1. Lunar laser ranging data processing in a Unix/X windows environment

    NASA Technical Reports Server (NTRS)

    Ricklefs, Randall L.; Ries, Judit G.

    1993-01-01

    In cooperation with the NASA Crustal Dynamics Project initiative placing workstation computers at each of its laser ranging stations to handle data filtering and normal pointing, MLRS personnel have developed a new generation of software to provide the same services for the lunar laser ranging data type. The Unix operating system and X windows/Motif provide an environment for both batch and interactive filtering and normal pointing as well as prediction calculations. The goal is to provide a transportable and maintainable data reduction environment. This software and some sample displays are presented. In the original on-site configuration, the lunar (or satellite) data could be processed on one computer while data was taken on the other. The reduction of the data was totally interactive and in no way automated. In addition, lunar predictions were produced on-site, another first in the effort to down-size historically mainframe-based applications. Extraction of earth rotation parameters was at one time attempted on site in near-realtime. In 1988, the Crustal Dynamics Project SLR Computer Panel mandated the installation of Hewlett-Packard 9000/360 Unix workstations at each NASA-operated laser ranging station to relieve the aging controller computers of much of their data and communications handling responsibility and to provide on-site data filtering and normal pointing for a growing list of artificial satellite targets. This was seen by MLRS staff as an opportunity to provide a better lunar data processing environment as well.

  2. Lunar laser ranging data processing in a Unix/X windows environment

    NASA Astrophysics Data System (ADS)

    Ricklefs, Randall L.; Ries, Judit G.

    1993-06-01

    In cooperation with the NASA Crustal Dynamics Project initiative placing workstation computers at each of its laser ranging stations to handle data filtering and normal pointing, MLRS personnel have developed a new generation of software to provide the same services for the lunar laser ranging data type. The Unix operating system and X windows/Motif provide an environment for both batch and interactive filtering and normal pointing as well as prediction calculations. The goal is to provide a transportable and maintainable data reduction environment. This software and some sample displays are presented. In the original on-site configuration, the lunar (or satellite) data could be processed on one computer while data was taken on the other. The reduction of the data was totally interactive and in no way automated. In addition, lunar predictions were produced on-site, another first in the effort to down-size historically mainframe-based applications. Extraction of earth rotation parameters was at one time attempted on site in near-realtime. In 1988, the Crustal Dynamics Project SLR Computer Panel mandated the installation of Hewlett-Packard 9000/360 Unix workstations at each NASA-operated laser ranging station to relieve the aging controller computers of much of their data and communications handling responsibility and to provide on-site data filtering and normal pointing for a growing list of artificial satellite targets. This was seen by MLRS staff as an opportunity to provide a better lunar data processing environment as well.

  3. Principles and techniques in the design of ADMS+. [advanced data-base management system

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nick; Kang, Hyunchul

    1986-01-01

    'ADMS+/-' is an advanced data base management system whose architecture integrates the ADMS+ mainframe data base system with a large number of workstation data base systems, designated ADMS-; no communications exist between these workstations. The use of this system radically decreases the response time of locally processed queries, since the workstation runs in a single-user mode, and no dynamic security checking is required for the downloaded portion of the data base. The deferred update strategy used reduces overhead due to update synchronization in message traffic.

  4. Users Guide to the JPL Doppler Gravity Database

    NASA Technical Reports Server (NTRS)

    Muller, P. M.; Sjogren, W. L.

    1986-01-01

    Local gravity accelerations and gravimetry have been determined directly from spacecraft Doppler tracking data near the Moon and various planets by the Jet Propulsion Laboratory. Researchers in many fields have an interest in planet-wide global gravimetric mapping and its applications. Many of them use their own computers in support of their studies and would benefit from being able to directly manipulate these gravity data for inclusion in their own modeling computations. Publication of some 150 Apollo 15 subsatellite low-altitude, high-resolution, single-orbit data sets is covered. The Doppler residuals with a determination of the derivative function providing line-of-sight gravity are both listed and plotted (on microfilm), and can be ordered in computer readable forms (tape and floppy disk). The form and format of this database as well as the methods of data reduction are explained and referenced. A skeleton computer program is provided which can be modified to support re-reductions and re-formatted presentations suitable to a wide variety of research needs undertaken on mainframe or PC class microcomputers.
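
    A skeleton re-reduction step in the spirit of the one described (numerically differentiating the Doppler residual, i.e., line-of-sight velocity, to obtain line-of-sight gravity) is sketched below; the synthetic residual series and sampling interval are illustrative and are not taken from the Apollo 15 subsatellite files.

    # Hedged sketch of a re-reduction step: differentiate a Doppler residual
    # (line-of-sight velocity) time series to get line-of-sight acceleration,
    # i.e., a gravity signal. The data values are synthetic, not JPL's.
    import numpy as np

    def line_of_sight_gravity(t, v_los):
        """Central-difference derivative of line-of-sight velocity (m/s) vs time (s)."""
        return np.gradient(v_los, t)               # m/s^2 (1 mGal = 1e-5 m/s^2)

    t = np.arange(0.0, 600.0, 10.0)                        # 10-second samples
    v_los = 0.02 * np.sin(2 * np.pi * t / 300.0)           # synthetic residual, m/s
    g_los = line_of_sight_gravity(t, v_los)
    print(g_los.max() * 1e5, "mGal peak")                  # roughly 42 mGal for this synthetic case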

  5. Business Process Reengineering With Knowledge Value Added in Support of the Department of the Navy Chief Information Officer

    DTIC Science & Technology

    2003-09-01

    The available DTIC excerpt contains only front-matter and abstract fragments: acronym definitions (ABC, Activity Based Costing; ADO, ActiveX Data Object; ASP, Application Server Page; BPR, Business Process Reengineering) and partial sentences noting that business processes use people and systems (hardware, software, machinery, etc.) which contain the "corporate" knowledge, that the client/server architecture was also a high maintenance item, and that data was no longer contained on one mainframe but was distributed throughout the enterprise.

  6. Navy Enterprise Resource Planning Program: Governance Challenges in Deploying an Enterprise-Wide Information Technology System in the Department of the Navy

    DTIC Science & Technology

    2010-12-01

    The available DTIC excerpt contains only front-matter and abstract fragments: acronym definitions (WCF, Working Capital Fund; Y2K, Year 2000), acknowledgment text, and partial sentences noting a move away from mainframe systems, that the year 2000 (Y2K) dilemma ushered in unprecedented growth in the development of ERP software and IT systems of the 1990s, and that the possibility of non-Y2K-compliant legacy systems failing at the turn of the century resulted in further (text cut off in the excerpt).

  7. Development of the functional simulator for the Galileo attitude and articulation control system

    NASA Technical Reports Server (NTRS)

    Namiri, M. K.

    1983-01-01

    A simulation program for verifying and checking the performance of the Galileo Spacecraft's Attitude and Articulation Control Subsystem's (AACS) flight software is discussed. The program, which is called Functional Simulator (FUNSIM), provides a simple method of interfacing user-supplied mathematical models, coded in FORTRAN, which describe spacecraft dynamics, sensors, and actuators, with the AACS flight software, coded in HAL/S (High-level Advanced Language/Shuttle). It is thus able to simulate the AACS flight software accurately to the HAL/S statement level in the environment of a mainframe computer system. FUNSIM also has a command and data subsystem (CDS) simulator. It is noted that the input/output data and timing are simulated with the same precision as the flight microprocessor. FUNSIM uses a variable stepsize numerical integration algorithm complete with individual error bound control on the state variables to solve the equations of motion. The program has been designed to provide both line printer and matrix dot plotting of the variables requested in the run section and to provide error diagnostics.
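
    Variable-stepsize integration with a per-state error bound, of the kind FUNSIM uses, can be illustrated with a generic step-doubling controller; this is not FUNSIM's algorithm, and the test dynamics and tolerances below are invented.

    # Generic sketch (not FUNSIM itself): take one full Euler step and two half
    # steps, compare them, and shrink or grow the step so that each state
    # variable stays within its own error bound.
    import numpy as np

    def adaptive_euler(f, y0, t0, t_end, tol, h=0.1):
        t, y = t0, np.asarray(y0, dtype=float)
        while t < t_end:
            h = min(h, t_end - t)
            y_full = y + h * f(t, y)
            y_half = y + 0.5 * h * f(t, y)
            y_two = y_half + 0.5 * h * f(t + 0.5 * h, y_half)
            err = np.abs(y_two - y_full)             # per-state error estimate
            if np.all(err <= tol):
                t, y = t + h, y_two
                h *= 1.5                             # accept the step and grow
            else:
                h *= 0.5                             # reject the step and shrink
        return y

    # damped oscillator: y = [position, velocity]
    f = lambda t, y: np.array([y[1], -y[0] - 0.1 * y[1]])
    print(adaptive_euler(f, [1.0, 0.0], 0.0, 10.0, tol=np.array([1e-4, 1e-4])))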

  8. s95-16439

    NASA Image and Video Library

    2014-08-14

    S95-16439 (13-22 July 1995) --- An overall view from the rear shows activity in the new Mission Control Center (MCC), opened for operation and dedicated during the STS-70 mission. The new MCC, developed at a cost of about 50 million, replaces the mainframe-based, NASA-unique design of the old Mission Control with a standard workstation-based, local area network system commonly in use today.

  9. A comparison of TSS and TRASYS in form factor calculation

    NASA Technical Reports Server (NTRS)

    Golliher, Eric

    1993-01-01

    As the workstation and personal computer become more popular than a centralized mainframe to perform thermal analysis, the methods for space vehicle thermal analysis will change. Already, many thermal analysis codes are now available for workstations, which were not in existence just five years ago. As these changes occur, some organizations will adopt the new codes and analysis techniques, while others will not. This might lead to misunderstandings between thermal shops in different organizations. If thermal analysts make an effort to understand the major differences between the new and old methods, a smoother transition to a more efficient and more versatile thermal analysis environment will be realized.

  10. The Spatial and Temporal Variability of the Arctic Atmospheric Boundary Layer and Its Effect on Electromagnetic (EM) Propagation.

    DTIC Science & Technology

    1987-12-01

    The available DTIC excerpt contains only fragments: acknowledgments noting that computations could be run on the IBM 3033 mainframe at the Naval Postgraduate School and thanking Lt. Mike Dotson for his thesis, a note that profiles at 29 pressure levels were produced with the interactive graphing package Grafstat on the same IBM 3033 mainframe, and a garbled figure caption mentioning the research vessel Polarstern; the remainder of the figure text is unrecoverable.

  11. Optical systems integrated modeling

    NASA Technical Reports Server (NTRS)

    Shannon, Robert R.; Laskin, Robert A.; Brewer, SI; Burrows, Chris; Epps, Harlan; Illingworth, Garth; Korsch, Dietrich; Levine, B. Martin; Mahajan, Vini; Rimmer, Chuck

    1992-01-01

    An integrated modeling capability that provides the tools by which entire optical systems and instruments can be simulated and optimized is a key technology development, applicable to all mission classes, especially astrophysics. Many of the future missions require optical systems that are physically much larger than anything flown before and yet must retain the characteristic sub-micron diffraction limited wavefront accuracy of their smaller precursors. It is no longer feasible to follow the path of 'cut and test' development; the sheer scale of these systems precludes many of the older techniques that rely upon ground evaluation of full size engineering units. The ability to accurately model (by computer) and optimize the entire flight system's integrated structural, thermal, and dynamic characteristics is essential. Two distinct integrated modeling capabilities are required. These are an initial design capability and a detailed design and optimization system. The content of an initial design package is shown. It would be a modular, workstation based code which allows preliminary integrated system analysis and trade studies to be carried out quickly by a single engineer or a small design team. A simple concept for a detailed design and optimization system is shown. This is a linkage of interface architecture that allows efficient interchange of information between existing large specialized optical, control, thermal, and structural design codes. The computing environment would be a network of large mainframe machines and its users would be project level design teams. More advanced concepts for detailed design systems would support interaction between modules and automated optimization of the entire system. Technology assessment and development plans for integrated package for initial design, interface development for detailed optimization, validation, and modeling research are presented.

  12. Integration of communications with the Intelligent Gateway Processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hampel, V.E.

    1986-01-01

    The Intelligent Gateway Processor (IGP) software is being used to interconnect users equipped with different personal computers and ASCII terminals to mainframe machines of different make. This integration is made possible by the IGP's unique user interface and networking software. Prototype systems of the table-driven, interpreter-based IGP have been adapted to very different programmatic requirements and have demonstrated substantial increases in end-user productivity. Procedures previously requiring days can now be carried out in minutes. The IGP software has been under development by the Technology Information Systems (TIS) program at Lawrence Livermore National Laboratory (LLNL) since 1975 and has been in use by several federal agencies since 1983: The Air Force is prototyping applications which range from automated identification of spare parts for aircraft to office automation and the controlled storage and distribution of technical orders and engineering drawings. Other applications of the IGP are the Information Management System (IMS) for aviation statistics in the Federal Aviation Administration (FAA), the Nuclear Criticality Information System (NCIS) and a nationwide Cost Estimating System (CES) in the Department of Energy, the library automation network of the Defense Technical Information Center (DTIC), and the modernization program in the Office of the Secretary of Defense (OSD). 31 refs., 9 figs.

  13. Implementation of Parallel Computing Technology to Vortex Flow

    NASA Technical Reports Server (NTRS)

    Dacles-Mariani, Jennifer

    1999-01-01

    Mainframe supercomputers such as the Cray C90 were invaluable in obtaining large scale computations using several million grid points to resolve salient features of a tip vortex flow over a lifting wing. However, real flight configurations require tracking not only the flow over several lifting wings but also its growth and decay in the near- and intermediate-wake regions, not to mention the interaction of these vortices with each other. Resolving and tracking the evolution and interaction of these vortices shed from complex bodies is computationally intensive. Parallel computing technology is an attractive option in solving these flows. In planetary science vortical flows are also important in studying how planets and protoplanets form when cosmic dust and gases become gravitationally unstable and eventually form planets or protoplanets. The current paradigm for the formation of planetary systems maintains that the planets accreted from the nebula of gas and dust left over from the formation of the Sun. Traditional theory also indicates that such a preplanetary nebula took the form of a flattened disk. The coagulation of dust led to the settling of aggregates toward the midplane of the disk, where they grew further into asteroid-like planetesimals. Some of the issues still remaining in this process are the onset of gravitational instability, the role of turbulence in the damping of particles and radial effects. In this study the focus will be on the role of turbulence and the radial effects.

  14. UNIX based client/server hospital information system.

    PubMed

    Nakamura, S; Sakurai, K; Uchiyama, M; Yoshii, Y; Tachibana, N

    1995-01-01

    SMILE (St. Luke's Medical Center Information Linkage Environment) is a HIS which is a client/server system using UNIX workstations under an open network, a LAN (FDDI & 10BASE-T). It provides a multivendor environment, high performance with low cost, and a user-friendly GUI. However, the client/server architecture with a UNIX workstation does not have the same OLTP environment (e.g., a TP monitor) as the mainframe. Our system problems and the steps used to solve them are therefore reviewed. Several points that will be necessary for a client/server system with UNIX workstations in the future are presented.

  15. Pc as Physics Computer for Lhc ?

    NASA Astrophysics Data System (ADS)

    Jarp, Sverre; Simmins, Antony; Tang, Hong; Yaari, R.

    In the last five years, we have seen RISC workstations take over the computing scene that was once controlled by mainframes and supercomputers. In this paper we will argue that the same phenomenon might happen again. A project, active since March of this year in the Physics Data Processing group of CERN's CN division, is described in which ordinary desktop PCs running Windows (NT and 3.11) have been used to create an environment for running large LHC batch jobs (initially the DICE simulation job of Atlas). The problems encountered in porting both the CERN library and the specific Atlas codes are described, together with some encouraging benchmark results compared to existing RISC workstations in use by the Atlas collaboration. The issues of establishing the batch environment (batch monitor, staging software, etc.) are also covered. Finally, a quick extrapolation of the commodity computing power available in the future is touched upon to indicate what kind of cost envelope could be sufficient for the simulation farms required by the LHC experiments.

  16. ASTEC and MODEL: Controls software development at Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Downing, John P.; Bauer, Frank H.; Surber, Jeffrey L.

    1993-01-01

    The ASTEC (Analysis and Simulation Tools for Engineering Controls) software has been under development at the Goddard Space Flight Center (GSFC) for the last three years. The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. ASTEC is meant to be an integrated collection of controls analysis tools for use at the desktop level. MODEL (Multi-Optimal Differential Equation Language) is a translator that converts programs written in the MODEL language to FORTRAN. An upgraded version of the MODEL program will be merged into ASTEC. MODEL has not been modified since 1981 and has not kept pace with changes in computers or user interface techniques. This paper describes the changes made to MODEL in order to make it useful in the 1990s and how it relates to ASTEC.

  17. Using the network to achieve energy efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giglio, M.

    1995-12-01

    Novell, the third largest software company in the world, has developed Netware Embedded Systems Technology (NEST). NEST will take the network deeper into non-traditional computing environments and will imbed networking into more intelligent devices. Ultimately, this will lead to energy efficiencies in the office. NEST can make point-of-sale terminals, alarm systems, televisions, traffic controls, printers, lights, fax machines, copiers, HVAC controls, PBX machines, etc., either intelligent or more intelligent than they are currently. The mission statement for this particular group is to integrate over 30 million new intelligent devices into the workplace and the home with Novell networks by 1997. Computing trends have progressed from mainframes in the 1960s to keys, security systems, and airplanes in the year 2000. In fact, the new Boeing 777 has NEST in it, and it also has network servers on board. NEST enables the embedded network with the ability to put intelligence into devices. This gives one more control of the devices from wherever one is. For example, the pharmaceutical industry could use NEST to coordinate what the consumer is buying, what is in the warehouse, what the manufacturing plant is tooled for, and so on. Through NEST technology, the pharmaceutical industry now uses a camera that takes pictures of the pills. It can see whether an "overdose" or "underdose" of a particular type of pill is being manufactured. The plant can be shut down and corrections made immediately.

  18. The IBM PC at NASA Ames

    NASA Technical Reports Server (NTRS)

    Peredo, James P.

    1988-01-01

    Like many large companies, Ames relies heavily on its computing power to get work done, and, like many other large companies, it finds the IBM PC a reliable tool and uses it for many of the same types of functions. Presentation and clarification needs demand much of graphics packages. Programming and text editing needs require simpler yet more powerful packages. The storage space needed by NASA's scientists and users for the monumental amounts of data that Ames must keep demands large, easy-to-use database packages. Access to the Micom Switching Network combines the power of the IBM PC with the capabilities of other computers and mainframes and allows users to communicate electronically. These four primary capabilities of the PC are vital to the needs of NASA's users and help sustain the vast amount of work done by NASA employees.

  19. Assessment of radionuclide databases in CAP88 mainframe version 1.0 and Windows-based version 3.0.

    PubMed

    LaBone, Elizabeth D; Farfán, Eduardo B; Lee, Patricia L; Jannik, G Timothy; Donnelly, Elizabeth H; Foley, Trevor Q

    2009-09-01

    In this study the radionuclide databases for two versions of the Clean Air Act Assessment Package-1988 (CAP88) computer model were assessed in detail. CAP88 estimates radiation dose and the risk of health effects to human populations from radionuclide emissions to air. This program is used by several U.S. Department of Energy (DOE) facilities to comply with National Emission Standards for Hazardous Air Pollutants regulations. CAP88 Mainframe, referred to as version 1.0 on the U.S. Environmental Protection Agency Web site (http://www.epa.gov/radiation/assessment/CAP88/), was the very first CAP88 version released in 1988. Some DOE facilities including the Savannah River Site still employ this version (1.0) while others use the more user-friendly personal computer Windows-based version 3.0 released in December 2007. Version 1.0 uses the program RADRISK based on International Commission on Radiological Protection Publication 30 as its radionuclide database. Version 3.0 uses half-life, dose, and risk factor values based on Federal Guidance Report 13. Differences in these values could cause different results for the same input exposure data (same scenario), depending on which version of CAP88 is used. Consequently, the differences between the two versions are being assessed in detail at Savannah River National Laboratory. The version 1.0 and 3.0 database files contain 496 and 838 radionuclides, respectively, and though one would expect the newer version to include all the 496 radionuclides, 35 radionuclides are listed in version 1.0 that are not included in version 3.0. The majority of these have either extremely short or very long half-lives or are no longer produced; however, some of the short-lived radionuclides might produce progeny of great interest at DOE sites. In addition, 122 radionuclides were found to have different half-lives in the two versions, differing by more than 3 percent in 21 cases and by more than 10 percent in 12 cases.
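
    The kind of database comparison performed here (radionuclides present in only one version, plus percent differences in half-lives for the nuclides common to both) can be sketched as follows; the dictionaries stand in for the version 1.0 and 3.0 files, and every value, as well as the nuclide name "Xx-100", is hypothetical rather than CAP88 data.

    # Sketch of the comparison described above; half-lives are in seconds and
    # all values (including the fictitious nuclide "Xx-100") are made up.
    v1 = {"H-3": 3.89e8, "Co-60": 1.66e8, "Xx-100": 120.0}
    v3 = {"H-3": 3.89e8, "Co-60": 1.66e8, "Xx-100": 135.0, "Cs-137": 9.49e8}

    only_in_v1 = sorted(set(v1) - set(v3))          # nuclides dropped in version 3.0
    only_in_v3 = sorted(set(v3) - set(v1))          # nuclides added in version 3.0

    diffs = {}
    for nuclide in set(v1) & set(v3):
        pct = 100.0 * abs(v1[nuclide] - v3[nuclide]) / v1[nuclide]
        if pct > 3.0:                               # the >3 percent screen used in the study
            diffs[nuclide] = round(pct, 2)

    print(only_in_v1, only_in_v3, diffs)            # [] ['Cs-137'] {'Xx-100': 12.5}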

  20. Data Recording Room in the 10-by 10-Foot Supersonic Wind Tunnel

    NASA Image and Video Library

    1973-04-21

    The test data recording equipment located in the office building of the 10-by 10-Foot Supersonic Wind Tunnel at the NASA Lewis Research Center. The data system was the state of the art when the facility began operating in 1955 and was upgraded over time. NASA engineers used solenoid valves to measure pressures from different locations within the test section. Up to 48 measurements could be fed into a single transducer. The 10-by 10 data recorders could handle up to 200 data channels at once. The Central Automatic Digital Data Encoder (CADDE) converted this direct current raw data from the test section into digital format on magnetic tape. The digital information was sent to the Lewis Central Computer Facility for additional processing. It could also be displayed in the control room via strip charts or oscillographs. The 16-by 56-foot long ERA 1103 UNIVAC mainframe computer processed most of the digital data. The paper tape with the raw data was fed into the ERA 1103 which performed the needed calculations. The information was then sent back to the control room. There was a lag of several minutes before the computed information was available, but it was far faster than the hand calculations performed by the female computers. The 10- by 10-foot tunnel, which had its official opening in May 1956, was built under the Congressional Unitary Plan Act which coordinated wind tunnel construction at the NACA, Air Force, industry, and universities. The 10- by 10 was the largest of the three NACA tunnels built under the act.

  1. MULTIVARIATE ANALYSIS OF DRINKING BEHAVIOUR IN A RURAL POPULATION

    PubMed Central

    Mathrubootham, N.; Bashyam, V.S.P.; Shahjahan

    1997-01-01

    This study was carried out to determine the drinking pattern in a rural population using multivariate techniques. 386 current users identified in a community were assessed with regard to their drinking behaviours using a structured interview. For purposes of the study the questions were condensed into 46 meaningful variables. In bivariate analysis, 14 variables, including dependent variables such as dependence, MAST & CAGE (measuring alcoholic status), Q.F. Index and troubled drinking, were found to be significant. Using these variables, multivariate techniques such as ANOVA, correlation, regression analysis and factor analysis were applied on both SPSS PC+ and an HCL Magnum mainframe computer with the FOCUS package under UNIX. Results revealed that a number of factors, such as drinking style, duration of drinking, pattern of abuse, Q.F. Index and various problems, influenced drinking, and that some of them set up a vicious circle. Factor analysis revealed mainly three factors: abuse, dependence and social drinking. Dependence could be divided into low/moderate dependence. The implications and practical applications of these tests are also discussed. PMID:21584077
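
    The factor-analysis step can be sketched with standard tooling; the data below are synthetic, generated to carry a simple three-factor structure that merely mirrors the three factors reported, and nothing here reproduces the study's SPSS/FOCUS runs.

    # Sketch of a factor-analysis workflow (not the original analysis); the
    # item responses are synthetic, built from three latent factors plus noise.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(42)
    n_subjects, n_items, n_factors = 386, 12, 3
    latent = rng.standard_normal((n_subjects, n_factors))     # e.g., abuse, dependence, social
    loadings = rng.standard_normal((n_factors, n_items))
    items = latent @ loadings + 0.5 * rng.standard_normal((n_subjects, n_items))

    fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(items)
    print(fa.components_.shape)        # (3, 12): loading of each item on each factor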

  2. A report on the ST ScI optical disk workstation

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The STScI optical disk project was designed to explore the options, opportunities and problems presented by optical disk technology, and to see if optical disks are a viable and inexpensive means of storing the large amounts of data found in astronomical digital imagery. A separate workstation was purchased on which the development could be done; it serves as an astronomical image processing computer, incorporating the optical disks into the solution of standard image processing tasks. It is indicated that small workstations can be powerful tools for image processing, and that astronomical image processing may be more conveniently and cost-effectively performed on microcomputers than on mainframes and super-minicomputers. The optical disks provide unique capabilities in data storage.

  3. Re-engineering Nascom's network management architecture

    NASA Technical Reports Server (NTRS)

    Drake, Brian C.; Messent, David

    1994-01-01

    The development of Nascom systems for ground communications began in 1958 with Project Vanguard. The low-speed systems (rates less than 9.6 Kbs) were developed following existing standards; but, there were no comparable standards for high-speed systems. As a result, these systems were developed using custom protocols and custom hardware. Technology has made enormous strides since the ground support systems were implemented. Standards for computer equipment, software, and high-speed communications exist and the performance of current workstations exceeds that of the mainframes used in the development of the ground systems. Nascom is in the process of upgrading its ground support systems and providing additional services. The Message Switching System (MSS), Communications Address Processor (CAP), and Multiplexer/Demultiplexer (MDM) Automated Control System (MACS) are all examples of Nascom systems developed using standards such as, X-windows, Motif, and Simple Network Management Protocol (SNMP). Also, the Earth Observing System (EOS) Communications (Ecom) project is stressing standards as an integral part of its network. The move towards standards has produced a reduction in development, maintenance, and interoperability costs, while providing operational quality improvement. The Facility and Resource Manager (FARM) project has been established to integrate the Nascom networks and systems into a common network management architecture. The maximization of standards and implementation of computer automation in the architecture will lead to continued cost reductions and increased operational efficiency. The first step has been to derive overall Nascom requirements and identify the functionality common to all the current management systems. The identification of these common functions will enable the reuse of processes in the management architecture and promote increased use of automation throughout the Nascom network. The MSS, CAP, MACS, and Ecom projects have indicated the potential value of commercial-off-the-shelf (COTS) and standards through reduced cost and high quality. The FARM will allow the application of the lessons learned from these projects to all future Nascom systems.

  4. GRID2D/3D: A computer program for generating grid systems in complex-shaped two- and three-dimensional spatial domains. Part 2: User's manual and program listing

    NASA Technical Reports Server (NTRS)

    Bailey, R. T.; Shih, T. I.-P.; Nguyen, H. L.; Roelke, R. J.

    1990-01-01

    An efficient computer program, called GRID2D/3D, was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to the second-order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to the second-order except at interfaces where different single grid systems meet. At interfaces where different single grid systems meet, the grid lines are only differentiable up to the first-order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coon's interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer. In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part of the program. The theory and method used in GRID2D/3D is described.

  5. GRID2D/3D: A computer program for generating grid systems in complex-shaped two- and three-dimensional spatial domains. Part 1: Theory and method

    NASA Technical Reports Server (NTRS)

    Shih, T. I.-P.; Bailey, R. T.; Nguyen, H. L.; Roelke, R. J.

    1990-01-01

    An efficient computer program, called GRID2D/3D was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to the second-order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to the second-order except at interfaces where different single grid systems meet. At interfaces where different single grid systems meet, the grid lines are only differentiable up to the first-order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coon's interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer. In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part of the program. This technical memorandum describes the theory and method used in GRID2D/3D.
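
    The algebraic grid-generation step at the core of GRID2D/3D, transfinite interpolation between boundary curves, can be illustrated for a single 2-D patch; this is a generic Coons-style sketch with toy boundary curves, not GRID2D/3D's FORTRAN, and stretching functions and deforming domains are omitted.

    # Generic 2-D transfinite (Coons) interpolation between four boundary curves,
    # in the spirit of the algebraic method described above.
    import numpy as np

    def transfinite_grid(bottom, top, left, right, nu=11, nv=11):
        """bottom/top: f(u)->(x,y) for u in [0,1]; left/right: f(v)->(x,y)."""
        u = np.linspace(0.0, 1.0, nu)[:, None, None]                  # shape (nu,1,1)
        v = np.linspace(0.0, 1.0, nv)[None, :, None]                  # shape (1,nv,1)
        B = np.array([bottom(s) for s in u[:, 0, 0]])[:, None, :]     # (nu,1,2)
        T = np.array([top(s) for s in u[:, 0, 0]])[:, None, :]
        L = np.array([left(s) for s in v[0, :, 0]])[None, :, :]       # (1,nv,2)
        R = np.array([right(s) for s in v[0, :, 0]])[None, :, :]
        P00, P10, P01, P11 = bottom(0.0), bottom(1.0), top(0.0), top(1.0)
        # boundary blending minus the doubly counted corner contributions
        grid = ((1 - v) * B + v * T + (1 - u) * L + u * R
                - ((1 - u) * (1 - v) * np.array(P00) + u * (1 - v) * np.array(P10)
                   + (1 - u) * v * np.array(P01) + u * v * np.array(P11)))
        return grid                                                   # shape (nu, nv, 2)

    # toy domain: straight sides, curved top
    grid = transfinite_grid(
        bottom=lambda s: (s, 0.0),
        top=lambda s: (s, 1.0 + 0.2 * np.sin(np.pi * s)),
        left=lambda s: (0.0, s),
        right=lambda s: (1.0, s))
    print(grid.shape, grid[5, 5])      # an interior point of the 11x11 grid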

  6. HIS priorities in developing countries.

    PubMed

    Amado Espinosa, L

    1995-04-01

    Looking for a solution to fulfill the requirements that the new global economical system demands, developing countries face a reality of poor communications infrastructure, a delay in applying information technology to the organizations, and a semi-closed political system avoiding the necessary reforms. HIS technology has been developed more for transactional purposes on mini and mainframe platforms. Administrative modules are the most frequently observed and physicians are now requiring more support for their activities. The second information systems generation will take advantage of PC technology, client-server models and telecommunications to achieve integration. International organizations, academic and industrial, public and private, will play a major role to transfer technology and to develop this area.

  7. Modelling milk production from feed intake in dairy cattle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clarke, D.L.

    1985-05-01

    Predictive models were developed for both Holstein and Jersey cows. Since Holsteins comprised eighty-five percent of the data, the predictive models developed for Holsteins were used for the development of a user-friendly computer model. Predictive models included: milk production (squared multiple correlation .73), natural log (ln) of milk production (.73), four percent fat-corrected milk (.67), ln four percent fat-corrected milk (.68), fat-free milk (.73), ln fat-free milk (.73), dry matter intake (.61), ln dry matter intake (.60), milk fat (.52), and ln milk fat (.56). The predictive models for ln milk production, ln fat-free milk and ln dry matter intake were incorporated into a computer model. The model was written in standard Fortran for use on mainframe or micro-computers. Daily milk production, fat-free milk production, and dry matter intake were predicted on a daily basis with the previous day's dry matter intake serving as an independent variable in the prediction of the daily milk and fat-free milk production. 21 refs.
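
    The day-by-day prediction loop described above, in which the previous day's dry matter intake feeds the next day's milk prediction, can be sketched as follows; the coefficient values and functional forms are placeholders, not the fitted regression coefficients from this work.

    # Sketch of the daily prediction loop: yesterday's dry matter intake (DMI)
    # is an input to today's predicted milk yield, and today's predicted DMI
    # becomes tomorrow's input. All coefficients are invented placeholders.
    import math

    def predict_ln_dmi(day, body_wt):
        return 0.8 + 0.002 * body_wt + 0.05 * math.log(day)           # placeholder model

    def predict_ln_milk(day, body_wt, prev_dmi):
        return 1.0 + 0.04 * prev_dmi + 0.001 * body_wt - 0.002 * day  # placeholder model

    def lactation_curve(days, body_wt=600.0):
        prev_dmi, milk = math.exp(predict_ln_dmi(1, body_wt)), []
        for day in range(1, days + 1):
            milk.append(math.exp(predict_ln_milk(day, body_wt, prev_dmi)))
            prev_dmi = math.exp(predict_ln_dmi(day, body_wt))          # tomorrow's input
        return milk

    print([round(m, 1) for m in lactation_curve(5)])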

  8. Two-dimensional nonsteady viscous flow simulation on the Navier-Stokes computer miniNode

    NASA Technical Reports Server (NTRS)

    Nosenchuck, Daniel M.; Littman, Michael G.; Flannery, William

    1986-01-01

    The needs of large-scale scientific computation are outpacing the growth in performance of mainframe supercomputers. In particular, problems in fluid mechanics involving complex flow simulations require far more speed and capacity than that provided by current and proposed Class VI supercomputers. To address this concern, the Navier-Stokes Computer (NSC) was developed. The NSC is a parallel-processing machine, comprised of individual Nodes, each comparable in performance to current supercomputers. The global architecture is that of a hypercube, and a 128-Node NSC has been designed. New architectural features, such as a reconfigurable many-function ALU pipeline and a multifunction memory-ALU switch, have provided the capability to efficiently implement a wide range of algorithms. Efficient algorithms typically involve numerically intensive tasks, which often include conditional operations. These operations may be efficiently implemented on the NSC without, in general, sacrificing vector-processing speed. To illustrate the architecture, programming, and several of the capabilities of the NSC, the simulation of two-dimensional, nonsteady viscous flows on a prototype Node, called the miniNode, is presented.

  9. Remote control canard missile with a free-rolling tail brake torque system

    NASA Technical Reports Server (NTRS)

    Blair, A. B., Jr.

    1981-01-01

    An experimental wind-tunnel investigation has been conducted at supersonic Mach numbers to determine the static aerodynamic characteristics of a cruciform canard-controlled missile with fixed and free-rolling tail-fin afterbodies. Mechanical coupling effects of the free-rolling tail afterbody were investigated using an electronic/electromagnetic brake system that provides arbitrary tail-fin brake torques with continuous measurements of tail-to-mainframe torque and tail-roll rate. Results are summarized to show the effects of fixed and free-rolling tail-fin afterbodies that include simulated measured bearing friction torques on the longitudinal and lateral-directional aerodynamic characteristics.

  10. Pre- and post-processing for Cosmic/NASTRAN on personal computers and mainframes

    NASA Technical Reports Server (NTRS)

    Kamel, H. A.; Mobley, A. V.; Nagaraj, B.; Watkins, K. W.

    1986-01-01

    An interface between Cosmic/NASTRAN and GIFTS has recently been released, combining the powerful pre- and post-processing capabilities of GIFTS with Cosmic/NASTRAN's analysis capabilities. The interface operates on a wide range of computers, even linking Cosmic/NASTRAN and GIFTS when the two are on different computers. GIFTS offers a wide range of elements for use in model construction, each translated by the interface into the nearest Cosmic/NASTRAN equivalent; and the options of automatic or interactive modelling and loading in GIFTS make pre-processing easy and effective. The interface itself includes the programs GFTCOS, which creates the Cosmic/NASTRAN input deck (and, if desired, control deck) from the GIFTS Unified Data Base; COSGFT, which translates the displacements from the Cosmic/NASTRAN analysis back into GIFTS; and HOSTR, which handles stress computations for a few higher-order elements available in the interface, but not supported by the GIFTS processor STRESS. Finally, the versatile display options in GIFTS post-processing allow the user to examine the analysis results through an especially wide range of capabilities, including such possibilities as creating composite loading cases, plotting in color and animating the analysis.

  11. City public service learns to speed read. [Computerized routing system for meter reading]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aitken, E.L.

    1994-02-01

    City Public Service (CPS) of San Antonio, TX is a municipally owned utility that serves a densely populated 1,566 square miles in and around San Antonio. CPS's service area is divided into 21 meter reading districts, each of which is broken down into no more than 99 regular routes. Every day, a CPS employee reads one of the districts, following one or more routes. In 1991, CPS began using handheld computers to record reads for regular routes, which are stored on the devices themselves. In contrast, rereads and final reads occur at random throughout the service area. Because they change every day, the process of creating routes that can be loaded onto a handheld device is difficult. Until recently, rereads and final reads were printed on paper orders, and route schedulers would spend close to two hours sorting the paper orders into routes. Meter readers would then hand-sequence the orders on their routes, often using a city map, before taking them into the field in stacks. When the meter readers returned, their completed orders had to be separated by type of reread, and then keyed into the mainframe computer before bill processing could begin. CPS's data processing department developed a computerized routing system of its own that saves time and labor, as well as paper. The system eliminates paper orders entirely, enabling schedulers to create reread and final read routes graphically on a PC. Information no longer needs to be keyed from hard copy, reducing the margin of error and streamlining bill processing by incorporating automated data transfer between systems.

  12. The Air Force's central reference laboratory: maximizing service while minimizing cost.

    PubMed

    Armbruster, D A

    1991-11-01

    The Laboratory Services Branch (Epi Lab) of the Epidemiology Division, Brooks AFB, Texas, is designated by regulation to serve as the Air Force's central reference laboratory, providing clinical laboratory testing support to all Air Force medical treatment facilities (MTFs). Epi Lab recognized that it was not offering the MTFs a service comparable to civilian reference laboratories and that, as a result, the Air Force medical system was spending hundreds of thousands of dollars yearly for commercial laboratory support. An in-house laboratory upgrade program was proposed to and approved by the USAF Surgeon General, as a Congressional Efficiencies Add project, to launch a two-phase initiative consisting of a 1-year field trial of 30 MTFs, followed by expansion to another 60 MTFs. Major components of the program include overnight air courier service to deliver patient samples to Epi Lab, a mainframe computer laboratory information system and electronic reporting of results to the MTFs throughout the CONUS. Application of medical marketing concepts and the Total Quality Management (TQM) philosophy allowed Epi to provide dramatically enhanced reference service at a cost savings of about $1 million to the medical system. The Epi Lab upgrade program represents an innovative problem-solving approach, combining technical and managerial improvements, resulting in substantial patient care service and financial dividends. It serves as an example of successful application of TQM and marketing within the military medical system.

  13. Dense wavelength division multiplexing devices for metropolitan-area datacom and telecom networks

    NASA Astrophysics Data System (ADS)

    DeCusatis, Casimer M.; Priest, David G.

    2000-12-01

    Large data processing environments in use today can require multi-gigabyte or terabyte capacity in the data communication infrastructure; these requirements are being driven by storage area networks with access to petabyte data bases, new architectures for parallel processing which require high bandwidth optical links, and rapidly growing network applications such as electronic commerce over the Internet or virtual private networks. These datacom applications require high availability, fault tolerance, security, and the capacity to recover from any single point of failure without relying on traditional SONET-based networking. These requirements, coupled with fiber exhaust in metropolitan areas, are driving the introduction of dense optical wavelength division multiplexing (DWDM) in data communication systems, particularly for large enterprise servers or mainframes. In this paper, we examine the technical requirements for emerging next-generation DWDM systems. Protocols for storage area networks and computer architectures such as Parallel Sysplex are presented, including their fiber bandwidth requirements. We then describe two commercially available DWDM solutions, a first generation 10 channel system and a recently announced next generation 32 channel system. Technical requirements, network management and security, fault tolerant network designs, new network topologies enabled by DWDM, and the role of time division multiplexing in the network are all discussed. Finally, we present a description of testing conducted on these networks and future directions for this technology.

  14. CD-ROM technology at the EROS data center

    USGS Publications Warehouse

    Madigan, Michael E.; Weinheimer, Mary C.

    1993-01-01

    The vast amount of digital spatial data often required by a single user has created a demand for media alternatives to 1/2" magnetic tape. One such medium that has been recently adopted at the U.S. Geological Survey's EROS Data Center is the compact disc (CD). CD's are a versatile, dynamic, and low-cost method for providing a variety of data on a single media device and are compatible with various computer platforms. CD drives are available for personal computers, UNIX workstations, and mainframe systems, either directly connected, or through a network. This medium furnishes a quick method of reproducing and distributing large amounts of data on a single CD. Several data sets are already available on CD's, including collections of historical Landsat multispectral scanner data and biweekly composites of Advanced Very High Resolution Radiometer data for the conterminous United States. The EROS Data Center intends to provide even more data sets on CD's. Plans include specific data sets on a customized disc to fulfill individual requests, and mass production of unique data sets for large-scale distribution. Requests for a single compact disc-read only memory (CD-ROM) containing a large volume of data either for archiving or for one-time distribution can be addressed with a CD-write once (CD-WO) unit. Mass production and large-scale distribution will require CD-ROM replication and mastering.

  15. Normalizing the causality between time series.

    PubMed

    Liang, X San

    2015-08-01

    Recently, a rigorous yet concise formula was derived to evaluate information flow, and hence the causality in a quantitative sense, between time series. To assess the importance of a resulting causality, it needs to be normalized. The normalization is achieved through distinguishing a Lyapunov exponent-like, one-dimensional phase-space stretching rate and a noise-to-signal ratio from the rate of information flow in the balance of the marginal entropy evolution of the flow recipient. It is verified with autoregressive models and applied to a real financial analysis problem. An unusually strong one-way causality is identified from IBM (International Business Machines Corporation) to GE (General Electric Company) in their early era, revealing to us an old story, which has almost faded into oblivion, about "Seven Dwarfs" competing with a giant for the mainframe computer market.
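
    The "rigorous yet concise formula" referred to in this record has a widely cited bivariate estimator built from sample covariances. The sketch below computes that raw, unnormalized information flow under the assumption that the estimator takes the standard Liang (2014) covariance form; the normalization itself (the phase-space stretching rate and noise-to-signal terms described in the abstract) is not reproduced here, and the two synthetic series are purely illustrative.

```python
# Hedged sketch: bivariate information flow T(x2 -> x1) between two time series,
# assuming the commonly cited covariance-based maximum-likelihood estimator.
# The normalization discussed in the abstract is intentionally omitted.
import numpy as np

def info_flow_2_to_1(x1, x2, dt=1.0):
    """Estimate the (unnormalized) rate of information flow from series x2 to series x1."""
    dx1 = (x1[1:] - x1[:-1]) / dt              # Euler-forward tendency of x1
    x1s, x2s = x1[:-1], x2[:-1]                # align with the tendency series
    c11 = np.var(x1s, ddof=1)
    c22 = np.var(x2s, ddof=1)
    c12 = np.cov(x1s, x2s)[0, 1]
    c1d1 = np.cov(x1s, dx1)[0, 1]
    c2d1 = np.cov(x2s, dx1)[0, 1]
    num = c11 * c12 * c2d1 - c12 ** 2 * c1d1
    den = c11 ** 2 * c22 - c11 * c12 ** 2
    return num / den

rng = np.random.default_rng(1)
x2 = np.cumsum(rng.normal(size=2000))                    # a driving series
x1 = 0.5 * np.roll(x2, 1) + rng.normal(size=2000)        # x1 partly driven by lagged x2
print(info_flow_2_to_1(x1, x2))                          # expected to differ clearly from zero
```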

  16. Compiler-assisted multiple instruction rollback recovery using a read buffer

    NASA Technical Reports Server (NTRS)

    Alewine, N. J.; Chen, S.-K.; Fuchs, W. K.; Hwu, W.-M.

    1993-01-01

    Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper focuses on compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard removal transformations. Experimental performance evaluations indicate improved efficiency over previous hardware-based and compiler-based schemes.

  17. Normalizing the causality between time series

    NASA Astrophysics Data System (ADS)

    Liang, X. San

    2015-08-01

    Recently, a rigorous yet concise formula was derived to evaluate information flow, and hence the causality in a quantitative sense, between time series. To assess the importance of a resulting causality, it needs to be normalized. The normalization is achieved through distinguishing a Lyapunov exponent-like, one-dimensional phase-space stretching rate and a noise-to-signal ratio from the rate of information flow in the balance of the marginal entropy evolution of the flow recipient. It is verified with autoregressive models and applied to a real financial analysis problem. An unusually strong one-way causality is identified from IBM (International Business Machines Corporation) to GE (General Electric Company) in their early era, revealing to us an old story, which has almost faded into oblivion, about "Seven Dwarfs" competing with a giant for the mainframe computer market.

  18. Compiler-assisted multiple instruction rollback recovery using a read buffer

    NASA Technical Reports Server (NTRS)

    Alewine, Neal J.; Chen, Shyh-Kwei; Fuchs, W. Kent; Hwu, Wen-Mei W.

    1995-01-01

    Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper describes compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. The compiler-assisted scheme presented consists of hardware that is less complex than shadow files, history files, history buffers, or delayed write buffers, while experimental evaluation indicates performance improvement over compiler-based schemes.

  19. Arterial Catheterization

    MedlinePlus

    ... and Their Families , ATS Website: www.thoracic.org/assemblies/cc/ccprimer/mainframe2.html Additional Information American Thoracic ... Have the ICU nurse show you how the line is bandaged and how it is watched to ...

  20. The geo-control system for station keeping and colocation of geostationary satellites

    NASA Technical Reports Server (NTRS)

    Montenbruck, O.; Eckstein, M. C.; Gonner, J.

    1993-01-01

    GeoControl is a compact but powerful and accurate software system for station keeping of single and colocated satellites, which has been developed at the German Space Operations Center. It includes four core modules for orbit determination (including maneuver estimation), maneuver planning, monitoring of proximities between colocated satellites, and interference and event prediction. A simple database containing state vector and maneuver information at selected epochs is maintained as a central interface between the modules. A menu driven shell utilizing form screens for data input serves as the central user interface. The software is written in Ada and FORTRAN and may be used on VAX workstations or mainframes under the VMS operating system.

  1. Applied Research Study

    NASA Technical Reports Server (NTRS)

    Leach, Ronald J.

    1997-01-01

    The purpose of this project was to study the feasibility of reusing major components of a software system that had been used to control the operations of a spacecraft launched in the 1980s. The study was done in the context of a ground data processing system that was to be rehosted from a large mainframe to an inexpensive workstation. The study concluded that a systematic approach using inexpensive tools could aid in the reengineering process by identifying a set of certified reusable components. The study also developed procedures for determining duplicate versions of software, which were created because of inadequate naming conventions. Such procedures reduced reengineering costs by approximately 19.4 percent.

  2. Laboratory Information Systems.

    PubMed

    Henricks, Walter H

    2015-06-01

    Laboratory information systems (LISs) supply mission-critical capabilities for the vast array of information-processing needs of modern laboratories. LIS architectures include mainframe, client-server, and thin client configurations. The LIS database software manages a laboratory's data. LIS dictionaries are database tables that a laboratory uses to tailor an LIS to the unique needs of that laboratory. Anatomic pathology LIS (APLIS) functions play key roles throughout the pathology workflow, and laboratories rely on LIS management reports to monitor operations. This article describes the structure and functions of APLISs, with emphasis on their roles in laboratory operations and their relevance to pathologists.

  3. A Collaborative Knowledge Management Process for Implementing Healthcare Enterprise Information Systems

    NASA Astrophysics Data System (ADS)

    Cheng, Po-Hsun; Chen, Sao-Jie; Lai, Jin-Shin; Lai, Feipei

    This paper illustrates a feasible health informatics domain knowledge management process which helps gather useful technology information and reduce many knowledge misunderstandings among engineers who have participated in the IBM mainframe rightsizing project at National Taiwan University (NTU) Hospital. We design an asynchronously sharing mechanism to facilitate the knowledge transfer and our health informatics domain knowledge management process can be used to publish and retrieve documents dynamically. It effectively creates an acceptable discussion environment and even lessens the traditional meeting burden among development engineers. An overall description on the current software development status is presented. Then, the knowledge management implementation of health information systems is proposed.

  4. s95-16445

    NASA Image and Video Library

    2014-08-07

    S95-16445 (13-22 July 1995) --- A wide angle view from the rear shows activity in the new Mission Control Center (MCC), opened for operation and dedicated during the STS-70 mission. The Space Shuttle Discovery was just passing over Florida at the time this photo was taken (note Mercator map and TV scene on screens). The new MCC, developed at a cost of about $50 million, replaces the mainframe-based, NASA-unique design of the old Mission Control with a standard workstation-based, local area network system commonly in use today.

  5. Quantifying the potential export flows of used electronic products in Macau: a case study of PCs.

    PubMed

    Yu, Danfeng; Song, Qingbin; Wang, Zhishi; Li, Jinhui; Duan, Huabo; Wang, Jinben; Wang, Chao; Wang, Xu

    2017-12-01

    Used electronic products (UEPs) have attracted worldwide attention because some e-waste may be exported from developed countries to developing countries in the name of UEPs. On the basis of extensive foreign trade data on electronic products (e-products), this study adopted the trade data approach (TDA) to quantify the potential exports of UEPs in Macau, using personal computers (PCs) as a case study. The results show that desktop mainframes, LCD monitors, and CRT monitors had more low-unit-value trades with higher trade volumes over the past 10 years, while laptop and tablet PCs, as the newer technologies, had higher ratios of high-unit-value trades. During the period 2005-2015, the total mean exports of used laptop and tablet PCs, desktop mainframes, and LCD monitors were approximately 18,592, 79,957, and 43,177 units, respectively, while the possible export volume of used CRT monitors was higher, up to 430,098 units in 2000-2010. Note that these potential export volumes may be lower bounds, because not all used PCs are necessarily shipped under the PC trade code. For all four kinds of used PCs, the majority (61.6-98.82%) of the export volumes went to Hong Kong, followed by Mainland China and Taiwan. Since 2011 there have been no CRT monitor exports; however, exports of the other kinds of used PCs are expected to continue in Macau. These outcomes are helpful for understanding and managing the current export situation for used products in Macau, and can also provide a reference for other countries and regions.

  6. The Electronic Hermit: Trends in Library Automation.

    ERIC Educational Resources Information Center

    LaRue, James

    1988-01-01

    Reviews trends in library software development including: (1) microcomputer applications; (2) CD-ROM; (3) desktop publishing; (4) public access microcomputers; (5) artificial intelligence; (6) mainframes and minicomputers; and (7) automated catalogs. (MES)

  7. Constraints and Opportunities in GCM Model Development

    NASA Technical Reports Server (NTRS)

    Schmidt, Gavin; Clune, Thomas

    2010-01-01

    Over the past 30 years climate models have evolved from relatively simple representations of a few atmospheric processes to complex multi-disciplinary system models which incorporate physics from the bottom of the ocean to the mesopause and are used for seasonal to multi-million year timescales. Computer infrastructure over that period has gone from punchcard mainframes to modern parallel clusters. Constraints of working within an ever evolving research code mean that most software changes must be incremental so as not to disrupt scientific throughput. Unfortunately, programming methodologies have generally not kept pace with these challenges, and existing implementations now present a heavy and growing burden on further model development as well as limiting flexibility and reliability. Opportunely, advances in software engineering from other disciplines (e.g. the commercial software industry) as well as new generations of powerful development tools can be incorporated by the model developers to incrementally and systematically improve underlying implementations and reverse the long term trend of increasing development overhead. However, these methodologies cannot be applied blindly, but rather must be carefully tailored to the unique characteristics of scientific software development. We will discuss the need for close integration of software engineers and climate scientists to find the optimal processes for climate modeling.

  8. Cooperative processing data bases

    NASA Technical Reports Server (NTRS)

    Hasta, Juzar

    1991-01-01

    Cooperative processing for the 1990's using client-server technology is addressed. The main theme is concepts of downsizing from mainframes and minicomputers to workstations on a local area network (LAN). This document is presented in view graph form.

  9. Finite Element Analysis (FEA) in Design and Production.

    ERIC Educational Resources Information Center

    Waggoner, Todd C.; And Others

    1995-01-01

    Finite element analysis (FEA) enables industrial designers to analyze complex components by dividing them into smaller elements, then assessing stress and strain characteristics. Traditionally mainframe-based, FEA is increasingly being used on microcomputers. (SK)

  10. Marshal Wrubel and the Electronic Computer as an Astronomical Instrument

    NASA Astrophysics Data System (ADS)

    Mutschlecner, J. P.; Olsen, K. H.

    1998-05-01

    In 1960, Marshal H. Wrubel, professor of astrophysics at Indiana University, published an influential review paper under the title, "The Electronic Computer as an Astronomical Instrument." This essay pointed out the enormous potential of the electronic computer as an instrument of observational and theoretical research in astronomy, illustrated programming concepts, and made specific recommendations for the increased use of computers in astronomy. He noted that, with a few scattered exceptions, computer use by the astronomical community had heretofore been "timid and sporadic." This situation was to improve dramatically in the next few years. By the late 1950s, general-purpose, high-speed, "mainframe" computers were just emerging from the experimental, developmental stage, but few were affordable by or available to academic and research institutions not closely associated with large industrial or national defense programs. Yet by 1960 Wrubel had spent a decade actively pioneering and promoting the imaginative application of electronic computation within the astronomical community. Upper-level undergraduate and graduate astronomy students at Indiana were introduced to computing, and Ph.D. candidates whom he supervised applied computer techniques to problems in theoretical astrophysics. He wrote an early textbook on programming, taught programming classes, and helped establish and direct the Research Computing Center at Indiana, later named the Wrubel Computing Center in his honor. He and his students created a variety of algorithms and subroutines and exchanged these throughout the astronomical community by distributing the Astronomical Computation News Letter. Nationally as well as internationally, Wrubel actively cooperated with other groups interested in computing applications for theoretical astrophysics, often through his position as secretary of the IAU commission on Stellar Constitution.

  11. NASA Lewis steady-state heat pipe code users manual

    NASA Technical Reports Server (NTRS)

    Tower, Leonard K.; Baker, Karl W.; Marks, Timothy S.

    1992-01-01

    The NASA Lewis heat pipe code was developed to predict the performance of heat pipes in the steady state. The code can be used as a design tool on a personal computer or with a suitable calling routine, as a subroutine for a mainframe radiator code. A variety of wick structures, including a user input option, can be used. Heat pipes with multiple evaporators, condensers, and adiabatic sections in series and with wick structures that differ among sections can be modeled. Several working fluids can be chosen, including potassium, sodium, and lithium, for which monomer-dimer equilibrium is considered. The code incorporates a vapor flow algorithm that treats compressibility and axially varying heat input. This code facilitates the determination of heat pipe operating temperatures and heat pipe limits that may be encountered at the specified heat input and environment temperature. Data are input to the computer through a user-interactive input subroutine. Output, such as liquid and vapor pressures and temperatures, is printed at equally spaced axial positions along the pipe as determined by the user.

  12. NASA Lewis steady-state heat pipe code users manual

    NASA Astrophysics Data System (ADS)

    Tower, Leonard K.; Baker, Karl W.; Marks, Timothy S.

    1992-06-01

    The NASA Lewis heat pipe code was developed to predict the performance of heat pipes in the steady state. The code can be used as a design tool on a personal computer or with a suitable calling routine, as a subroutine for a mainframe radiator code. A variety of wick structures, including a user input option, can be used. Heat pipes with multiple evaporators, condensers, and adiabatic sections in series and with wick structures that differ among sections can be modeled. Several working fluids can be chosen, including potassium, sodium, and lithium, for which monomer-dimer equilibrium is considered. The code incorporates a vapor flow algorithm that treats compressibility and axially varying heat input. This code facilitates the determination of heat pipe operating temperatures and heat pipe limits that may be encountered at the specified heat input and environment temperature. Data are input to the computer through a user-interactive input subroutine. Output, such as liquid and vapor pressures and temperatures, is printed at equally spaced axial positions along the pipe as determined by the user.

  13. Maxdose-SR and popdose-SR routine release atmospheric dose models used at SRS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jannik, G. T.; Trimor, P. P.

    MAXDOSE-SR and POPDOSE-SR are used to calculate dose to the offsite Reference Person and to the surrounding Savannah River Site (SRS) population, respectively, following routine releases of atmospheric radioactivity. These models are currently accessed through the Dose Model Version 2014 graphical user interface (GUI). MAXDOSE-SR and POPDOSE-SR are personal computer (PC) versions of MAXIGASP and POPGASP, which both resided on the SRS IBM Mainframe. These two codes follow U.S. Nuclear Regulatory Commission (USNRC) Regulatory Guides 1.109 and 1.111 (1977a, 1977b). The bases for MAXDOSE-SR and POPDOSE-SR are the USNRC-developed codes XOQDOQ (Sagendorf et al. 1982) and GASPAR (Eckerman et al. 1980). Both of these codes have previously been verified for use at SRS (Simpkins 1999 and 2000). The revisions incorporated into MAXDOSE-SR and POPDOSE-SR Version 2014 (hereafter referred to as MAXDOSE-SR and POPDOSE-SR unless otherwise noted) were made per Computer Program Modification Tracker (CPMT) number Q-CMT-A-00016 (Appendix D). Version 2014 was verified for use at SRS in Dixon (2014).

  14. From the genetic to the computer program: the historicity of 'data' and 'computation' in the investigations on the nematode worm C. elegans (1963-1998).

    PubMed

    García-Sancho, Miguel

    2012-03-01

    This paper argues that the history of the computer, of the practice of computation, and of the notions of 'data' and 'programme' is essential for a critical account of the emergence and implications of data-driven research. In order to show this, I focus on the transition that the investigations on the worm C. elegans experienced in the Laboratory of Molecular Biology of Cambridge (UK). Throughout the 1980s, this research programme evolved from a study of the genetic basis of the worm's development and behaviour to a DNA mapping and sequencing initiative. By examining the changing computing technologies which were used at the Laboratory, I demonstrate that by the time of this transition researchers shifted from modelling the worm's genetic programme on a mainframe apparatus to writing minicomputer programs aimed at providing map and sequence data, which were then circulated to other groups working on the genetics of C. elegans. The shift in the worm research should thus not be explained simply by the application of computers that transformed the project from a hypothesis-driven to a data-intensive endeavour. The key factor was rather a historically specific technology, in-house and easily programmable minicomputers, which redefined the way of achieving the project's long-standing goal, leading the genetic programme to co-evolve with the practices of data production and distribution.

  15. Developmental assessment of the Fort St. Vrain version of the Composite HTGR Analysis Program (CHAP-2)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stroh, K.R.

    1980-01-01

    The Composite HTGR Analysis Program (CHAP) consists of a model-independent systems analysis mainframe named LASAN and model-dependent linked code modules, each representing a component, subsystem, or phenomenon of an HTGR plant. The Fort St. Vrain (FSV) version (CHAP-2) includes 21 coded modules that model the neutron kinetics and thermal response of the core; the thermal-hydraulics of the reactor primary coolant system, secondary steam supply system, and balance-of-plant; the actions of the control system and plant protection system; the response of the reactor building; and the relative hazard resulting from fuel particle failure. FSV steady-state and transient plant data are being used to partially verify the component modeling and dynamic simulation techniques used to predict plant response to postulated accident sequences.

  16. Solar-System Tests of Gravitational Theories

    NASA Technical Reports Server (NTRS)

    Shapiro, Irwin

    1997-01-01

    We are engaged in testing gravitational theory by means of observations of objects in the solar system. These tests include an examination of the Principle of Equivalence (POE), the Shapiro delay, the advances of planetary perihelia, the possibility of a secular variation Ġ in the "gravitational constant" G, and the rate of the de Sitter (geodetic) precession of the Earth-Moon system. These results are consistent with our preliminary results focusing on the contribution of Lunar Laser Ranging (LLR), which were presented at the seventh Marcel Grossmann meeting on general relativity. The largest improvement over previous results comes in the uncertainty for η: a factor of five better than our previous value. This improvement reflects the increasing strength of the LLR data. A similar analysis presented at the same meeting by a group at the Jet Propulsion Laboratory gave a similar result for η. Our value for β represents our first such result determined simultaneously with the solar quadrupole moment from the dynamical data set. These results are being prepared for publication. We have shown how positions determined from different planetary ephemerides can be compared and how the combination of VLBI and pulse timing information can yield a direct tie between planetary and radio frames. We have continued to include new data in our analysis as they became available. Finally, we have made improvements in our analysis software (PEP) and ported it to a network of modern workstations from its former home on a "mainframe" computer.

  17. CET89 - CHEMICAL EQUILIBRIUM WITH TRANSPORT PROPERTIES, 1989

    NASA Technical Reports Server (NTRS)

    Mcbride, B.

    1994-01-01

    Scientists and engineers need chemical equilibrium composition data to calculate the theoretical thermodynamic properties of a chemical system. This information is essential in the design and analysis of equipment such as compressors, turbines, nozzles, engines, shock tubes, heat exchangers, and chemical processing equipment. The substantial amount of numerical computation required to obtain equilibrium compositions and transport properties for complex chemical systems led scientists at NASA's Lewis Research Center to develop CET89, a program designed to calculate the thermodynamic and transport properties of these systems. CET89 is a general program which will calculate chemical equilibrium compositions and mixture properties for any chemical system with available thermodynamic data. Generally, mixtures may include condensed and gaseous products. CET89 performs the following operations: it 1) obtains chemical equilibrium compositions for assigned thermodynamic states, 2) calculates dilute-gas transport properties of complex chemical mixtures, 3) obtains Chapman-Jouguet detonation properties for gaseous species, 4) calculates incident and reflected shock properties in terms of assigned velocities, and 5) calculates theoretical rocket performance for both equilibrium and frozen compositions during expansion. The rocket performance function allows the option of assuming either a finite area or an infinite area combustor. CET89 accommodates problems involving up to 24 reactants, 20 elements, and 600 products (400 of which may be condensed). The program includes a library of thermodynamic and transport properties in the form of least squares coefficients for possible reaction products. It includes thermodynamic data for over 1300 gaseous and condensed species and transport data for 151 gases. The subroutines UTHERM and UTRAN convert thermodynamic and transport data to unformatted form for faster processing. The program conforms to the FORTRAN 77 standard, except for some input in NAMELIST format. It requires about 423 KB memory, and is designed to be used on mainframe, workstation, and mini computers. Due to its memory requirements, this program does not readily lend itself to implementation on MS-DOS based machines.

  18. Computing at DESY — current setup, trends and strategic directions

    NASA Astrophysics Data System (ADS)

    Ernst, Michael

    1998-05-01

    Since the HERA experiments H1 and ZEUS started data taking in '92, the computing environment at DESY has changed dramatically. After running mainframe-centred computing for more than 20 years, DESY switched to a heterogeneous, fully distributed computing environment within only about two years in almost every corner where computing has its applications. The computing strategy was highly influenced by the needs of the user community. The collaborations are usually limited by current technology, and their ever-increasing demands are the driving force for central computing to always move close to the technology edge. While DESY's central computing has a multidecade experience in running Central Data Recording/Central Data Processing for HEP experiments, the most challenging task today is to provide for clear and homogeneous concepts in the desktop area. Given that lowest-level commodity hardware draws more and more attention, combined with the financial constraints we are facing already today, we quickly need concepts for integrated support of a versatile device which has the potential to move into basically any computing area in HEP. Though commercial solutions, especially addressing the PC management/support issues, are expected to come to market in the next 2-3 years, we need to provide for suitable solutions now. Buying PCs at DESY at a rate of about 30/month will otherwise absorb all available manpower in central computing and still leave hundreds of users without adequate support. Though certainly not the only area, the desktop issue is one of the most important ones where we need HEP-wide collaboration to a large extent, and right now. Taking into account that there is traditionally no room for R&D at DESY, collaboration, meaning sharing experience and development resources within the HEP community, is a predominant factor for us.

  19. Making tomorrow's mistakes today: Evolutionary prototyping for risk reduction and shorter development time

    NASA Astrophysics Data System (ADS)

    Friedman, Gary; Schwuttke, Ursula M.; Burliegh, Scott; Chow, Sanguan; Parlier, Randy; Lee, Lorrine; Castro, Henry; Gersbach, Jim

    1993-03-01

    In the early days of JPL's solar system exploration, each spacecraft mission required its own dedicated data system with all software applications written in the mainframe's native assembly language. Although these early telemetry processing systems were a triumph of engineering in their day, since that time the computer industry has advanced to the point where it is now advantageous to replace these systems with more modern technology. The Space Flight Operations Center (SFOC) Prototype group was established in 1985 as a workstation and software laboratory. The charter of the lab was to determine if it was possible to construct a multimission telemetry processing system using commercial, off-the-shelf computers that communicated via networks. The staff of the lab mirrored that of a typical skunk works operation -- a small, multi-disciplinary team with a great deal of autonomy that could get complex tasks done quickly. In an effort to determine which approaches would be useful, the prototype group experimented with all types of operating systems, inter-process communication mechanisms, network protocols, packet size parameters. Out of that pioneering work came the confidence that a multi-mission telemetry processing system could be built using high-level languages running in a heterogeneous, networked workstation environment. Experience revealed that the operating systems on all nodes should be similar (i.e., all VMS or all PC-DOS or all UNIX), and that a unique Data Transport Subsystem tool needed to be built to address the incompatibilities of network standards, byte ordering, and socket buffering. The advantages of building a telemetry processing system based on emerging industry standards were numerous: by employing these standards, we would no longer be locked into a single vendor. When new technology came to market which offered ten times the performance at one eighth the cost, it would be possible to attach the new machine to the network, re-compile the application code, and run. In addition, we would no longer be plagued with lack of manufacturer support when we encountered obscure bugs. And maybe, hopefully, the eternal elusive goal of software portability across different vendors' platforms would finally be available. Some highlights of our prototyping efforts are described.

  20. Making tomorrow's mistakes today: Evolutionary prototyping for risk reduction and shorter development time

    NASA Technical Reports Server (NTRS)

    Friedman, Gary; Schwuttke, Ursula M.; Burliegh, Scott; Chow, Sanguan; Parlier, Randy; Lee, Lorrine; Castro, Henry; Gersbach, Jim

    1993-01-01

    In the early days of JPL's solar system exploration, each spacecraft mission required its own dedicated data system with all software applications written in the mainframe's native assembly language. Although these early telemetry processing systems were a triumph of engineering in their day, since that time the computer industry has advanced to the point where it is now advantageous to replace these systems with more modern technology. The Space Flight Operations Center (SFOC) Prototype group was established in 1985 as a workstation and software laboratory. The charter of the lab was to determine if it was possible to construct a multimission telemetry processing system using commercial, off-the-shelf computers that communicated via networks. The staff of the lab mirrored that of a typical skunk works operation -- a small, multi-disciplinary team with a great deal of autonomy that could get complex tasks done quickly. In an effort to determine which approaches would be useful, the prototype group experimented with all types of operating systems, inter-process communication mechanisms, network protocols, packet size parameters. Out of that pioneering work came the confidence that a multi-mission telemetry processing system could be built using high-level languages running in a heterogeneous, networked workstation environment. Experience revealed that the operating systems on all nodes should be similar (i.e., all VMS or all PC-DOS or all UNIX), and that a unique Data Transport Subsystem tool needed to be built to address the incompatibilities of network standards, byte ordering, and socket buffering. The advantages of building a telemetry processing system based on emerging industry standards were numerous: by employing these standards, we would no longer be locked into a single vendor. When new technology came to market which offered ten times the performance at one eighth the cost, it would be possible to attach the new machine to the network, re-compile the application code, and run. In addition, we would no longer be plagued with lack of manufacturer support when we encountered obscure bugs. And maybe, hopefully, the eternal elusive goal of software portability across different vendors' platforms would finally be available. Some highlights of our prototyping efforts are described.

  1. Evaluation of the finite element fuel rod analysis code (FRANCO)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, K.; Feltus, M.A.

    1994-12-31

    Knowledge of temperature distribution in a nuclear fuel rod is required to predict the behavior of fuel elements during operating conditions. The thermal and mechanical properties and performance characteristics are strongly dependent on the temperature, which can vary greatly inside the fuel rod. A detailed model of fuel rod behavior can be described by various numerical methods, including the finite element approach. The finite element method has been successfully used in many engineering applications, including nuclear piping and reactor component analysis. However, fuel pin analysis has traditionally been carried out with finite difference codes, with the exception of Electric Power Research Institute's FREY code, which was developed for mainframe execution. This report describes FRANCO, a finite element fuel rod analysis code capable of computing temperature distribution and mechanical deformation of a single light water reactor fuel rod.

  2. Measurement-based reliability/performability models

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
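
    The modelling point that non-exponential holding times force a semi-Markov rather than a Markov treatment can be illustrated with a simple distribution-fit comparison. The sketch below uses synthetic Weibull-distributed holding times purely as a stand-in; it does not reproduce the measured IBM 3081 data or the paper's reward model.

```python
# Hedged illustration: why non-exponential state-holding times push the model
# from a Markov chain to a semi-Markov process. Synthetic Weibull samples stand
# in for measured holding times; no IBM 3081 data are reproduced here.
from scipy import stats

holding_times = stats.weibull_min.rvs(c=0.6, scale=100.0, size=5000, random_state=42)

# Exponential fit (memoryless holding times, i.e. an ordinary Markov model)
scale_exp = holding_times.mean()
ks_exp = stats.kstest(holding_times, "expon", args=(0.0, scale_exp)).statistic

# Weibull fit (general holding-time law, i.e. a semi-Markov model)
c, loc, scale = stats.weibull_min.fit(holding_times, floc=0)
ks_wei = stats.kstest(holding_times, "weibull_min", args=(c, loc, scale)).statistic

print(f"K-S distance, exponential fit: {ks_exp:.3f}")
print(f"K-S distance, Weibull fit:     {ks_wei:.3f}")   # markedly smaller => exponential is inadequate
```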

  3. Networking and AI systems: Requirements and benefits

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The price/performance benefits of networked systems are well documented. The ability to share expensive resources sold timesharing on mainframes, departmental clusters of minicomputers, and now local area networks of workstations and servers. In the process, other fundamental system requirements emerged. These have now been generalized with open system requirements for hardware, software, applications and tools. The ability to interconnect a variety of vendor products has led to a specification of interfaces that allow new techniques to extend existing systems for new and exciting applications. As an example of a message-passing system, local area networks provide a testbed for many of the issues addressed by future concurrent architectures: synchronization, load balancing, fault tolerance and scalability. Gold Hill has been working with a number of vendors on distributed architectures that range from a network of workstations to a hypercube of microprocessors with distributed memory. Results from early applications are promising both for performance and scalability.

  4. WATEQ4F - a personal computer Fortran translation of the geochemical model WATEQ2 with revised data base

    USGS Publications Warehouse

    Ball, J.W.; Nordstrom, D. Kirk; Zachmann, D.W.

    1987-01-01

    A FORTRAN 77 version of the PL/1 computer program for the geochemical model WATEQ2, which computes major and trace element speciation and mineral saturation for natural waters, has been developed. The code (WATEQ4F) has been adapted to execute on an IBM PC or compatible microcomputer. Two versions of the code are available, one operating with IBM Professional FORTRAN and an 8087 or 80287 numeric coprocessor, and one which operates without a numeric coprocessor using Microsoft FORTRAN 77. The calculation procedure is identical to WATEQ2, which has been installed on many mainframes and minicomputers. Limited data base revisions include the addition of the following ions: AlHSO4(++), BaSO4, CaHSO4(++), FeHSO4(++), NaF, SrCO3, and SrHCO3(+). This report provides the reactions and references for the data base revisions, instructions for program operation, and an explanation of the input and output files. Attachments contain sample output from three water analyses used as test cases and the complete FORTRAN source listing. U.S. Geological Survey geochemical simulation program PHREEQE and mass balance program BALANCE also have been adapted to execute on an IBM PC or compatible microcomputer with a numeric coprocessor and the IBM Professional FORTRAN compiler. (Author's abstract)
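
    As a rough indication of the mineral-saturation bookkeeping such a speciation code performs, the sketch below evaluates a saturation index SI = log10(IAP/Ksp) for a single mineral; the ion activities and the equilibrium constant are illustrative values supplied here, not entries from WATEQ4F's data base.

```python
# Hedged sketch of a saturation-index calculation, SI = log10(IAP) - log10(Ksp).
# Activities and log Ksp below are illustrative, not WATEQ4F database values.
import math

def saturation_index(activities, stoichiometry, log_ksp):
    """SI = log10(ion activity product) - log10(Ksp); SI > 0 means oversaturated."""
    log_iap = sum(nu * math.log10(activities[ion]) for ion, nu in stoichiometry.items())
    return log_iap - log_ksp

# Example: gypsum, CaSO4.2H2O <-> Ca++ + SO4-- + 2 H2O (water activity taken as 1)
activities = {"Ca++": 10 ** -2.8, "SO4--": 10 ** -2.6}
si = saturation_index(activities, {"Ca++": 1, "SO4--": 1}, log_ksp=-4.58)
print(f"SI(gypsum) = {si:.2f} ({'oversaturated' if si > 0 else 'undersaturated'})")
```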

  5. Japanese experiment module data management and communication system

    NASA Astrophysics Data System (ADS)

    Iizuka, Isao; Yamamoto, Harumitsu; Harada, Minoru; Eguchi, Iwao; Takahashi, Masami

    The data management and communications system (DMCS) for the Japanese experiment module (JEM) being developed for the Space Station is described. Data generated by JEM experiments will be transmitted via TDRS (primary link) to the NASDA Operation Control Center. The DMCS will provide data processing, test and graphics handling, schedule planning support, and data display and facilitate subsystems, payloads, emergency operations, status, and diagnostics and health-check management. The ground segment includes a mainframe, mass storage, a workstation, and a LAN, with the capability of receiving and manipulating data from the JEM, the Space Station, and the payload. Audio and alert functions are also included. The DMCS will be connected to the interior of the module with through-bulkhead optical fibers.

  6. Hardware Support for Malware Defense and End-to-End Trust

    DTIC Science & Technology

    2017-02-01

    IoT) sensors and actuators, mobile devices and servers; cloud based, stand alone, and traditional mainframes. The prototype developed demonstrated...virtual machines. For mobile platforms we developed and prototyped an architecture supporting separation of personalities on the same platform...4 3.1. MOBILE

  7. Curriculum Development through YTS Modular Credit Accumulation.

    ERIC Educational Resources Information Center

    Further Education Unit, London (England).

    This document reports the evaluation of the collaboratively developed Modular Training Framework (MainFrame), a British curriculum development project, built around a commitment to a competency-based, modular credit accumulation program. The collaborators were three local education authorities (LEAs), those of Bedfordshire, Haringey, and Sheffield,…

  8. Automating Finance

    ERIC Educational Resources Information Center

    Moore, John

    2007-01-01

    In past years, higher education's financial management side has been riddled with manual processes and aging mainframe applications. This article discusses schools which had taken advantage of an array of technologies that automate billing, payment processing, and refund processing in the case of overpayment. The investments are well worth it:…

  9. Analysis of differential and active charging phenomena on ATS-5 and ATS-6

    NASA Technical Reports Server (NTRS)

    Olsen, R. C.; Whipple, E. C., Jr.

    1980-01-01

    Spacecraft charging in the differential charging and artificial particle emission experiments on ATS 5 and ATS 6 was studied. Differential charging of spacecraft surfaces generated large electrostatic barriers to spacecraft-generated electrons from photoemission, secondary emission, and thermal emitters. The electron emitter could partially or totally discharge the satellite, but the mainframe recharged negatively in a few tens of seconds. The time dependence of the charging behavior was explained by the relatively large capacitance for differential charging in comparison to the small spacecraft-to-space capacitance. A daylight charging event on ATS 6 was shown to have a charging behavior suggesting the dominance of differential charging on the absolute potential of the mainframe. Ion engine operations and plasma emission experiments on ATS 6 were shown to be an effective means of controlling the spacecraft potential in eclipse and sunlight. Elimination of barrier effects around the detectors and improving the quality of the particle data are discussed.

  10. Interface design of VSOP'94 computer code for safety analysis

    NASA Astrophysics Data System (ADS)

    Natsir, Khairina; Yazid, Putranto Ilham; Andiwijayakusuma, D.; Wahanani, Nursinta Adi

    2014-09-01

    Today, most software applications, also in the nuclear field, come with a graphical user interface. VSOP'94 (Very Superior Old Program) was designed to simplify the process of performing reactor simulation. VSOP is an integrated code system for simulating the life history of a nuclear reactor and is devoted to education and research. One advantage of the VSOP program is its ability to calculate the neutron spectrum estimation, fuel cycle, 2-D diffusion, resonance integral, estimation of reactor fuel costs, and integrated thermal hydraulics. VSOP can also be used for comparative studies and simulation of reactor safety. However, the existing VSOP is a conventional program, which was developed using Fortran 65 and has several problems in use; for example, it operates only on DEC Alpha mainframe platforms and provides text-based output, and it is difficult to use, especially in data preparation and interpretation of results. We developed GUI-VSOP, an interface program that facilitates the preparation of data, runs the VSOP code, and reads the results in a more user-friendly way, and that is usable on a personal computer (PC). Modifications include the development of interfaces for preprocessing, processing, and postprocessing. The GUI-based preprocessing interface aims to provide a convenient way of preparing data. The processing interface is intended to provide convenience in configuring input files and libraries and in compiling the VSOP code. The postprocessing interface is designed to visualize the VSOP output in tabular and graphical forms. GUI-VSOP is expected to be useful in simplifying and speeding up the process and analysis of safety aspects.

  11. Justification for, and design of, an economical programmable multiple flight simulator

    NASA Technical Reports Server (NTRS)

    Kreifeldt, J. G.; Wittenber, J.; Macdonald, G.

    1981-01-01

    The considered research interests in air traffic control (ATC) studies revolve around the concept of distributed ATC management based on the assumption that the pilot has a cockpit display of traffic and navigation information (CDTI) via CRT graphics. The basic premise is that a CDTI-equipped pilot can, in coordination with a controller, manage a part of his local traffic situation, thereby improving important aspects of ATC performance. A modularly designed programmable flight simulator system is prototyped as a means of providing an economical facility of up to eight simulators to interface with a mainframe/graphics system for ATC experimentation, particularly CDTI-distributed management in which pilot-pilot interaction can have a determining effect on system performance. The need for a multiman simulator facility is predicated on results from an earlier three-simulator facility.

  12. An optical scan/statistical package for clinical data management in C-L psychiatry.

    PubMed

    Hammer, J S; Strain, J J; Lyerly, M

    1993-03-01

    This paper explores aspects of the need for clinical database management systems that permit ongoing service management, measurement of the quality and appropriateness of care, databased administration of consultation liaison (C-L) services, teaching/educational observations, and research. It describes an OPTICAL SCAN databased management system that permits flexible form generation, desktop publishing, and linking of observations in multiple files. This enhanced MICRO-CARES software system--Medical Application Platform (MAP)--permits direct transfer of the data to ASCII and SAS format for mainframe manipulation of the clinical information. The director of a C-L service may now develop his or her own forms, incorporate structured instruments, or develop "branch chains" of essential data to add to the core data set without the effort and expense to reprint forms or consult with commercial vendors.

  13. The Navy/NASA Engine Program (NNEP89): A user's manual

    NASA Technical Reports Server (NTRS)

    Plencner, Robert M.; Snyder, Christopher A.

    1991-01-01

    An engine simulation computer code called NNEP89 was written to perform 1-D steady state thermodynamic analysis of turbine engine cycles. By using a very flexible method of input, a set of standard components are connected at execution time to simulate almost any turbine engine configuration that the user could imagine. The code was used to simulate a wide range of engine cycles from turboshafts and turboprops to air turborockets and supersonic cruise variable cycle engines. Off design performance is calculated through the use of component performance maps. A chemical equilibrium model is incorporated to adequately predict chemical dissociation as well as model virtually any fuel. NNEP89 is written in standard FORTRAN77 with clear structured programming and extensive internal documentation. The standard FORTRAN77 programming allows it to be installed onto most mainframe computers and workstations without modification. The NNEP89 code was derived from the Navy/NASA Engine program (NNEP). NNEP89 provides many improvements and enhancements to the original NNEP code and incorporates features which make it easier to use for the novice user. This is a comprehensive user's guide for the NNEP89 code.

  14. Exhaustive Versus Randomized Searchers for Nonlinear Optimization in 21st Century Computing: Solar Application

    NASA Technical Reports Server (NTRS)

    Sen, Syamal K.; AliShaykhian, Gholam

    2010-01-01

    We present a simple multi-dimensional exhaustive search method to obtain, in a reasonable time, the optimal solution of a nonlinear programming problem. It is more relevant in the present-day non-mainframe computing scenario, where an estimated 95% of computing resources remain unutilized while computing speed touches petaflops. Processor speed is doubling every 18 months, bandwidth every 12 months, and hard-disk space every 9 months. A randomized search algorithm or, equivalently, an evolutionary search method is often used instead of an exhaustive search algorithm. The reason is that a randomized approach is usually polynomial-time, i.e., fast, while an exhaustive search method is exponential-time, i.e., slow. We discuss the increasing importance of exhaustive search in optimization, given the steady increase of computing power, for solving many real-world problems of reasonable size. We also discuss the computational error and complexity of the search algorithm, focusing on the fact that no measuring device can usually measure a quantity with an accuracy greater than 0.005%. We stress that the quality of solution of exhaustive search, a deterministic method, is better than that of randomized search. In the 21st-century computing environment, exhaustive search cannot be set aside as untouchable, and it is not always exponential. We also describe a possible application of these algorithms in improving the efficiency of solar cells, a timely topic in the current energy crisis. These algorithms could be excellent tools in the hands of experimentalists and could not only save a large amount of time needed for experiments but also validate the theory against experimental results quickly.
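
    As a concrete illustration of the exhaustive-versus-randomized comparison the abstract describes, the sketch below runs both strategies with the same evaluation budget on a small two-variable nonlinear objective; the objective function, grid resolution, and budget are arbitrary choices for demonstration and are not the authors' solar-cell application.

```python
# Hedged sketch: exhaustive grid search vs. randomized search on a small
# nonlinear objective, with an identical evaluation budget for each.
import itertools
import numpy as np

def objective(x, y):
    # A nonconvex test function; its global minimum (value 0) is at (3, 0).
    return (x - 3.0) ** 2 + 10.0 * np.sin(4.0 * y) ** 2 + 0.1 * x * y

bounds = ((0.0, 6.0), (0.0, 1.0))
n = 200                                   # grid points per dimension -> n*n evaluations

# Exhaustive (deterministic) search on a uniform grid
grid_x = np.linspace(*bounds[0], n)
grid_y = np.linspace(*bounds[1], n)
best_grid = min(((objective(x, y), (x, y)) for x, y in itertools.product(grid_x, grid_y)),
                key=lambda t: t[0])

# Randomized search with the same evaluation budget
rng = np.random.default_rng(0)
xs = rng.uniform(*bounds[0], n * n)
ys = rng.uniform(*bounds[1], n * n)
vals = objective(xs, ys)
best_rand = (vals.min(), (xs[vals.argmin()], ys[vals.argmin()]))

print("exhaustive search:", best_grid)
print("randomized search:", best_rand)
```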

  15. 32 CFR 1700.6 - Fees for records services.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 6 2013-07-01 2013-07-01 false Fees for records services. 1700.6 Section 1700.6 National Defense Other Regulations Relating to National Defense OFFICE OF THE DIRECTOR OF NATIONAL... CD (recordable) Each 20.00 Telecommunications Per minute .50 Paper (mainframe printer) Per page .10...

  16. 32 CFR 1700.6 - Fees for records services.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 6 2011-07-01 2011-07-01 false Fees for records services. 1700.6 Section 1700.6 National Defense Other Regulations Relating to National Defense OFFICE OF THE DIRECTOR OF NATIONAL... CD (recordable) Each 20.00 Telecommunications Per minute .50 Paper (mainframe printer) Per page .10...

  17. Space Station Freedom environmental database system (FEDS) for MSFC testing

    NASA Technical Reports Server (NTRS)

    Story, Gail S.; Williams, Wendy; Chiu, Charles

    1991-01-01

    The Water Recovery Test (WRT) at Marshall Space Flight Center (MSFC) is the first demonstration of integrated water recovery systems for potable and hygiene water reuse as envisioned for Space Station Freedom (SSF). In order to satisfy the safety and health requirements placed on the SSF program and facilitate test data assessment, an extensive laboratory analysis database was established to provide a central archive and data retrieval function. The database is required to store analysis results for physical, chemical, and microbial parameters measured from water, air, and surface samples collected at various locations throughout the test facility. The Oracle Relational Database Management System (RDBMS) was utilized to implement a secured on-line information system, with the ECLSS WRT program as its foundation. The database is supported on a VAX/VMS 8810 series mainframe and is accessible from the Marshall Information Network System (MINS). This paper summarizes the database requirements, system design, interfaces, and future enhancements.

  18. Ada and the rapid development lifecycle

    NASA Technical Reports Server (NTRS)

    Deforrest, Lloyd; Gref, Lynn

    1991-01-01

    JPL is under contract, through NASA, with the US Army to develop a state-of-the-art Command Center System for the US European Command (USEUCOM). The Command Center System will receive, process, and integrate force status information from various sources and provide this integrated information to staff officers and decision makers in a format designed to enhance user comprehension and utility. The system is based on distributed workstation class microcomputers, VAX- and SUN-based data servers, and interfaces to existing military mainframe systems and communication networks. JPL is developing the Command Center System utilizing an incremental delivery methodology called the Rapid Development Methodology, with adherence to government and industry standards including the UNIX operating system, X Windows, OSF/Motif, and the Ada programming language. Through a combination of software engineering techniques specific to the Ada programming language and the Rapid Development Methodology, JPL was able to deliver capability to the military user incrementally, with quality comparable to, and economies improved over, projects developed under more traditional software-intensive system implementation methodologies.

  19. [Data collection in anesthesia. Experiences with the inauguration of a new information system].

    PubMed

    Zbinden, A M; Rothenbühler, H; Häberli, B

    1997-06-01

    In many institutions information systems are used to process off-line anaesthesia data for invoices, statistical purposes, and quality assurance. Information systems are also increasingly being used to improve process control in order to reduce costs. Most of today's systems were created when information technology and working processes in anaesthesia were very different from those in use today. Thus, many institutions must now replace their computer systems but are probably not aware of how complex this change will be. Modern information systems mostly use client-server architecture and relational data bases. Substituting an old system with a new one is frequently a greater task than designing a system from scratch. This article gives the conclusions drawn from the experience obtained when a large departmental computer system was redesigned at a university hospital. The new system was based on a client-server architecture and was developed by an external company without preceding conceptual analysis. Modules for patient, anaesthesia, surgical, and pain-service data were included. Data were analysed using a separate statistical package (RS/1 from Bolt Beranek), taking advantage of its powerful precompiled procedures. Development and introduction of the new system took much more time and effort than expected despite the use of modern software tools. Introduction of the new program required intensive user training despite the choice of modern graphic screen layouts. Automatic data-reading systems could not be used, as too many faults occurred and the effort for the user was too high. However, after the initial problems were solved the system turned out to be a powerful tool for quality control (both process and outcome quality), billing, and scheduling. The statistical analysis of the data resulted in meaningful and relevant conclusions. Before creating a new information system, the working processes have to be analysed and, if possible, made more efficient; a detailed programme specification must then be made. A servicing and maintenance contract should be drawn up before the order is given to a company. Time periods of equal duration have to be scheduled for defining, writing, testing and introducing the program. Modern client-server systems with relational data bases are by no means simpler to establish and maintain than previous mainframe systems with hierarchical data bases, and thus, experienced computer specialists need to be close at hand. We recommend collecting data only once for both statistics and quality control. To verify data quality, a system of random spot-sampling has to be established. Despite the large investments needed to build up such a system, we consider it a powerful tool for helping to solve the difficult daily problems of managing a surgical and anaesthesia unit.

  20. Commercial space development needs cheap launchers

    NASA Astrophysics Data System (ADS)

    Benson, James William

    1998-01-01

    SpaceDev is in the market for a deep space launch, and we are not going to pay $50 million for it. There is an ongoing debate about the elasticity of demand related to launch costs. On the one hand there are the ``big iron'' NASA and DoD contractors who say that there is no market for small or inexpensive launchers, that lowering launch costs will not result in significantly more launches, and that the current uncompetitive pricing scheme is appropriate. On the other hand are commercial companies which compete in the real world, and who say that there would be innumerable new launches if prices were to drop dramatically. I participated directly in the microcomputer revolution, and saw first hand what happened to the big iron computer companies who failed to see or heed the handwriting on the wall. We are at the same stage in the space access revolution that personal computers were in the late '70s and early '80s. The global economy is about to be changed in ways that are just as unpredictable as those changes wrought after the introduction of the personal computer. Companies which fail to innovate and keep producing only big iron will suffer the same fate as IBM and all the now-extinct mainframe and minicomputer companies. A few will remain, but with a small share of the market, never again to be in a position to dominate.

  1. Bridging the gap: linking a legacy hospital information system with a filmless radiology picture archiving and communications system within a nonhomogeneous environment.

    PubMed

    Rubin, R K; Henri, C J; Cox, R D

    1999-05-01

    A health level 7 (HL7)-conformant data link to exchange information between the mainframe hospital information system (HIS) of our hospital and our home-grown picture archiving and communications system (PACS) is the result of a collaborative effort between the HIS department and the PACS development team. Based on the ability to link examination requisitions and image studies, applications have been generated to optimise workflow and to improve the reliability and distribution of radiology information. Now, images can be routed to individual radiologists and clinicians; worklists facilitate radiology reporting; applications exist to create, edit, and view reports and images via the internet; and automated quality control now limits the incidence of "lost" cases and errors in image routing. By following the HL7 standard to develop the gateway to the legacy system, the development of a radiology information system for booking, reading, reporting, and billing remains universal and does not preclude the option to integrate off-the-shelf commercial products.
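
    HL7 version 2 messages are pipe-delimited segments, which is what makes a lightweight gateway of the kind described feasible. The sketch below parses a minimal, made-up order message and pulls out the identifiers one would use to link a requisition to an image study; the sample message and the field positions used are illustrative assumptions, not the hospital's actual interface specification or a production HL7 parser.

```python
# Minimal sketch of reading a pipe-delimited HL7 v2-style message.
# The sample message and the field positions assumed here are illustrative only.

SAMPLE_MESSAGE = "\r".join([
    "MSH|^~\\&|HIS|HOSP|PACS|RAD|199905011200||ORM^O01|MSG00001|P|2.3",
    "PID|1||123456^^^HOSP||DOE^JANE",
    "OBR|1|REQ0001|ACC0001|71020^CHEST XRAY",
])

def segments(message):
    """Return a dict mapping segment name (MSH, PID, ...) to its field list."""
    result = {}
    for line in message.split("\r"):
        fields = line.split("|")
        result[fields[0]] = fields
    return result

seg = segments(SAMPLE_MESSAGE)
patient_id = seg["PID"][3].split("^")[0]   # assumed position of the patient ID
accession = seg["OBR"][3]                  # assumed position of the accession number
print(patient_id, accession)               # -> 123456 ACC0001
```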

  2. Planning for advanced EDI operations in materiel management--a case study.

    PubMed

    Hanon, C

    1994-01-01

    Florida Hospital, a 1,462-bed organization in five locations in the central Florida area, wanted to implement an EDI system that would take redundancies, paper, and FTEs out of their system. They hired a consultant to educate them about EDI and help them put together an EDI business plan. They decided to implement three initial transaction sets for a price catalog, purchase orders, and PO acknowledgments. Requesting departments will be able to order routine items directly from vendors via EDI. Future transaction sets will include the advance ship notice with price (857), which will generate a receipt against which the hospital will pay, and electronic funds transfer. Translation and communication software for their mainframe system was chosen to accommodate both the most and least electronically sophisticated trading partners, and negotiations/education on doing business with the hospital via EDI are ongoing.
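
    EDI transaction sets such as the 850 purchase order are built from short delimited segments. The sketch below assembles a drastically simplified 850-style purchase-order string for illustration; the segment names follow the ANSI X12 convention, but the field contents, qualifiers, and omitted envelope segments are illustrative assumptions, not a compliant implementation and not the hospital's actual format.

```python
# Drastically simplified sketch of building an 850 (purchase order)-style
# document from line items. Real X12 requires ISA/GS envelopes, control
# numbers, and many more fields; this only shows the segment/element idea.

def build_purchase_order(po_number, po_date, items):
    segments = [
        ["ST", "850", "0001"],                    # transaction set header
        ["BEG", "00", "SA", po_number, "", po_date],
    ]
    for line_no, (catalog_no, qty, unit, price) in enumerate(items, start=1):
        segments.append(
            ["PO1", str(line_no), str(qty), unit, f"{price:.2f}", "", "VN", catalog_no]
        )
    segments.append(["CTT", str(len(items))])     # transaction totals
    segments.append(["SE", str(len(segments) + 1), "0001"])
    return "".join("*".join(seg) + "~\n" for seg in segments)

items = [("GLOVE-100", 20, "BX", 4.85), ("SYR-3ML", 50, "BX", 7.10)]
print(build_purchase_order("PO12345", "19940315", items))
```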

  3. 76 FR 6839 - ActiveCore Technologies, Inc., Battery Technologies, Inc., China Media1 Corp., Dura Products...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-08

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] ActiveCore Technologies, Inc., Battery Technologies, Inc., China Media1 Corp., Dura Products International, Inc. (n/k/a Dexx Corp.), Global Mainframe... Battery Technologies, Inc. because it has not filed any periodic reports since the period ended December...

  4. Spectral analysis of airflow sounds in patent versus occluded tracheostomy tubes: a pilot study in tracheostomized adult patients.

    PubMed

    Rao, A J; Niwa, H; Watanabe, Y; Fukuta, S; Yanagita, N

    1990-05-01

    Cannula occlusion is a life-threatening postoperative complication of tracheostomy. Current management relies largely on nursing care for prevention of fatalities because no proven mechanical, machine-based support monitoring exists. The objective of this paper was to address the problem of monitoring the state of cannula patency, based on analysis of airflow acoustic spectral patterns in tracheostomized adult patients with patent and partially occluded cannulae. Tracheal airflow sounds were picked up via a condenser microphone air-coupled to the skin just below the tracheal stoma. The microphone signal was amplified, high-pass filtered, recorded on digital tape, and analyzed on a mainframe computer. Although airflow frequencies for patent cannulae were predominantly low-pitched (0.1 to 0.3 kHz), occluded tubes had discrete high-pitched spectral peaks (1.3 to 1.6 kHz). These results suggest that frequency analysis of airflow sounds can identify a change in the status of cannula patency.
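
    The classification described rests on a shift of the dominant spectral peak from roughly 0.1-0.3 kHz (patent) to 1.3-1.6 kHz (occluded). A minimal sketch of that kind of check is given below using NumPy's FFT; the sampling rate, synthetic test signals, and the 1 kHz decision threshold are illustrative assumptions rather than the study's actual processing chain.

```python
import numpy as np

FS = 8000  # sampling rate in Hz (assumed)

def dominant_frequency(signal, fs=FS):
    """Return the frequency (Hz) of the largest spectral peak above DC."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[0] = 0.0                     # ignore the DC component
    return freqs[np.argmax(spectrum)]

def cannula_state(signal, threshold_hz=1000.0):
    """Crude illustration: a high-pitched dominant peak suggests occlusion."""
    return "occluded?" if dominant_frequency(signal) > threshold_hz else "patent?"

# Synthetic one-second test signals standing in for recorded airflow sounds.
t = np.arange(FS) / FS
patent_like = np.sin(2 * np.pi * 200 * t) + 0.1 * np.random.randn(FS)
occluded_like = np.sin(2 * np.pi * 1500 * t) + 0.1 * np.random.randn(FS)
print(cannula_state(patent_like), cannula_state(occluded_like))
```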

  5. Restructuring VA ambulatory care and medical education: the PACE model of primary care.

    PubMed

    Cope, D W; Sherman, S; Robbins, A S

    1996-07-01

    The Veterans Health Administration (VHA) Western Region and associated medical schools formulated a set of recommendations for an improved ambulatory health care delivery system during a 1988 strategic planning conference. As a result, the Department of Veterans Affairs (VA) Medical Center in Sepulveda, California, initiated the Pilot (now Primary) Ambulatory Care and Education (PACE) program in 1990 to implement and evaluate a model program. The PACE program represents a significant departure from traditional VA and non-VA academic medical center care, shifting the focus of care from the inpatient to the outpatient setting. From its inception, the PACE program has used an interdisciplinary team approach with three independent global care firms. Each firm is interdisciplinary in composition, with a matrix management structure that expands role function and empowers team members. Emphasis is on managed primary care, stressing a biopsychosocial approach and cost-effective comprehensive care emphasizing prevention and health maintenance. Information management is provided through a network of personal computers that serve as a front end to the VHA Decentralized Hospital Computer Program (DHCP) mainframe. In addition to providing comprehensive and cost-effective care, the PACE program educates trainees in all health care disciplines, conducts research, and disseminates information about important procedures and outcomes. Undergraduate and graduate trainees from 11 health care disciplines rotate through the PACE program to learn an integrated approach to managed ambulatory care delivery. All trainees are involved in a problem-based approach to learning that emphasizes shared training experiences among health care disciplines. This paper describes the transitional phases of the PACE program (strategic planning, reorganization, and quality improvement) that are relevant for other institutions that are shifting to training programs emphasizing primary and ambulatory care.

  6. Forecasting the need for physicians in the United States: the Health Resources and Services Administration's physician requirements model.

    PubMed Central

    Greenberg, L; Cultice, J M

    1997-01-01

    OBJECTIVE: The Health Resources and Services Administration's Bureau of Health Professions developed a demographic utilization-based model of physician specialty requirements to explore the consequences of a broad range of scenarios pertaining to the nation's health care delivery system on the need for physicians. DATA SOURCE/STUDY SETTING: The model uses selected data primarily from the National Center for Health Statistics, the American Medical Association, and the U.S. Bureau of Census. Forecasts are national estimates. STUDY DESIGN: Current (1989) utilization rates for ambulatory and inpatient medical specialty services were obtained for the population according to age, gender, race/ethnicity, and insurance status. These rates are used to estimate specialty-specific total service utilization expressed in patient care minutes for future populations and converted to physician requirements by applying per-physician productivity estimates. DATA COLLECTION/EXTRACTION METHODS: Secondary data were analyzed and put into matrixes for use in the mainframe computer-based model. Several missing data points, e.g., for HMO-enrolled populations, were extrapolated from available data by the project's contractor. PRINCIPAL FINDINGS: The authors contend that the Bureau's demographic utilization model represents improvements over other data-driven methodologies that rely on staffing ratios and similar supply-determined bases for estimating requirements. The model's distinct utility rests in offering national-level physician specialty requirements forecasts. PMID:9018213
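
    The model's core arithmetic is straightforward: specialty-specific utilization rates per demographic cell are multiplied by projected population counts to obtain total patient-care minutes, which are then divided by per-physician productivity to yield physician requirements. The sketch below shows that calculation on made-up numbers; the demographic cells, rates, and productivity figures are invented for illustration and are not the Bureau's data.

```python
# Illustrative utilization-based physician requirements calculation.
# All numbers are invented; the real model uses detailed demographic cells,
# specialty-specific rates, and survey-based productivity estimates.

# Annual patient-care minutes demanded per person, by (age group, specialty).
utilization_minutes = {
    ("0-17",  "pediatrics"): 90.0,
    ("18-64", "internal medicine"): 60.0,
    ("65+",   "internal medicine"): 180.0,
}

# Projected population counts by age group.
population = {"0-17": 70_000_000, "18-64": 190_000_000, "65+": 55_000_000}

# Annual patient-care minutes one full-time physician can supply.
minutes_per_physician = {"pediatrics": 105_000.0, "internal medicine": 105_000.0}

required_minutes = {}
for (age_group, specialty), rate in utilization_minutes.items():
    required_minutes[specialty] = (required_minutes.get(specialty, 0.0)
                                   + rate * population[age_group])

for specialty, minutes in required_minutes.items():
    print(specialty, round(minutes / minutes_per_physician[specialty]), "physicians")
```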

  7. Section 1. Simulation of surface-water integrated flow and transport in two-dimensions: SWIFT2D user's manual

    USGS Publications Warehouse

    Schaffranek, Raymond W.

    2004-01-01

    A numerical model for simulation of surface-water integrated flow and transport in two (horizontal-space) dimensions is documented. The model solves vertically integrated forms of the equations of mass and momentum conservation and solute transport equations for heat, salt, and constituent fluxes. An equation of state for salt balance directly couples solution of the hydrodynamic and transport equations to account for the horizontal density gradient effects of salt concentrations on flow. The model can be used to simulate the hydrodynamics, transport, and water quality of well-mixed bodies of water, such as estuaries, coastal seas, harbors, lakes, rivers, and inland waterways. The finite-difference model can be applied to geographical areas bounded by any combination of closed land or open water boundaries. The simulation program accounts for sources of internal discharges (such as tributary rivers or hydraulic outfalls), tidal flats, islands, dams, and movable flow barriers or sluices. Water-quality computations can treat reactive and (or) conservative constituents simultaneously. Input requirements include bathymetric and topographic data defining land-surface elevations, time-varying water level or flow conditions at open boundaries, and hydraulic coefficients. Optional input includes the geometry of hydraulic barriers and constituent concentrations at open boundaries. Time-dependent water level, flow, and constituent-concentration data are required for model calibration and verification. Model output consists of printed reports and digital files of numerical results in forms suitable for postprocessing by graphical software programs and (or) scientific visualization packages. The model is compatible with most mainframe, workstation, mini- and micro-computer operating systems and FORTRAN compilers. This report defines the mathematical formulation and computational features of the model, explains the solution technique and related model constraints, describes the model framework, documents the type and format of inputs required, and identifies the type and format of output available.

  8. Databank Software for the 1990s and Beyond--Part 1: The User's Wish List.

    ERIC Educational Resources Information Center

    Basch, Reva

    1990-01-01

    Describes desired software enhancements identified by the Southern California Online Users Group in the areas of search language, database selection, document retrieval and display, user interface, customer support, and cost and economic issues. The need to prioritize these wishes and to determine whether features should reside in the mainframe or…

  9. Checking the Goldbach conjecture up to 4·10^11

    NASA Astrophysics Data System (ADS)

    Sinisalo, Matti K.

    1993-10-01

    One of the most studied problems in additive number theory, Goldbach's conjecture, states that every even integer greater than or equal to 4 can be expressed as a sum of two primes. In this paper, checking of this conjecture up to 4·10^11 on an IBM 3083 mainframe with vector processor is reported.
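
    For a sense of what such a verification involves, the sketch below checks the conjecture for every even number up to a small bound using a sieve of Eratosthenes; the bound and the strategy of trying small primes first are illustrative choices, not Sinisalo's vectorized mainframe implementation.

```python
def primes_up_to(n):
    """Sieve of Eratosthenes; returns a boolean table is_prime[0..n]."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return is_prime

def check_goldbach(limit):
    """Verify that every even n in [4, limit] is a sum of two primes."""
    is_prime = primes_up_to(limit)
    small_primes = [p for p in range(2, limit + 1) if is_prime[p]]
    for n in range(4, limit + 1, 2):
        # Try small primes p first; the matching partner is n - p.
        if not any(is_prime[n - p] for p in small_primes if p <= n // 2):
            return n  # a counterexample (never found in any tested range)
    return None

print(check_goldbach(100_000))   # prints None: no counterexample below 100,000
```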

  10. GSTARS computer models and their applications, part I: theoretical development

    USGS Publications Warehouse

    Yang, C.T.; Simoes, F.J.M.

    2008-01-01

    GSTARS is a series of computer models developed by the U.S. Bureau of Reclamation for alluvial river and reservoir sedimentation studies while the authors were employed by that agency. The first version of GSTARS was released in 1986 using Fortran IV for mainframe computers. GSTARS 2.0 was released in 1998 for personal computer application with most of the code in the original GSTARS revised, improved, and expanded using Fortran IV/77. GSTARS 2.1 is an improved and revised GSTARS 2.0 with graphical user interface. The unique features of all GSTARS models are the conjunctive use of the stream tube concept and of the minimum stream power theory. The application of minimum stream power theory allows the determination of optimum channel geometry with variable channel width and cross-sectional shape. The use of the stream tube concept enables the simulation of river hydraulics using one-dimensional numerical solutions to obtain a semi-two-dimensional presentation of the hydraulic conditions along and across an alluvial channel. According to the stream tube concept, no water or sediment particles can cross the walls of stream tubes, which is valid for many natural rivers. At and near sharp bends, however, sediment particles may cross the boundaries of stream tubes. GSTARS3, based on FORTRAN 90/95, addresses this phenomenon and further expands the capabilities of GSTARS 2.1 for cohesive and non-cohesive sediment transport in rivers and reservoirs. This paper presents the concepts, methods, and techniques used to develop the GSTARS series of computer models, especially GSTARS3. © 2008 International Research and Training Centre on Erosion and Sedimentation and the World Association for Sedimentation and Erosion Research.

  11. Power supply standardization and optimization study

    NASA Technical Reports Server (NTRS)

    Ware, C. L.; Ragusa, E. V.

    1972-01-01

    A comprehensive design study of a power supply for use in the space shuttle and other space flight applications is presented. The design specifications are established for a power supply capable of supplying over 90 percent of the anticipated voltage requirements for future spacecraft avionics systems. Analyses and tradeoff studies were performed on several alternative design approaches to assure that the selected design would provide near optimum performance of the planned applications. The selected design uses a dc-to-dc converter incorporating regenerative current feedback with a time-ratio controlled duty cycle to achieve high efficiency over a wide variation in input voltage and output loads. The packaging concept uses an expandable mainframe capable of accommodating up to six inverter/regulator modules with one common input filter module.

  12. Tell it like it is.

    PubMed

    Lee, S L

    2000-05-01

    Nurses, therapists and case managers were spending too much time each week on the phone waiting to read patient reports to live transcriptionists who would then type the reports for storage in VNSNY's clinical management mainframe database. A speech recognition system helped solve the problem by providing the staff 24-hour access to an automated transcription service any day of the week. Nurses and case managers no longer wait in long queues to transmit patient reports or to retrieve information from the database. Everything is done automatically within minutes. VNSNY saved both time and money by updating its transcription strategy. Now nurses can spend more time with patients and less time on the phone transcribing notes. It also means fewer staff members are needed on weekends to do manual transcribing.

  13. UPEML Version 3.0: A machine-portable CDC update emulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehlhorn, T.A.; Haill, T.A.

    1992-04-01

    UPEML is a machine-portable program that emulates a subset of the functions of the standard CDC Update. Machine-portability has been achieved by conforming to ANSI standards for Fortran-77. UPEML is compact and fairly efficient; however, it only allows a restricted syntax as compared with the CDC Update. This program was written primarily to facilitate the use of CDC-based scientific packages on alternate computer systems such as the VAX/VMS mainframes and UNIX workstations. UPEML has also been successfully used on the multiprocessor ELXSI, on CRAYs under both UNICOS and CTSS operating systems, and on Sun, HP, Stardent and IBM workstations. UPEML was originally released with the ITS electron/photon Monte Carlo transport package, which was developed on a CDC-7600 and makes extensive use of conditional file structure to combine several problem geometry and machine options into a single program file. UPEML 3.0 is an enhanced version of the original code and is being independently released for use at any installation or with any code package. Version 3.0 includes enhanced error checking, full ASCII character support, a program library audit capability, and a partial update option in which only selected or modified decks are written to the complete file. Version 3.0 also checks for overlapping corrections, allows processing of nested calls to common decks, and allows the use of alternate files in READ and ADDFILE commands. Finally, UPEML Version 3.0 allows the assignment of input and output files at runtime on the control line.

  15. DEAN: A program for dynamic engine analysis

    NASA Technical Reports Server (NTRS)

    Sadler, G. G.; Melcher, K. J.

    1985-01-01

    The Dynamic Engine Analysis program, DEAN, is a FORTRAN code implemented on the IBM/370 mainframe at NASA Lewis Research Center for digital simulation of turbofan engine dynamics. DEAN is an interactive program which allows the user to simulate engine subsystems as well as full engine systems with relative ease. The nonlinear first-order ordinary differential equations which define the engine model may be solved by one of four integration schemes: a second-order Runge-Kutta, a fourth-order Runge-Kutta, an Adams predictor-corrector, or Gear's method for stiff systems. The numerical data generated by the model equations are displayed at specified intervals, between which the user may choose to modify various parameters affecting the model equations and transient execution. Following the transient run, versatile graphics capabilities allow close examination of the data. DEAN's modeling procedure and capabilities are demonstrated by generating a model of a simple compressor rig.
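
    Of the four integration schemes listed, the classical fourth-order Runge-Kutta method is the easiest to show compactly. Below is a minimal sketch of RK4 applied to a made-up first-order spool-speed lag model; the model equation, time constant, and step size are illustrative assumptions, not DEAN's engine equations.

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Made-up first-order lag: normalized spool speed N chases a commanded value.
TAU, N_CMD = 0.8, 1.0                       # time constant (s), demanded speed
spool_lag = lambda t, n: (N_CMD - n) / TAU

t, n, h = 0.0, 0.0, 0.05                    # start at rest, 50 ms step
for step in range(1, 81):                   # integrate for 4 seconds
    n = rk4_step(spool_lag, t, n, h)
    t += h
    if step % 10 == 0:                      # display at 0.5 s intervals
        print(f"t = {t:4.2f} s   N = {n:6.4f}")
```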

  16. Standard high-reliability integrated circuit logic packaging. [for deep space tracking stations

    NASA Technical Reports Server (NTRS)

    Slaughter, D. W.

    1977-01-01

    A family of standard, high-reliability hardware used for packaging digital integrated circuits is described. The design transition from early prototypes to production hardware is covered and future plans are discussed. Interconnection techniques are described, as well as connectors and related hardware available at both the microcircuit packaging and main-frame level. General applications information is also provided.

  17. Thematic mapper flight model preshipment review data package. Volume 2, part A: Subsystem data

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Performance and acceptance data are presented for the multiplexer, scan mirror, power supply, mainframe/top mechanical, and aft optics assemblies. Other major subsystems evaluated include the relay optics, the electronic module, the radiative cooler, and the cable harness. Reference lists of nonconforming materials reports, failure reports, and requests for deviation/waiver are also given.

  18. A modular finite-element model (MODFE) for areal and axisymmetric ground-water-flow problems, Part 3: Design philosophy and programming details

    USGS Publications Warehouse

    Torak, L.J.

    1993-01-01

    A MODular Finite-Element, digital-computer program (MODFE) was developed to simulate steady or unsteady-state, two-dimensional or axisymmetric ground-water-flow. The modular structure of MODFE places the computationally independent tasks that are performed routinely by digital-computer programs simulating ground-water flow into separate subroutines, which are executed from the main program by control statements. Each subroutine consists of complete sets of computations, or modules, which are identified by comment statements, and can be modified by the user without affecting unrelated computations elsewhere in the program. Simulation capabilities can be added or modified by either adding or modifying subroutines that perform specific computational tasks, and the modular-program structure allows the user to create versions of MODFE that contain only the simulation capabilities that pertain to the ground-water problem of interest. MODFE is written in a Fortran programming language that makes it virtually device independent and compatible with desk-top personal computers and large mainframes. MODFE uses computer storage and execution time efficiently by taking advantage of symmetry and sparseness within the coefficient matrices of the finite-element equations. Parts of the matrix coefficients are computed and stored as single-subscripted variables, which are assembled into a complete coefficient just prior to solution. Computer storage is reused during simulation to decrease storage requirements. Descriptions of subroutines that execute the computational steps of the modular-program structure are given in tables that cross reference the subroutines with particular versions of MODFE. Programming details of linear and nonlinear hydrologic terms are provided. Structure diagrams for the main programs show the order in which subroutines are executed for each version and illustrate some of the linear and nonlinear versions of MODFE that are possible. Computational aspects of changing stresses and boundary conditions with time and of mass-balance and error terms are given for each hydrologic feature. Program variables are listed and defined according to their occurrence in the main programs and in subroutines. Listings of the main programs and subroutines are given.
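
    The storage scheme described, in which parts of a symmetric coefficient matrix are kept in single-subscripted (one-dimensional) arrays and the complete coefficient is assembled just before the solve, can be illustrated compactly. The sketch below stores the upper triangle of a small symmetric matrix in a 1-D array and rebuilds it for a dense solve; the indexing convention and the use of NumPy's solver are illustrative assumptions, not MODFE's actual data structures.

```python
import numpy as np

def pack_upper_triangle(a):
    """Store the upper triangle of symmetric matrix a (row by row) in a 1-D array."""
    n = a.shape[0]
    return np.array([a[i, j] for i in range(n) for j in range(i, n)])

def assemble_symmetric(packed, n):
    """Rebuild the full n x n symmetric matrix just prior to solution."""
    a = np.zeros((n, n))
    k = 0
    for i in range(n):
        for j in range(i, n):
            a[i, j] = a[j, i] = packed[k]
            k += 1
    return a

# Small illustrative symmetric (finite-element-like) system.
full = np.array([[ 4.0, -1.0,  0.0],
                 [-1.0,  4.0, -1.0],
                 [ 0.0, -1.0,  4.0]])
rhs = np.array([1.0, 2.0, 3.0])

packed = pack_upper_triangle(full)          # n*(n+1)/2 values instead of n*n
solution = np.linalg.solve(assemble_symmetric(packed, 3), rhs)
print(packed.size, solution)
```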

  19. [Establishing a clinical information system for surgical ophthalmology and orthopedics specialties with reference to GSG '93].

    PubMed

    Dick, B; Basad, E

    1996-04-01

    As a result of new health care guidelines (Gesundheitsstrukturgesetz) and the federal hospital and nursing ordinance, there has been a large increase in the documentation required for diagnoses (ICD-9) and service ("Operationenschlüssel nach section 301 SGB V" = ICPM), all of which is done in the form of a numeric code. The method of coding diagnoses is supposed to make possible data entry and statistical evaluation of plausibility controls, as well as conspicuous and random testing of economic feasibility. Our data processing system is designed to assist in the planning and organization of clinical activities, while at the same time making documentation in accordance with health care guidelines easier and providing scientific documentation and evaluation. The application MedAccess was developed by clinicians on the basis of a relational client-server database. The application has been in use since June 1992 and has been further developed during operation according to the requirements and wishes of clinic and administrative staff. In cooperation with the Institute for Medical Information Technology, a computer interface with the patient check-in system was created, making possible the importing of patient data. The application is continuously updated according to the current needs of the clinic and administration. The primary functions of MedAccess include managing patient data, planning of in-patient admissions, surgical planning, organization, documentation (surgery book, reports with follow-up treatment records), administration of the tissue bank, clinic communications, clinic work processing, and management of the staff duty roster. Clinical data are entered into a computer and processed on site, and the user is assisted by practical applications which do not require special knowledge of data processing or encoding systems. The data is entered only once, but can be further used for other purposes, such as evaluations or selective transfer, for example, to clinical documents. Through an integrated flow of data, information entered one time remains readily available, while, at the same time, preventing duplicate entries. The integration of hardware and software via a mainframe computer (clinic system WING) has proven to be well-suited for the exchange of data. The use of this thesaurus-supported and graphics-oriented system required no special knowledge of the ICD code and makes documentation much easier to produce. The advantages of computer-supported encoding not only include a savings in time, but also an improvement in the quality of the encoding from which clinical and scientific reports can be derived. The relational client-server, operating in a graphics-supported programming environment, makes it possible for the clinic's doctors to further develop and improve the system. Through the installation and support of a Macintosh network, and training of doctors, medical personnel and clerical staff, cost as well as investment of time have been kept to a minimum in comparison to other LAN servers.

  20. Generalized Support Software: Domain Analysis and Implementation

    NASA Technical Reports Server (NTRS)

    Stark, Mike; Seidewitz, Ed

    1995-01-01

    For the past five years, the Flight Dynamics Division (FDD) at NASA's Goddard Space Flight Center has been carrying out a detailed domain analysis effort and is now beginning to implement Generalized Support Software (GSS) based on this analysis. GSS is part of the larger Flight Dynamics Distributed System (FDDS), and is designed to run under the FDDS User Interface/Executive (UIX). The FDD is transitioning from a mainframe-based environment to systems running on engineering workstations. The GSS will be a library of highly reusable components that may be configured within the standard FDDS architecture to quickly produce low-cost satellite ground support systems. The estimate for the first release is that this library will contain approximately 200,000 lines of code. The main driver for developing generalized software is development cost and schedule improvement. The goal is to ultimately have at least 80 percent of all software required for a spacecraft mission (within the domain supported by the GSS) configured from the generalized components.

  1. Chip architecture - A revolution brewing

    NASA Astrophysics Data System (ADS)

    Guterl, F.

    1983-07-01

    Techniques being explored by microchip designers and manufacturers to both speed up memory access and instruction execution while protecting memory are discussed. Attention is given to hardwiring control logic, pipelining for parallel processing, devising orthogonal instruction sets for interchangeable instruction fields, and the development of hardware for implementation of virtual memory and multiuser systems to provide memory management and protection. The inclusion of microcode in mainframes eliminated logic circuits that control timing and gating of the CPU. However, improvements in memory architecture have reduced access time to below that needed for instruction execution. Hardwiring the functions as a virtual memory enhances memory protection. Parallelism involves a redundant architecture, which allows identical operations to be performed simultaneously, and can be directed with microcode to avoid abortion of intermediate instructions once one set of instructions has been completed.

  2. Effect of Joule heating and current crowding on electromigration in mobile technology

    NASA Astrophysics Data System (ADS)

    Tu, K. N.; Liu, Yingxia; Li, Menglu

    2017-03-01

    In the present era of big data and internet of things, the use of microelectronic products in all aspects of our life is manifested by the ubiquitous presence of mobile devices as i-phones and wearable i-products. These devices are facing the need for higher power and greater functionality applications such as in i-health, yet they are limited by physical size. At the moment, software (Apps) is much ahead of hardware in mobile technology. To advance hardware, the end of Moore's law in two-dimensional integrated circuits can be extended by three-dimensional integrated circuits (3D ICs). The concept of 3D ICs has been with us for more than ten years. The challenge in 3D IC technology is dense packing by using both vertical and horizontal interconnections. Mass production of 3D IC devices is behind schedule due to cost because of low yield and uncertain reliability. Joule heating is serious in a dense structure because of heat generation and dissipation. A change of reliability paradigm has advanced from failure at a specific circuit component to failure at a system level weak-link. Currently, the electronic industry is introducing 3D IC devices in mainframe computers, where cost is not an issue, for the purpose of collecting field data of failure, especially the effect of Joule heating and current crowding on electromigration. This review will concentrate on the positive feedback between Joule heating and electromigration, resulting in an accelerated system level weak-link failure. A new driving force of electromigration, the electric potential gradient force due to current crowding, will be reviewed critically. The induced failure tends to occur in the low current density region.

  3. Yong-Ki Kim — His Life and Recent Work

    NASA Astrophysics Data System (ADS)

    Stone, Philip M.

    2007-08-01

    Dr. Kim made internationally recognized contributions in many areas of atomic physics research and applications, and was still very active when he was killed in an automobile accident. He joined NIST in 1983 after 17 years at the Argonne National Laboratory following his Ph.D. work at the University of Chicago. Much of his early work at Argonne and especially at NIST was the elucidation and detailed analysis of the structure of highly charged ions. He developed a sophisticated, fully relativistic atomic structure theory that accurately predicts atomic energy levels, transition wavelengths, lifetimes, and transition probabilities for a large number of ions. This information has been vital to model the properties of the hot interior of fusion research plasmas, where atomic ions must be described with relativistic atomic structure calculations. In recent years, Dr. Kim worked on the precise calculation of ionization and excitation cross sections of numerous atoms, ions, and molecules that are important in fusion research and in plasma processing for manufacturing semiconductor chips. Dr. Kim greatly advanced the state-of-the-art of calculations for these cross sections through development and implementation of highly innovative methods, including his Binary-Encounter-Bethe (BEB) theory and a scaled plane wave Born (scaled PWB) theory. His methods, using closed quantum mechanical formulas and no adjustable parameters, avoid tedious large-scale computations with main-frame computers. His calculations closely reproduce the results of benchmark experiments as well as large-scale calculations requiring hours of computer time. This recent work on BEB and scaled PWB is reviewed and examples of its capabilities are shown.

  4. Design of cryogenic tanks for launch vehicles

    NASA Technical Reports Server (NTRS)

    Copper, Charles; Pilkey, Walter D.; Haviland, John K.

    1990-01-01

    During the period since January 1990, work was concentrated on the problem of the buckling of the structure of an ALS (advanced launch systems) tank during the boost phase. The primary problem was to analyze a proposed hat stringer made by superplastic forming, and to compare it with an integrally stiffened stringer design. A secondary objective was to determine whether structural rings having the identical section to the stringers will provide adequate support against overall buckling. All of the analytical work was carried out with the TESTBED program on the CONVEX computer, using PATRAN programs to create models. Analyses of skin/stringer combinations have shown that the proposed stringer design is an adequate substitute for the integrally stiffened stringer. Using a highly refined mesh to represent the corrugations in the vertical webs of the hat stringers, effective values were obtained for cross-sectional area, moment of inertia, centroid height, and torsional constant. Not only can these values be used for comparison with experimental values, but they can also be used for beams to replace the stringers and frames in analytical models of complete sections of tank. The same highly refined model was used to represent a section of skin reinforced by a stringer and a ring segment in the configuration of a cross. It was intended that this would provide a baseline buckling analysis representing a basic mode, however, the analysis proved to be beyond the scope of the CONVEX computer. One quarter of this model was analyzed, however, to provide information on buckling between the spot welds. Models of large sections of the tank structure were made, using beam elements to model the stringers and frames. In order to represent the stiffening effects of pressure, stresses and deflections under pressure should first be obtained, and then the buckling analysis should be made on the structure so deflected. So far, uncharacteristic deflections under pressure were obtained from the TESTBED program using two types of structural elements. Similar results were obtained using the ANSYS program on a mainframe computer, although two finite element programs on microcomputers have yielded realistic results.

  5. A PC-based computer package for automatic detection and location of earthquakes: Application to a seismic network in eastern Sicily (Italy)

    NASA Astrophysics Data System (ADS)

    Patanè, Domenico; Ferrari, Ferruccio; Giampiccolo, Elisabetta; Gresta, Stefano

    Few automated data acquisition and processing systems operate on mainframes; some run on UNIX-based workstations and others on personal computers, equipped with either DOS/WINDOWS or UNIX-derived operating systems. Several large and complex software packages for automatic and interactive analysis of seismic data have been developed in recent years (mainly for UNIX-based systems). Some of these programs use a variety of artificial intelligence techniques. The first operational version of a new software package, named PC-Seism, for analyzing seismic data from a local network is presented in Patanè et al. (1999). This package, composed of three separate modules, provides an example of a new generation of visual object-oriented programs for interactive and automatic seismic data processing running on a personal computer. In this work, we mainly discuss the automatic procedures implemented in the ASDP (Automatic Seismic Data-Processing) module and their real-time application to data acquired by a seismic network running in eastern Sicily. This software uses a multi-algorithm approach and a new procedure, MSA (multi-station analysis), for signal detection, phase grouping, and event identification and location. It is designed for efficient and accurate processing of local earthquake records provided by single-site and array stations. Results from ASDP processing of two different data sets recorded at Mt. Etna volcano by a regional network are analyzed to evaluate its performance. By comparing the ASDP pickings with those revised manually, the detection and subsequently the location capabilities of this software are assessed. The first data set is composed of 330 local earthquakes recorded in the Mt. Etna area during 1997 by the telemetry analog seismic network. The second data set comprises about 970 automatic locations of more than 2600 local events recorded at Mt. Etna during the last eruption (July 2001) by the present network. For the former data set, a comparison of the automatic results with the manual picks indicates that the ASDP module can accurately pick 80% of the P-waves and 65% of S-waves. The on-line application to the latter data set shows that automatic locations are affected by larger errors, due to the preliminary setting of the configuration parameters in the program. However, both automatic ASDP and manual hypocenter locations are comparable within the estimated error bounds. New improvements of the PC-Seism software for on-line analysis are also discussed.
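
    The abstract does not spell out the detection algorithms, so the sketch below shows a classic short-term-average/long-term-average (STA/LTA) trigger, a common building block in automatic pickers of this kind; the window lengths, threshold, and synthetic trace are illustrative assumptions, and this is not necessarily the method implemented in ASDP.

```python
import numpy as np

def moving_average(x, n):
    """Causal moving average: mean of the current sample and the n-1 before it."""
    c = np.cumsum(np.insert(x, 0, 0.0))
    out = np.empty_like(x)
    out[:n - 1] = c[1:n] / np.arange(1, n)        # ramp-up at the start
    out[n - 1:] = (c[n:] - c[:-n]) / n
    return out

def sta_lta_trigger(trace, fs, sta_win=0.5, lta_win=10.0, threshold=3.0):
    """Return sample indices where the STA/LTA ratio first exceeds the threshold."""
    sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
    energy = trace ** 2
    sta = moving_average(energy, sta_n)           # short-term average of energy
    lta = moving_average(energy, lta_n)           # long-term average of energy
    ratio = sta / np.maximum(lta, 1e-12)
    onsets = np.flatnonzero((ratio[1:] >= threshold) & (ratio[:-1] < threshold))
    return onsets + 1

# Synthetic trace: background noise with a burst of "signal" two-thirds in.
fs = 100.0                                        # samples per second (assumed)
rng = np.random.default_rng(0)
trace = 0.1 * rng.standard_normal(6000)
trace[4000:4400] += np.sin(2 * np.pi * 5 * np.arange(400) / fs)

for i in sta_lta_trigger(trace, fs):
    print(f"trigger at t = {i / fs:.2f} s")
```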

  6. Digital divide, biometeorological data infrastructures and human vulnerability definition

    NASA Astrophysics Data System (ADS)

    Fdez-Arroyabe, Pablo; Lecha Estela, Luis; Schimt, Falko

    2018-05-01

    The design and implementation of any climate-related health service, nowadays, imply avoiding the digital divide as it means having access and being able to use complex technological devices, massive meteorological data, user's geographic location and biophysical information. This article presents the co-creation, in detail, of a biometeorological data infrastructure, which is a complex platform formed by multiple components: a mainframe, a biometeorological model called Pronbiomet, a relational database management system, data procedures, communication protocols, different software packages, users, datasets and a mobile application. The system produces four daily world maps of the partial density of the atmospheric oxygen and collects user feedback on their health condition. The infrastructure is shown to be a useful tool to delineate individual vulnerability to meteorological changes as one key factor in the definition of any biometeorological risk. This technological approach to study weather-related health impacts is the initial seed for the definition of biometeorological profiles of persons, and for the future development of customized climate services for users in the near future.

  7. Digital divide, biometeorological data infrastructures and human vulnerability definition.

    PubMed

    Fdez-Arroyabe, Pablo; Lecha Estela, Luis; Schimt, Falko

    2018-05-01

    The design and implementation of any climate-related health service, nowadays, imply avoiding the digital divide as it means having access and being able to use complex technological devices, massive meteorological data, user's geographic location and biophysical information. This article presents the co-creation, in detail, of a biometeorological data infrastructure, which is a complex platform formed by multiple components: a mainframe, a biometeorological model called Pronbiomet, a relational database management system, data procedures, communication protocols, different software packages, users, datasets and a mobile application. The system produces four daily world maps of the partial density of the atmospheric oxygen and collects user feedback on their health condition. The infrastructure is shown to be a useful tool to delineate individual vulnerability to meteorological changes as one key factor in the definition of any biometeorological risk. This technological approach to study weather-related health impacts is the initial seed for the definition of biometeorological profiles of persons, and for the future development of customized climate services for users in the near future.

  8. Digital divide, biometeorological data infrastructures and human vulnerability definition

    NASA Astrophysics Data System (ADS)

    Fdez-Arroyabe, Pablo; Lecha Estela, Luis; Schimt, Falko

    2017-06-01

    The design and implementation of any climate-related health service, nowadays, imply avoiding the digital divide as it means having access and being able to use complex technological devices, massive meteorological data, user's geographic location and biophysical information. This article presents the co-creation, in detail, of a biometeorological data infrastructure, which is a complex platform formed by multiple components: a mainframe, a biometeorological model called Pronbiomet, a relational database management system, data procedures, communication protocols, different software packages, users, datasets and a mobile application. The system produces four daily world maps of the partial density of the atmospheric oxygen and collects user feedback on their health condition. The infrastructure is shown to be a useful tool to delineate individual vulnerability to meteorological changes as one key factor in the definition of any biometeorological risk. This technological approach to study weather-related health impacts is the initial seed for the definition of biometeorological profiles of persons, and for the future development of customized climate services for users in the near future.

  9. NASADIG - NASA DEVICE INDEPENDENT GRAPHICS LIBRARY (AMDAHL VERSION)

    NASA Technical Reports Server (NTRS)

    Rogers, J. E.

    1994-01-01

    The NASA Device Independent Graphics Library, NASADIG, can be used with many computer-based engineering and management applications. The library gives the user the opportunity to translate data into effective graphic displays for presentation. The software offers many features which allow the user flexibility in creating graphics. These include two-dimensional plots, subplot projections in 3D-space, surface contour line plots, and surface contour color-shaded plots. Routines for three-dimensional plotting, wireframe surface plots, surface plots with hidden line removal, and surface contour line plots are provided. Other features include polar and spherical coordinate plotting, world map plotting utilizing either cylindrical equidistant or Lambert equal area projection, plot translation, plot rotation, plot blowup, splines and polynomial interpolation, area blanking control, multiple log/linear axes, legends and text control, curve thickness control, and multiple text fonts (18 regular, 4 bold). NASADIG contains several groups of subroutines. Included are subroutines for plot area and axis definition; text set-up and display; area blanking; line style set-up, interpolation, and plotting; color shading and pattern control; legend, text block, and character control; device initialization; mixed alphabets setting; and other useful functions. The usefulness of many routines is dependent on the prior definition of basic parameters. The program's control structure uses a serial-level construct with each routine restricted for activation at some prescribed level(s) of problem definition. NASADIG provides the following output device drivers: Selanar 100XL, VECTOR Move/Draw ASCII and PostScript files, Tektronix 40xx, 41xx, and 4510 Rasterizer, DEC VT-240 (4014 mode), IBM AT/PC compatible with SmartTerm 240 emulator, HP Lasergrafix Film Recorder, QMS 800/1200, DEC LN03+ Laserprinters, and HP LaserJet (Series III). NASADIG is written in FORTRAN and is available for several platforms. NASADIG 5.7 is available for DEC VAX series computers running VMS 5.0 or later (MSC-21801), Cray X-MP and Y-MP series computers running UNICOS (COS-10049), and Amdahl 5990 mainframe computers running UTS (COS-10050). NASADIG 5.1 is available for UNIX-based operating systems (MSC-22001). The UNIX version has been successfully implemented on Sun4 series computers running SunOS, SGI IRIS computers running IRIX, Hewlett Packard 9000 computers running HP-UX, and Convex computers running Convex OS (MSC-22001). The standard distribution medium for MSC-21801 is a set of two 6250 BPI 9-track magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format. The standard distribution medium for COS-10049 and COS-10050 is a 6250 BPI 9-track magnetic tape in UNIX tar format. Other distribution media and formats may be available upon request. The standard distribution medium for MSC-22001 is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. Alternate distribution media and formats are available upon request. With minor modification, the UNIX source code can be ported to other platforms including IBM PC/AT series computers and compatibles. NASADIG is also available bundled with TRASYS, the Thermal Radiation Analysis System (COS-10026, DEC VAX version; COS-10040, CRAY version).

  10. NASADIG - NASA DEVICE INDEPENDENT GRAPHICS LIBRARY (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Rogers, J. E.

    1994-01-01

    The NASA Device Independent Graphics Library, NASADIG, can be used with many computer-based engineering and management applications. The library gives the user the opportunity to translate data into effective graphic displays for presentation. The software offers many features which allow the user flexibility in creating graphics. These include two-dimensional plots, subplot projections in 3D-space, surface contour line plots, and surface contour color-shaded plots. Routines for three-dimensional plotting, wireframe surface plots, surface plots with hidden line removal, and surface contour line plots are provided. Other features include polar and spherical coordinate plotting, world map plotting utilizing either cylindrical equidistant or Lambert equal area projection, plot translation, plot rotation, plot blowup, splines and polynomial interpolation, area blanking control, multiple log/linear axes, legends and text control, curve thickness control, and multiple text fonts (18 regular, 4 bold). NASADIG contains several groups of subroutines. Included are subroutines for plot area and axis definition; text set-up and display; area blanking; line style set-up, interpolation, and plotting; color shading and pattern control; legend, text block, and character control; device initialization; mixed alphabets setting; and other useful functions. The usefulness of many routines is dependent on the prior definition of basic parameters. The program's control structure uses a serial-level construct with each routine restricted for activation at some prescribed level(s) of problem definition. NASADIG provides the following output device drivers: Selanar 100XL, VECTOR Move/Draw ASCII and PostScript files, Tektronix 40xx, 41xx, and 4510 Rasterizer, DEC VT-240 (4014 mode), IBM AT/PC compatible with SmartTerm 240 emulator, HP Lasergrafix Film Recorder, QMS 800/1200, DEC LN03+ Laserprinters, and HP LaserJet (Series III). NASADIG is written in FORTRAN and is available for several platforms. NASADIG 5.7 is available for DEC VAX series computers running VMS 5.0 or later (MSC-21801), Cray X-MP and Y-MP series computers running UNICOS (COS-10049), and Amdahl 5990 mainframe computers running UTS (COS-10050). NASADIG 5.1 is available for UNIX-based operating systems (MSC-22001). The UNIX version has been successfully implemented on Sun4 series computers running SunOS, SGI IRIS computers running IRIX, Hewlett Packard 9000 computers running HP-UX, and Convex computers running Convex OS (MSC-22001). The standard distribution medium for MSC-21801 is a set of two 6250 BPI 9-track magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format. The standard distribution medium for COS-10049 and COS-10050 is a 6250 BPI 9-track magnetic tape in UNIX tar format. Other distribution media and formats may be available upon request. The standard distribution medium for MSC-22001 is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. Alternate distribution media and formats are available upon request. With minor modification, the UNIX source code can be ported to other platforms including IBM PC/AT series computers and compatibles. NASADIG is also available bundled with TRASYS, the Thermal Radiation Analysis System (COS-10026, DEC VAX version; COS-10040, CRAY version).

  11. High-tech breakthrough DNA scanner for reading sequence and detecting gene mutation: A powerful 1 lb, 20 μm resolution, 16-bit personal scanner (PS) that scans 17 inch x 14 inch x-ray film in 48 s, with laser, UV and white light sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeineh, J.A.; Zeineh, M.M.; Zeineh, R.A.

    1993-06-01

    The 17 inch x 14 inch X-ray film, gels, and blots are widely used in DNA research. However, DNA laser scanners are costly and unaffordable for the majority of surveyed biotech scientists who need them. The high-tech breakthrough analytical personal scanner (PS) presented in this report is an inexpensive 1 lb hand-held scanner priced at 2-4% of the bulky and costly 30-95 lb conventional laser scanners. This PS scanner is affordable from an operating budget, and biotechnologists, who originate most science breakthroughs, can acquire it to enhance their speed, accuracy, and productivity. Compared to conventional laser scanners that are currently available only through hard-to-get capital-equipment budgets, the new PS scanner offers improved spatial resolution of 20 μm, higher speed (scanning up to 17 inch x 14 inch molecular X-ray film in 48 s), 1-32,768 gray levels (16 bits), student routines, versatility, and, most important, affordability. Its programs image the film, read DNA sequences automatically, and detect gene mutation. In parallel to the wide laboratory use of PC computers instead of mainframes, this PS scanner might become an integral part of a PC-PS powerful and cost-effective system where the PS performs the digital imaging and the PC acts on the data.

  12. A novel dismantling process of waste printed circuit boards using water-soluble ionic liquid.

    PubMed

    Zeng, Xianlai; Li, Jinhui; Xie, Henghua; Liu, Lili

    2013-10-01

    Recycling processes for waste printed circuit boards (WPCBs) have been well established in terms of scientific research and field pilots. However, current dismantling procedures for WPCBs have restricted the recycling process, due to their low efficiency and negative impacts on the environment and human health. This work aimed to seek an environmentally friendly dismantling process through heating with a water-soluble ionic liquid to separate electronic components and tin solder from two main types of WPCBs: cathode ray tubes and computer mainframes. The work systematically investigates the influencing factors, heating mechanism, and optimal parameters for opening solder connections on WPCBs during the dismantling process, and addresses its environmental performance and economic assessment. The results obtained demonstrate that the optimal temperature, retention time, and turbulence resulting from impeller rotation during the dismantling process were 250 °C, 12 min, and 45 rpm, respectively. Nearly 90% of the electronic components were separated from the WPCBs under the optimal experimental conditions. This novel process offers the possibility of large industrial-scale operations for separating electronic components and recovering tin solder, and for a more efficient and environmentally sound process for WPCBs recycling. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Compiler-Assisted Multiple Instruction Rollback Recovery Using a Read Buffer. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Alewine, Neal Jon

    1993-01-01

    Multiple instruction rollback (MIR) is a technique that provides rapid recovery from transient processor failures and has been implemented in hardware by researchers and in mainframe computers. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs were also developed which remove rollback data hazards directly with data flow manipulations, thus eliminating the need for most data redundancy hardware. Compiler-assisted techniques to achieve multiple instruction rollback recovery are addressed. It is observed that some data hazards resulting from instruction rollback can be resolved more efficiently by providing hardware redundancy while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard removal transformations. Experimental performance evaluations were conducted which indicate improved efficiency over previous hardware-based and compiler-based schemes. Various enhancements to the compiler transformations and to the data redundancy hardware developed for the compiler-assisted MIR scheme are described and evaluated. The final topic deals with the application of compiler-assisted MIR techniques to aid in exception repair and branch repair in a speculative execution architecture.
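    A toy sketch (not taken from the thesis) may help illustrate what a rollback data hazard is and how a compiler-driven renaming transformation removes it: if an instruction overwrites a register that was written or read within the last N instructions, a rollback of up to N instructions would re-execute code that sees the clobbered value. The instruction format, rollback depth, and renaming pass below are illustrative assumptions only.

        # Toy three-address code: (dest, src1, src2); rollback depth N is assumed.
        ROLLBACK_DEPTH = 4

        program = [
            ("r1", "r2", "r3"),   # i0: r1 = r2 op r3
            ("r4", "r1", "r2"),   # i1: r4 = r1 op r2
            ("r1", "r4", "r5"),   # i2: redefines r1 within N instructions of i0/i1
            ("r6", "r1", "r4"),   # i3
        ]

        def rollback_hazards(prog, depth):
            """Conservative check: flag (index, register) pairs where a rollback of
            up to `depth` instructions could re-read a clobbered register."""
            hazards = []
            for i, (dest, _s1, _s2) in enumerate(prog):
                for j in range(max(0, i - depth), i):
                    if dest in prog[j]:          # earlier instruction wrote or read `dest`
                        hazards.append((i, dest))
                        break
            return hazards

        def rename_hazards(prog, depth):
            """Naive compiler-style fix: give each hazardous definition a fresh register
            and patch its later uses (hardware such as a read buffer covers the rest)."""
            prog = [list(ins) for ins in prog]
            for n, (i, reg) in enumerate(rollback_hazards(prog, depth)):
                fresh = "t%d" % n
                prog[i][0] = fresh
                for k in range(i + 1, len(prog)):
                    if prog[k][1] == reg: prog[k][1] = fresh
                    if prog[k][2] == reg: prog[k][2] = fresh
                    if prog[k][0] == reg: break   # value redefined; stop patching
            return [tuple(ins) for ins in prog]

        print(rollback_hazards(program, ROLLBACK_DEPTH))   # [(2, 'r1')]
        print(rename_hazards(program, ROLLBACK_DEPTH))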

  14. Development of novel hybrid flexure-based microgrippers for precision micro-object manipulation.

    PubMed

    Mohd Zubir, Mohd Nashrul; Shirinzadeh, Bijan; Tian, Yanling

    2009-06-01

    This paper describes the process of developing a microgripper that is capable of high precision and fidelity manipulation of micro-objects. The design adopts the concept of flexure-based hinges on its joints to provide the rotational motion, thus eliminating the inherent nonlinearities associated with the application of conventional rigid hinges. A combination of two modeling techniques, namely, pseudorigid body model and finite element analysis was utilized to expedite the prototyping procedure, which leads to the establishment of a high performance mechanism. A new hybrid compliant structure integrating cantilever beam and flexural hinge configurations within the microgripper mechanism mainframe has been developed. This concept provides a novel approach to harness the advantages within each individual configuration while mutually compensating the limitations inherent between them. A wire electrodischarge machining technique was utilized to fabricate the gripper out of high grade aluminum alloy (Al 7075T6). Experimental studies were conducted on the model to obtain various correlations governing the gripper performance as well as for model verification. The experimental results demonstrate a high level of compliance in comparison to the computational results. A high amplification characteristic and a maximum achievable stroke of 100 μm can be achieved.

  15. Development of novel hybrid flexure-based microgrippers for precision micro-object manipulation

    NASA Astrophysics Data System (ADS)

    Mohd Zubir, Mohd Nashrul; Shirinzadeh, Bijan; Tian, Yanling

    2009-06-01

    This paper describes the process of developing a microgripper that is capable of high precision and fidelity manipulation of micro-objects. The design adopts the concept of flexure-based hinges on its joints to provide the rotational motion, thus eliminating the inherent nonlinearities associated with the application of conventional rigid hinges. A combination of two modeling techniques, namely, pseudorigid body model and finite element analysis was utilized to expedite the prototyping procedure, which leads to the establishment of a high performance mechanism. A new hybrid compliant structure integrating cantilever beam and flexural hinge configurations within the microgripper mechanism mainframe has been developed. This concept provides a novel approach to harness the advantages within each individual configuration while mutually compensating the limitations inherent between them. A wire electrodischarge machining technique was utilized to fabricate the gripper out of high grade aluminum alloy (Al 7075T6). Experimental studies were conducted on the model to obtain various correlations governing the gripper performance as well as for model verification. The experimental results demonstrate a high level of compliance in comparison to the computational results. A high amplification characteristic and a maximum achievable stroke of 100 μm can be achieved.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colbert, C.; Moles, D.R.

    This paper reports that the authors developed for the Air Force the Mark VI Personal Identity Verifier (PIV) for controlling access to a fixed or mobile ICBM site, a computer terminal, or mainframe. The Mark VI records the digitized silhouettes of four fingers of each hand on an AT&T smart card. Like fingerprints, finger shapes, lengths, and widths constitute an unguessable biometric password. A Security Officer enrolls an authorized person who places each hand, in turn, on a backlighted panel. An overhead scanning camera records the right and left hand reference templates on the smart card. The Security Officer adds to the card: name, personal identification number (PIN), and access restrictions such as permitted days of the week, times of day, and doors. To gain access, the cardowner inserts the card into a reader slot and places either hand on the panel. The resulting access template is matched to the reference template by three sameness algorithms. The final match score is an average of 12 scores (each of the four fingers, matched for shape, length, and width), expressing the degree of sameness. (A perfect match would score 100.00.) The final match score is compared to a predetermined score (threshold), generating an accept or reject decision.
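    A minimal sketch of the decision rule described above may make it concrete; the structure is assumed from the description, and the threshold and score values below are hypothetical, not Mark VI parameters.

        ATTRIBUTES = ("shape", "length", "width")

        def verify(scores, threshold=90.0):
            """scores: dict finger -> dict attribute -> sameness score on a 0-100 scale."""
            all_scores = [scores[f][a] for f in scores for a in ATTRIBUTES]
            assert len(all_scores) == 12, "expected 4 fingers x 3 attributes"
            final = sum(all_scores) / len(all_scores)   # final match score
            return ("accept" if final >= threshold else "reject"), final

        # hypothetical access attempt
        attempt = {
            "index":  {"shape": 97.2, "length": 95.8, "width": 93.4},
            "middle": {"shape": 98.1, "length": 96.0, "width": 94.7},
            "ring":   {"shape": 96.5, "length": 97.3, "width": 92.9},
            "little": {"shape": 95.0, "length": 94.2, "width": 91.8},
        }
        print(verify(attempt))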

  17. Flexible missile autopilot design studies with PC-MATLAB/386

    NASA Technical Reports Server (NTRS)

    Ruth, Michael J.

    1989-01-01

    Development of a responsive, high-bandwidth missile autopilot for airframes which have structural modes of unusually low frequency presents a challenging design task. Such systems are viable candidates for modern, state-space control design methods. The PC-MATLAB interactive software package provides an environment well-suited to the development of candidate linear control laws for flexible missile autopilots. The strengths of MATLAB include: (1) exceptionally high speed (MATLAB's version for 80386-based PC's offers benchmarks approaching minicomputer and mainframe performance); (2) ability to handle large design models of several hundred degrees of freedom, if necessary; and (3) broad extensibility through user-defined functions. To characterize MATLAB capabilities, a simplified design example is presented. This involves interactive definition of an observer-based state-space compensator for a flexible missile autopilot design task. MATLAB capabilities and limitations, in the context of this design task, are then summarized.
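    The kind of observer-based state-space compensator design described above can be sketched in a few lines. The sketch below uses Python/SciPy rather than PC-MATLAB, and a toy two-state pitch model rather than the paper's flexible airframe; the matrices and pole locations are made up for illustration.

        import numpy as np
        from scipy.signal import place_poles

        # toy plant: states = [pitch angle, pitch rate], one fin input, angle measured
        A = np.array([[0.0, 1.0],
                      [0.0, -0.5]])
        B = np.array([[0.0],
                      [10.0]])
        C = np.array([[1.0, 0.0]])

        # state-feedback gain K placing the regulator poles (values are illustrative)
        K = place_poles(A, B, [-4.0 + 3.0j, -4.0 - 3.0j]).gain_matrix

        # observer gain L by duality, with faster estimator poles
        L = place_poles(A.T, C.T, [-12.0, -15.0]).gain_matrix.T

        # closed loop of plant plus observer-based compensator (u = -K @ x_hat)
        Acl = np.block([[A,             -B @ K],
                        [L @ C,  A - B @ K - L @ C]])
        print(np.linalg.eigvals(Acl))   # regulator and observer poles together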

  18. Velocity Profile Characterization for the 5-CM Agent Fate Wind Tunnels

    DTIC Science & Technology

    2008-01-01

    denominator in the turbulence intensity) decreases near the floor. As can be seen, the turbulence intensity ranges from about 0.5 to 2% for the low...Profiles The friction velocity calculated by the above procedure is a factor of two larger than the operational profile. It is difficult to see how the...the toolbar, see Figure 5. 2. Connect appropriate length co-axial cable and probe holder to desired input channel on the IFA300 mainframe. 3. Install

  19. Worldwide Report, Telecommunications Policy, Research and Development, No. 277.

    DTIC Science & Technology

    1983-07-01

    JPRS 83810, July 1983. Distribution Statement A: Approved for public release; distribution unlimited. Worldwide Report: Telecommunications...business would choose to attack would lead to increased charges for consumers, especially in rural areas. /'The Phone Book', by Ian Reinecke and...hefty minicomputer power for distributed data processing and it is in this field that the low-end mainframe market is being squeezed out by 32

  20. Enterprise storage report for the 1990's

    NASA Technical Reports Server (NTRS)

    Moore, Fred

    1991-01-01

    Data processing has become an increasingly vital function, if not the most vital function, in most businesses today. No longer only a mainframe domain, the data processing enterprise also includes the midrange and workstation platforms, either local or remote. This expanded view of the enterprise has encouraged more and more businesses to take a strategic, long-range view of information management rather than the short-term tactical approaches of the past. Some of the significant aspects of data storage in the enterprise for the 1990's are highlighted.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granhus, B.; Heid, S.

    Den norske statsoljeselskap a.s. (Statoil), a major Norwegian oil company, has used a mainframe (VM/CMS) based occupational health information system (OHIS) since 1991. The system is distributed among 11 offshore platforms, two refineries and three office centers. It contains 25,000 medical records, 1,500 workplace records, and 6,500 material safety data sheet (MSDS) records. The paper deals with the experiences and challenges met during the development of this system and a new client/server based version for Windows®. In 1992 the Norwegian Data Inspectorate introduced new legislation setting extremely strict standards for data protection and privacy. This demanded new solutions not yet utilized for systems of this scale. The solution implements a fully encrypted data flow between the users of the medical modules, while the non-sensitive data from the other modules are not encrypted. This involves the use of a special "smart-card" containing the user privileges as well as the encryption key. The system will combine the advantages of a local system together with the integration force of a centralized system. The new system was operational by February 1996. The paper also summarizes the experiences we have had with our OHIS, areas of good and bad cost/benefit, development pitfalls, and which factors are most important for customer satisfaction. This is very important because of the ever increasing demand for efficiency together with company reorganization and changing technology.

  2. The €100 lab: A 3D-printable open-source platform for fluorescence microscopy, optogenetics, and accurate temperature control during behaviour of zebrafish, Drosophila, and Caenorhabditis elegans

    PubMed Central

    Maia Chagas, Andre; Prieto-Godino, Lucia L.; Arrenberg, Aristides B.

    2017-01-01

    Small, genetically tractable species such as larval zebrafish, Drosophila, or Caenorhabditis elegans have become key model organisms in modern neuroscience. In addition to their low maintenance costs and easy sharing of strains across labs, one key appeal is the possibility to monitor single or groups of animals in a behavioural arena while controlling the activity of select neurons using optogenetic or thermogenetic tools. However, the purchase of a commercial solution for these types of experiments, including an appropriate camera system as well as a controlled behavioural arena, can be costly. Here, we present a low-cost and modular open-source alternative called ‘FlyPi’. Our design is based on a 3D-printed mainframe, a Raspberry Pi computer, and high-definition camera system as well as Arduino-based optical and thermal control circuits. Depending on the configuration, FlyPi can be assembled for well under €100 and features optional modules for light-emitting diode (LED)-based fluorescence microscopy and optogenetic stimulation as well as a Peltier-based temperature stimulator for thermogenetics. The complete version with all modules costs approximately €200 or substantially less if the user is prepared to ‘shop around’. All functions of FlyPi can be controlled through a custom-written graphical user interface. To demonstrate FlyPi’s capabilities, we present its use in a series of state-of-the-art neurogenetics experiments. In addition, we demonstrate FlyPi’s utility as a medical diagnostic tool as well as a teaching aid at Neurogenetics courses held at several African universities. Taken together, the low cost and modular nature as well as fully open design of FlyPi make it a highly versatile tool in a range of applications, including the classroom, diagnostic centres, and research labs. PMID:28719603

  3. The €100 lab: A 3D-printable open-source platform for fluorescence microscopy, optogenetics, and accurate temperature control during behaviour of zebrafish, Drosophila, and Caenorhabditis elegans.

    PubMed

    Maia Chagas, Andre; Prieto-Godino, Lucia L; Arrenberg, Aristides B; Baden, Tom

    2017-07-01

    Small, genetically tractable species such as larval zebrafish, Drosophila, or Caenorhabditis elegans have become key model organisms in modern neuroscience. In addition to their low maintenance costs and easy sharing of strains across labs, one key appeal is the possibility to monitor single or groups of animals in a behavioural arena while controlling the activity of select neurons using optogenetic or thermogenetic tools. However, the purchase of a commercial solution for these types of experiments, including an appropriate camera system as well as a controlled behavioural arena, can be costly. Here, we present a low-cost and modular open-source alternative called 'FlyPi'. Our design is based on a 3D-printed mainframe, a Raspberry Pi computer, and high-definition camera system as well as Arduino-based optical and thermal control circuits. Depending on the configuration, FlyPi can be assembled for well under €100 and features optional modules for light-emitting diode (LED)-based fluorescence microscopy and optogenetic stimulation as well as a Peltier-based temperature stimulator for thermogenetics. The complete version with all modules costs approximately €200 or substantially less if the user is prepared to 'shop around'. All functions of FlyPi can be controlled through a custom-written graphical user interface. To demonstrate FlyPi's capabilities, we present its use in a series of state-of-the-art neurogenetics experiments. In addition, we demonstrate FlyPi's utility as a medical diagnostic tool as well as a teaching aid at Neurogenetics courses held at several African universities. Taken together, the low cost and modular nature as well as fully open design of FlyPi make it a highly versatile tool in a range of applications, including the classroom, diagnostic centres, and research labs.

  4. Enterprise storage report for the 1990's

    NASA Technical Reports Server (NTRS)

    Moore, Fred

    1992-01-01

    Data processing has become an increasingly vital function, if not the most vital function, in most businesses today. No longer only a mainframe domain, the data processing enterprise also includes the midrange and workstation platforms, either local or remote. This expanded view of the enterprise has encouraged more and more businesses to take a strategic, long-range view of information management rather than the short-term tactical approaches of the past. This paper will highlight some of the significant aspects of data storage in the enterprise for the 1990's.

  5. Development of a microcomputer data base of manufacturing, installation, and operating experience for the NSSS designer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borchers, W.A.; Markowski, E.S.

    1986-01-01

    Future nuclear steam supply systems (NSSSs) will be designed in an environment of powerful micro hardware and software and these systems will be linked by local area networks (LAN). With such systems, individual NSSS designers and design groups will establish and maintain local data bases to replace existing manual files and data sources. One such effort of this type in Combustion Engineering's (C-E's) NSSS engineering organization is the establishment of a data base of historical manufacturing, installation, and operating experience to provide designers with information to improve on current designs and practices. In contrast to large mainframe or minicomputer data bases, which compile industry-wide data, the data base described here is implemented on a microcomputer, is design specific, and contains a level of detail that is of interest to system and component designers. DBASE III, a popular microcomputer data base management software package, is used. In addition to the immediate benefits provided by the data base, the development itself provided a vehicle for identifying procedural and control aspects that need to be addressed in the environment of local microcomputer data bases. This paper describes the data base and provides some observations on the development, use, and control of local microcomputer data bases in a design organization.

  6. Sources of Cryogenic Data and Information

    NASA Astrophysics Data System (ADS)

    Mohling, R. A.; Hufferd, W. L.; Marquardt, E. D.

    It is commonly known that cryogenic data, technology, and information are applied across many military, National Aeronautics and Space Administration (NASA), and civilian product lines. Before 1950, however, there was no centralized US source of cryogenic technology data. The Cryogenic Data Center of the National Bureau of Standards (NBS) maintained a database of cryogenic technical documents that served the national need well from the mid 1950s to the early 1980s. The database, maintained on a mainframe computer, was a highly specific bibliography of cryogenic literature and thermophysical properties that covered over 100 years of data. In 1983, however, the Cryogenic Data Center was discontinued when NBS's mission and scope were redefined. In 1998, NASA contracted with the Chemical Propulsion Information Agency (CPIA) and Technology Applications, Inc. (TAI) to reconstitute and update Cryogenic Data Center information and establish a self-sufficient entity to provide technical services for the cryogenic community. The Cryogenic Information Center (CIC) provided this service until 2004, when it was discontinued due to a lack of market interest. The CIC technical assets were distributed to NASA Marshall Space Flight Center and the National Institute of Standards and Technology. Plans are under way in 2006 for CPIA to launch an e-commerce cryogenic website to offer bibliography data with capability to download cryogenic documents.

  7. National Geochronological Database

    USGS Publications Warehouse

    Revised by Sloan, Jan; Henry, Christopher D.; Hopkins, Melanie; Ludington, Steve; Original database by Zartman, Robert E.; Bush, Charles A.; Abston, Carl

    2003-01-01

    The National Geochronological Data Base (NGDB) was established by the United States Geological Survey (USGS) to collect and organize published isotopic (also known as radiometric) ages of rocks in the United States. The NGDB (originally known as the Radioactive Age Data Base, RADB) was started in 1974. A committee appointed by the Director of the USGS was given the mission to investigate the feasibility of compiling the published radiometric ages for the United States into a computerized data bank for ready access by the user community. A successful pilot program, which was conducted in 1975 and 1976 for the State of Wyoming, led to a decision to proceed with the compilation of the entire United States. For each dated rock sample reported in published literature, a record containing information on sample location, rock description, analytical data, age, interpretation, and literature citation was constructed and included in the NGDB. The NGDB was originally constructed and maintained on a mainframe computer, and later converted to a Helix Express relational database maintained on an Apple Macintosh desktop computer. The NGDB and a program to search the data files were published and distributed on Compact Disc-Read Only Memory (CD-ROM) in standard ISO 9660 format as USGS Digital Data Series DDS-14 (Zartman and others, 1995). As of May 1994, the NGDB consisted of more than 18,000 records containing over 30,000 individual ages, which is believed to represent approximately one-half the number of ages published for the United States through 1991. Because the organizational unit responsible for maintaining the database was abolished in 1996, and because we wanted to provide the data in more usable formats, we have reformatted the data, checked and edited the information in some records, and provided this online version of the NGDB. This report describes the changes made to the data and formats, and provides instructions for the use of the database in geographic information system (GIS) applications. The data are provided in .mdb (Microsoft Access), .xls (Microsoft Excel), and .txt (tab-separated value) formats. We also provide a single non-relational file that contains a subset of the data for ease of use.
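    For illustration, a few lines suffice to load the tab-separated release of the database into a script; the file name, encoding, and column name below are assumptions, not the actual NGDB schema.

        import csv

        # hypothetical file name; the real release ships several .txt tables
        with open("ngdb_ages.txt", newline="", encoding="latin-1") as f:
            rows = list(csv.DictReader(f, delimiter="\t"))

        print(len(rows), "dated-sample records")
        # filter to one state, assuming a 'STATE' column exists in the table
        wyoming = [r for r in rows if r.get("STATE") == "WY"]
        print(len(wyoming), "records for Wyoming")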

  8. The business of demographics.

    PubMed

    Russell, C

    1984-06-01

    The emergence of "demographics" in the past 15 years is a vital tool for American business research and planning. Tracing demographic trends became important for businesses when traditional consumer markets splintered with the enormous changes since the 1960s in US population growth, age structure, geographic distribution, income, education, living arrangements, and life-styles. The mass of reliable, small-area demographic data needed for market estimates and projections became available with the electronic census--public release of Census Bureau census and survey data on computer tape, beginning with the 1970 census. Census Bureau tapes as well as printed reports and microfiche are now widely accessible at low cost through summary tape processing centers designated by the bureau and its 12 regional offices and State Data Center Program. Data accessibility, plummeting computer costs, and businessess' unfamiliarity with demographics spawned the private data industry. By 1984, 70 private companies were offering demographic services to business clients--customized information repackaged from public data or drawn from proprietary data bases created from such data. Critics protest the for-profit use of public data by companies able to afford expensive mainframe computer technology. Business people defend their rights to public data as taxpaying ceitzens, but they must ensure that the data are indeed used for the public good. They must also question the quality of demographic data generated by private companies. Business' demographic expertise will improve when business schools offer training in demography, as few now do, though 40 of 88 graduate-level demographic programs now include business-oriented courses. Lower cost, easier access to business demographics is growing as more census data become available on microcomputer diskettes and through on-line linkages with large data bases--from private data companies and the Census Bureau itself. A directory of private and public demographic resources is appended, including forecasting, consulting and research services available.

  9. Web client and ODBC access to legacy database information: a low cost approach.

    PubMed Central

    Sanders, N. W.; Mann, N. H.; Spengler, D. M.

    1997-01-01

    A new method has been developed for the Department of Orthopaedics of Vanderbilt University Medical Center to access departmental clinical data. Previously this data was stored only in the medical center's mainframe DB2 database; it is now additionally stored in a departmental SQL database. Access to this data is available via any ODBC compliant front-end or a web client. With a small budget and no full time staff, we were able to give our department on-line access to many years' worth of patient data that was previously inaccessible. PMID:9357735
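    A minimal sketch of the kind of ODBC front-end access described above, using the pyodbc package; the data source name, credentials, table, and column names are illustrative assumptions, not the department's actual schema.

        import pyodbc

        # connect through an ODBC data source configured for the departmental SQL server
        conn = pyodbc.connect("DSN=OrthoClinical;UID=report_user;PWD=********")
        cursor = conn.cursor()

        # example query against a hypothetical patient-visit table
        cursor.execute(
            "SELECT patient_id, visit_date, diagnosis_code "
            "FROM clinic_visits WHERE visit_date >= ?",
            "1995-01-01")
        for patient_id, visit_date, diagnosis_code in cursor.fetchall():
            print(patient_id, visit_date, diagnosis_code)
        conn.close()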

  10. Wind-tunnel investigation at supersonic speeds of a remote-controlled canard missile with a free-rolling-tail brake torque system

    NASA Technical Reports Server (NTRS)

    Blair, A. B., Jr.

    1985-01-01

    Wind tunnel tests were conducted at Mach numbers 1.70, 2.16, and 2.86 to determine the static aerodynamic characteristics of a cruciform canard-controlled missile with fixed or free rolling tailfin afterbodies. Mechanical coupling effects of the free-rolling-tail afterbody were investigated by using an electronic electromagnetic brake system providing arbitrary tail-fin brake torques with continuous measurements of tail-to-mainframe torque and tail roll rate. Remote-controlled canards were deflected to provide pitch, yaw, and roll control. Results indicate that the induced rolling moment coefficients due to canard yaw control are reduced and linearized for the free-rolling-tail (free-tail) configuration. The canards of the latter provide conventional roll control for the entire angle-of-attack test range. For the free-tail configuration, the induced rolling moment coefficient due to canard yaw control increased and the canard roll control decreased with increases in brake torque, which simulated bearing friction torque. It appears that a compromise in regard to bearing friction, for example, low-cost bearings with some friction, may allow satisfactory free-tail aerodynamic characteristics that include reductions in adverse rolling-moment coefficients and lower tail roll rates.

  11. Karlsruhe Database for Radioactive Wastes (KADABRA) - Accounting and Management System for Radioactive Waste Treatment - 12275

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Himmerkus, Felix; Rittmeyer, Cornelia

    2012-07-01

    The data management system KADABRA was designed according to the purposes of the Central Decontamination Department (HDB) of the Wiederaufarbeitungsanlage Karlsruhe Rueckbau- und Entsorgungs-GmbH (WAK GmbH), which is specialized in the treatment and conditioning of radioactive waste. The layout considers the major treatment processes of the HDB as well as regulatory and legal requirements. KADABRA is designed as an SAG ADABAS application on an IBM System z mainframe. The main function of the system is the data management of all processes related to treatment, transfer and storage of radioactive material within HDB. KADABRA records the relevant data concerning radioactive residues, interim products and waste products as well as the production parameters relevant for final disposal. Analytical data from the laboratory and non-destructive assay systems, that describe the chemical and radiological properties of residues, production batches, interim products as well as final waste products, can be linked to the respective dataset for documentation and declaration. The system enables the operator to trace the radioactive material through processing and storage. Information on the actual status of the material as well as radiological data and storage position can be gained immediately on request. A variety of programs with access to the database allow the generation of individual reports on periodic or special request. KADABRA offers a high security standard and is constantly adapted to the recent requirements of the organization. (authors)

  12. Easy boundary definition for EGUN

    NASA Astrophysics Data System (ADS)

    Becker, R.

    1989-06-01

    The relativistic electron optics program EGUN [1] has reached a broad distribution, and many users have asked for an easier way of boundary input. A preprocessor to EGUN has been developed that accepts polygonal input of boundary points, and offers features such as rounding off of corners, shifting and squeezing of electrodes and simple input of slanted Neumann boundaries. This preprocessor can either be used on a PC that is linked to a mainframe using the FORTRAN version of EGUN, or in connection with the version EGNc, which also runs on a PC. In any case, direct graphic response on the PC greatly facilitates the creation of correct input files for EGUN.
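    One simple way to implement the corner rounding such a boundary preprocessor offers is to cut back the two edges meeting at a corner and bridge them with points on a quadratic Bezier curve; the sketch below only illustrates the idea and is not the EGUN preprocessor's actual method, and the coordinates are made up.

        def round_corner(p0, corner, p2, cutback=0.3, n=8):
            """Return points that replace `corner`; `cutback` is the fraction of each
            adjacent edge (measured from the corner) replaced by the rounded arc."""
            (x0, y0), (xc, yc), (x2, y2) = p0, corner, p2
            ax, ay = xc + cutback * (x0 - xc), yc + cutback * (y0 - yc)   # cut-back point on edge 1
            bx, by = xc + cutback * (x2 - xc), yc + cutback * (y2 - yc)   # cut-back point on edge 2
            pts = []
            for k in range(n + 1):
                t = k / n
                pts.append(((1 - t) ** 2 * ax + 2 * (1 - t) * t * xc + t ** 2 * bx,
                            (1 - t) ** 2 * ay + 2 * (1 - t) * t * yc + t ** 2 * by))
            return pts

        # example: round the 90-degree corner of an electrode outline
        print(round_corner((0.0, 10.0), (0.0, 0.0), (10.0, 0.0)))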

  13. Fan Noise Prediction System Development: Source/Radiation Field Coupling and Workstation Conversion for the Acoustic Radiation Code

    NASA Technical Reports Server (NTRS)

    Meyer, H. D.

    1993-01-01

    The Acoustic Radiation Code (ARC) is a finite element program used on the IBM mainframe to predict far-field acoustic radiation from a turbofan engine inlet. In this report, requirements for developers of internal aerodynamic codes regarding use of their program output as input for the ARC are discussed. More specifically, the particular input needed from the Bolt, Beranek and Newman/Pratt and Whitney (turbofan source noise generation) Code (BBN/PWC) is described. In a separate analysis, a method of coupling the source and radiation models, that recognizes waves crossing the interface in both directions, has been derived. A preliminary version of the coupled code has been developed and used for initial evaluation of coupling issues. Results thus far have shown that reflection from the inlet is sufficient to indicate that full coupling of the source and radiation fields is needed for accurate noise predictions. Also, for this contract, the ARC has been modified for use on the Sun and Silicon Graphics Iris UNIX workstations. Changes and additions involved in this effort are described in an appendix.

  14. Optical storage networking

    NASA Astrophysics Data System (ADS)

    Mohr, Ulrich

    2001-11-01

    For efficient business continuance and backup of mission-critical data, an inter-site storage network is required. Where traditional telecommunications costs are prohibitive for all but the largest organizations, there is an opportunity for regional carriers to deliver an innovative storage service. This session reveals how a combination of optical networking and protocol-aware SAN gateways can provide an extended storage networking platform with the lowest cost of ownership and the highest possible degree of reliability, security and availability. Companies of every size, with mainframe and open-systems environments, can afford to use this integrated service. Three major applications are explained: channel extension, Network Attached Storage (NAS), and Storage Area Networks (SAN), and how optical networks address their specific requirements. One advantage of DWDM is the ability for protocols such as ESCON, Fibre Channel, ATM and Gigabit Ethernet to be transported natively and simultaneously across a single fiber pair, and the ability to multiplex many individual fiber pairs over a single pair, thereby reducing fiber cost and recovering fiber pairs already in use. An optical storage network enables a new class of service providers, Storage Service Providers (SSP), aiming to deliver value to the enterprise by managing storage, backup, replication and restoration as an outsourced service.

  15. EIA model documentation: World oil refining logistics demand model, "WORLD" reference manual. Version 1.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-04-11

    This manual is intended primarily for use as a reference by analysts applying the WORLD model to regional studies. It also provides overview information on WORLD features of potential interest to managers and analysts. Broadly, the manual covers WORLD model features in progressively increasing detail. Section 2 provides an overview of the WORLD model, how it has evolved, what its design goals are, what it produces, and where it can be taken with further enhancements. Section 3 reviews model management covering data sources, managing over-optimization, calibration and seasonality, check-points for case construction and common errors. Section 4 describes in detail the WORLD system, including: data and program systems in overview; details of mainframe and PC program control and files; model generation, size management, debugging and error analysis; use with different optimizers; and reporting and results analysis. Section 5 provides a detailed description of every WORLD model data table, covering model controls, case and technology data. Section 6 goes into the details of WORLD matrix structure. It provides an overview, describes how regional definitions are controlled and defines the naming conventions for all model rows, columns, right-hand sides, and bounds. It also includes a discussion of the formulation of product blending and specifications in WORLD. Several Appendices supplement the main sections.

  16. TARGET/CRYOCHIL - THERMODYNAMIC ANALYSIS AND SUBSCALE MODELING OF SPACE-BASED ORBIT TRANSFER VEHICLE CRYOGENIC PROPELLANT RESUPPLY

    NASA Technical Reports Server (NTRS)

    Defelice, D. M.

    1994-01-01

    The resupply of the cryogenic propellants is an enabling technology for space-based transfer vehicles. As part of NASA Lewis's ongoing efforts in micro-gravity fluid management, thermodynamic analysis and subscale modeling techniques have been developed to support an on-orbit test bed for cryogenic fluid management technologies. These efforts have been incorporated into two FORTRAN programs, TARGET and CRYOCHIL. The TARGET code is used to determine the maximum temperature at which the filling of a given tank can be initiated and the tank subsequently filled to a specified pressure and fill level without venting. The main process is the transfer of the energy stored in the thermal mass of the tank walls into the inflowing liquid. This process is modeled by examining the end state of the no-vent fill process. This state is assumed to be at thermal equilibrium between the tank and the fluid which is well mixed and saturated at the tank pressure. No specific assumptions are made as to the processes or the intermediate thermodynamic states during the filling. It is only assumed that the maximum tank pressure occurs at the final state. This assumption implies that, during the initial phases of the filling, the injected liquid must pass through the bulk vapor in such a way that it absorbs a sufficient amount of its superheat so that moderate tank pressures can be maintained. It is believed that this is an achievable design goal for liquid injection systems. TARGET can be run with any fluid for which the user has a properties data base. Currently it will only run for hydrogen, oxygen, and nitrogen since pressure-enthalpy data sets have been included for these fluids only. CRYOCHIL's primary function is to predict the optimum liquid charge to be injected for each of a series of charge-hold-vent chilldown cycles. This information can then be used with specified mass flow rates and valve response times to control a liquid injection system for tank chilldown operations. This will ensure that the operations proceed quickly and efficiently. These programs are written in FORTRAN for batch execution on IBM 370 class mainframe computers. They require 360K of RAM for execution. The standard distribution medium for this program is a 1600 BPI 9-track magnetic tape in EBCDIC format. TARGET/CRYOCHIL was developed in 1988.
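    The no-vent-fill energy balance described above can be sketched in a few lines under simplifying assumptions (lumped tank wall, thermal equilibrium at the final state, fluid saturated at the final pressure, negligible initial tank contents and external heat leak). The CoolProp property library stands in here for the program's own property tables, and the numbers in the example are made up; this is a sketch of the idea, not TARGET itself.

        from CoolProp.CoolProp import PropsSI

        def max_initial_wall_temp(fluid, p_final, fill_fraction, tank_volume,
                                  wall_mass, wall_cp, h_inlet):
            """Highest wall temperature (K) from which a tank of `tank_volume` (m3) can be
            filled to `fill_fraction` (by volume) at `p_final` (Pa) without venting."""
            rho_l = PropsSI('D', 'P', p_final, 'Q', 0, fluid)    # saturated liquid density
            rho_v = PropsSI('D', 'P', p_final, 'Q', 1, fluid)    # saturated vapor density
            u_l = PropsSI('U', 'P', p_final, 'Q', 0, fluid)      # sat. liquid internal energy
            u_v = PropsSI('U', 'P', p_final, 'Q', 1, fluid)      # sat. vapor internal energy
            t_final = PropsSI('T', 'P', p_final, 'Q', 0, fluid)  # equilibrium temperature

            m_liq = rho_l * fill_fraction * tank_volume
            m_vap = rho_v * (1.0 - fill_fraction) * tank_volume
            m_in = m_liq + m_vap      # all fluid is assumed to enter as liquid at h_inlet

            # First law for the fill (no venting, no external heat):
            #   wall_mass*wall_cp*(T_wall0 - t_final) = (m_liq*u_l + m_vap*u_v) - m_in*h_inlet
            fluid_energy_gain = m_liq * u_l + m_vap * u_v - m_in * h_inlet
            return t_final + fluid_energy_gain / (wall_mass * wall_cp)

        # illustrative numbers only: 1 m3 hydrogen tank, 50 kg aluminum wall,
        # inlet liquid saturated at 1.2 bar, filled to 50% liquid volume at 2 bar
        h_in = PropsSI('H', 'P', 1.2e5, 'Q', 0, 'Hydrogen')
        print(max_initial_wall_temp('Hydrogen', 2.0e5, 0.50, 1.0, 50.0, 900.0, h_in))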

  17. Combining neural networks and genetic algorithms for hydrological flow forecasting

    NASA Astrophysics Data System (ADS)

    Neruda, Roman; Srejber, Jan; Neruda, Martin; Pascenko, Petr

    2010-05-01

    We present a neural network approach to rainfall-runoff modeling for small-size river basins based on several time series of hourly measured data. Different neural networks are considered for short-time runoff predictions (from one to six hours lead time) based on runoff and rainfall data observed in previous time steps. Correlation analysis shows that runoff data, short-time rainfall history, and aggregated API values are the most significant data for the prediction. Neural models of multilayer perceptron and radial basis function networks with different numbers of units are used and compared with more traditional linear time series predictors. Out of a possible 48 hours of relevant history of all the input variables, the most important ones are selected by means of input filters created by a genetic algorithm. The genetic algorithm works with a population of binary encoded vectors defining input selection patterns. Standard genetic operators of two-point crossover, random bit-flipping mutation, and tournament selection were used. The evaluation of the objective function of each individual consists of several rounds of building and testing a particular neural network model. The whole procedure is rather computationally demanding (taking hours to days on a desktop PC), thus a high-performance mainframe computer has been used for our experiments. Results based on two years' worth of data from the Ploucnice river in Northern Bohemia suggest that the main problems connected with this approach to modeling are overtraining, which can lead to poor generalization, and the relatively small number of extreme events, which makes it difficult for a model to predict the amplitude of an event. Thus, experiments with both absolute and relative runoff predictions were carried out. In general it can be concluded that the neural models show about 5 per cent improvement in terms of efficiency coefficient over linear models. Multilayer perceptrons with one hidden layer trained by the backpropagation algorithm and predicting relative runoff show the best behavior so far. Utilizing the genetically evolved input filter improves the performance by yet another 5 per cent. In the future we would like to continue with experiments in on-line prediction using real-time data from the Smeda River with a 6-hour lead time forecast. Following the operational reality we will focus on classification of the runoffs into flood alert levels, and reformulation of the time series prediction task as a classification problem. The main goal of all this work is to improve the flood warning system operated by the Czech Hydrometeorological Institute.
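    A hedged sketch of the evolutionary input-selection scheme described above: binary masks over the candidate lagged inputs, tournament selection, two-point crossover, and bit-flip mutation. The fitness function below is a stand-in; in the study each mask is scored by building and testing a neural network model, and all the constants are illustrative assumptions.

        import random

        N_BITS = 48           # candidate hourly lags of the input variables
        POP, GENS = 30, 40
        random.seed(0)

        def fitness(mask):
            # placeholder objective favouring few, recent lags; the real objective is the
            # efficiency coefficient of the model trained on the selected inputs
            return sum((N_BITS - i) / N_BITS for i, b in enumerate(mask) if b) - 0.3 * sum(mask)

        def tournament(pop, k=3):
            return max(random.sample(pop, k), key=fitness)

        def two_point_crossover(a, b):
            i, j = sorted(random.sample(range(N_BITS), 2))
            return a[:i] + b[i:j] + a[j:], b[:i] + a[i:j] + b[j:]

        def mutate(mask, p=1.0 / N_BITS):
            return [bit ^ 1 if random.random() < p else bit for bit in mask]

        population = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
        for _ in range(GENS):
            offspring = []
            while len(offspring) < POP:
                c1, c2 = two_point_crossover(tournament(population), tournament(population))
                offspring.extend([mutate(c1), mutate(c2)])
            population = offspring[:POP]

        best = max(population, key=fitness)
        print("selected input lags:", [i for i, b in enumerate(best) if b])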

  18. The Database Query Support Processor (QSP)

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The number and diversity of databases available to users continues to increase dramatically. Currently, the trend is towards decentralized, client-server architectures that (on the surface) are less expensive to acquire, operate, and maintain than information architectures based on centralized, monolithic mainframes. The database query support processor (QSP) effort evaluates the performance of a network level, heterogeneous database access capability. Air Force Materiel Command's Rome Laboratory has developed an approach, based on ANSI standard X3.138 - 1988, 'The Information Resource Dictionary System (IRDS)', to seamless access to heterogeneous databases based on extensions to data dictionary technology. To successfully query a decentralized information system, users must know what data are available from which source, or have the knowledge and system privileges necessary to find out this information. Privacy and security considerations prohibit free and open access to every information system in every network. Even in completely open systems, time required to locate relevant data (in systems of any appreciable size) would be better spent analyzing the data, assuming the original question was not forgotten. Extensions to data dictionary technology have the potential to more fully automate the search and retrieval for relevant data in a decentralized environment. Substantial amounts of time and money could be saved by not having to teach users what data resides in which systems and how to access each of those systems. Information describing data and how to get it could be removed from the application and placed in a dedicated repository where it belongs. The result is simplified applications that are less brittle and less expensive to build and maintain. Software technology providing the required functionality is off the shelf. The key difficulty is in defining the metadata required to support the process. The database query support processor effort will provide quantitative data on the amount of effort required to implement an extended data dictionary at the network level, add new systems, adapt to changing user needs, and provide sound estimates on operations and maintenance costs and savings.

  19. PACS-Graz, 1985-2000: from a scientific pilot to a state-wide multimedia radiological information system

    NASA Astrophysics Data System (ADS)

    Gell, Guenther

    2000-05-01

    In 1971/72 the implementation began of a computerized radiological documentation system at the Department of Radiology of the University of Graz, which developed over the years into a full RIS. In 1985 a scientific cooperation with SIEMENS started to develop a PACS. The two systems were linked and evolved into a highly integrated RIS-PACS for the state-wide hospital system in Styria. During its lifetime the RIS, originally implemented in FORTRAN on a UNIVAC 494 mainframe, migrated to a PDP15, on to a PDP11, then VAX and Alphas. The flexible original record structure with variable length fields and the powerful retrieval language were retained. The data acquisition part with the user interface was rewritten several times and many service programs have been added. During our PACS cooperation many ideas like the folder concept or functionalities of the GUI were designed and tested and were then implemented in the SIENET product. The current RIS/PACS supports the whole workflow in the Radiology Department. It is installed in a 2,300-bed university hospital and the smaller hospitals of the State of Styria. Modalities from different vendors are connected via DICOM to the RIS (modality worklist) and to the PACS. PACS subsystems from other vendors have been integrated. Images are distributed to referring clinics and for teleconsultation and image processing, and reports are available on line to all connected hospitals. We spent great efforts to guarantee optimal support of the workflow and to ensure an enhanced cost/benefit ratio for each user (class). Another special feature is selective image distribution. Using the high-level retrieval language, individual filters can be constructed easily to implement any image distribution policy agreed upon by radiologists and referring clinicians.

  20. The DBBC environment for millimeter radioastronomy

    NASA Astrophysics Data System (ADS)

    Tuccari, Gino; Comoretto, Giovanni; Melis, Andrea; Buttaccio, Salvo

    2012-09-01

    The Digital Base Band Converter project developed in the last decade produced a general architecture and a class of boards, firmware and software, giving the possibility to build a general purpose back-end system for VLBI or single-dish observational activities. Such an approach suggests the realization of a digital radio system, i.e. a receiver with conversion not realized with analogue techniques, maintaining only amplification stages in the analogue domain. This solution can be applied up to a maximum of around 16 GHz, the present limit for the instantaneous input band in the latest version of the DBBC project, while in the millimeter frequency range the 0.5-2 GHz maximum of the previous versions allows the intermediate frequency to be processed in the digital domain. A description of the elements developed in the DBBC project is presented, with their use in different environments. The architecture is composed of a PC-controlled mainframe and of different modules that can be combined in a very flexible way in order to realize different instruments. The instrument can be expanded or retrofitted to meet increasing observational demands. Available modules include ADC converters, processing boards, and physical interfaces (VSI and 10G Ethernet). Several applications have already been implemented and used in radioastronomic observations: a DDC (Direct Digital Conversion) for VLBI observations, a Polyphase Digital Filter Bank, and a Multiband Scansion Spectrometer. Other applications currently being studied for additional functionalities include a spectropolarimeter, a linear-to-circular polarization converter, an RFI-mitigation tool, and a phase-reference holographic tool-kit.

  1. Manufacture, alignment and measurement for a reflective triplet optics in imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Yuan, Liyin; He, Zhiping; Wang, Yueming; Lv, Gang

    2016-09-01

    Reflective triplet (RT) optics is an optical form with decenters and tilts of all three mirrors. It can be used in a spectrometer as collimator and reimager to obtain fine optical and spectral performance. To alleviate thermal and assembly stress deformation, integrated opto-mechanical design suggests that, as with all the machine elements and the mainframe, the mirror substrates be aluminum. All the mirrors are manufactured by single-point diamond turning technology and measured by interferometer or profilometer. Because of retro-reflection by the grating or prism and reimaging away from the object field, the three-mirror optical path of the RT alone has some aberrations. Its alignment and measurement therefore need an aberration-corrected measuring optical system with auxiliary plane and spherical mirrors, in which the RT optics is used in four passes. The manufacture, alignment and measurement of an RT optics used in a long-wave infrared grating spectrometer are discussed here. We realized the manufacture, alignment and testing of the RT optics of a long-wave infrared spectrometer by CMM and interferometer. Wavefront error tests by interferometer and surface profiles measured by profilometer indicate that the performance of the manufactured mirrors exceeds the requirements. The interferogram of the assembled RT optics shows that the RMS wavefront error is less than 0.0493λ at 10.6 μm, versus the design value of 0.0207λ.

  2. Workstation-Based Real-Time Mesoscale Modeling Designed for Weather Support to Operations at the Kennedy Space Center and Cape Canaveral Air Station

    NASA Technical Reports Server (NTRS)

    Manobianco, John; Zack, John W.; Taylor, Gregory E.

    1996-01-01

    This paper describes the capabilities and operational utility of a version of the Mesoscale Atmospheric Simulation System (MASS) that has been developed to support operational weather forecasting at the Kennedy Space Center (KSC) and Cape Canaveral Air Station (CCAS). The implementation of local, mesoscale modeling systems at KSC/CCAS is designed to provide detailed short-range (less than 24 h) forecasts of winds, clouds, and hazardous weather such as thunderstorms. Short-range forecasting is a challenge for daily operations and manned and unmanned launches, since KSC/CCAS is located in central Florida where the weather during the warm season is dominated by mesoscale circulations like the sea breeze. For this application, MASS has been modified to run on a Stardent 3000 workstation. Workstation-based, real-time numerical modeling requires a compromise between the requirement to run the system fast enough that the output can be used before it expires and the desire to improve the simulations by increasing resolution and using more detailed physical parameterizations. It is now feasible to run high-resolution mesoscale models such as MASS on local workstations to provide timely forecasts at a fraction of the cost required to run these models on mainframe supercomputers. MASS has been running in the Applied Meteorology Unit (AMU) at KSC/CCAS since January 1994 for the purpose of system evaluation. In March 1995, the AMU began sending real-time MASS output to the forecasters and meteorologists at CCAS, Spaceflight Meteorology Group (Johnson Space Center, Houston, Texas), and the National Weather Service (Melbourne, Florida). However, MASS is not yet an operational system. The final decision whether to transition MASS for operational use will depend on a combination of forecaster feedback, the AMU's final evaluation results, and the life-cycle costs of the operational system.

  3. Spacecraft Avionics Software Development Then and Now: Different but the Same

    NASA Technical Reports Server (NTRS)

    Mangieri, Mark L.; Garman, John (Jack); Vice, Jason

    2012-01-01

    NASA has always been in the business of balancing new technologies and techniques to achieve human space travel objectives. NASA's historic Software Production Facility (SPF) was developed to serve complex avionics software solutions during an era dominated by mainframes, tape drives, and lower level programming languages. These systems have proven themselves resilient enough to serve the Shuttle Orbiter Avionics life cycle for decades. The SPF and its predecessor, the Software Development Lab (SDL), at NASA's Johnson Space Center (JSC) hosted flight software (FSW) engineering, development, simulation, and test. It was active from the beginning of Shuttle Orbiter development in 1972 through the end of the shuttle program in the summer of 2011, almost 40 years. NASA's Kedalion engineering analysis lab is on the forefront of validating and using many contemporary avionics HW/SW development and integration techniques, which represent new paradigms to NASA's heritage culture in avionics software engineering. Kedalion has validated many of the Orion project's HW/SW engineering techniques borrowed from the adjacent commercial aircraft avionics environment, inserting new techniques and skills into the Multi-Purpose Crew Vehicle (MPCV) Orion program. Using contemporary agile techniques, COTS products, early rapid prototyping, in-house expertise and tools, and customer collaboration, NASA has adopted a cost-effective paradigm that is currently serving Orion effectively. This paper will explore and contrast differences in technology employed over the years of NASA's space program, due largely to technological advances in hardware and software systems, while acknowledging that the basic software engineering and integration paradigms share many similarities.

  4. Type I diabetes mellitus in human and chimpanzee: a comparison of kyoto encyclopedia of genes and genomes pathway.

    PubMed

    Wiwanitkit, Viroj

    2007-04-01

    Diabetes is a worldwide medical problem and is a significant cause of morbidity and mortality. Type 1 diabetes results from the autoimmune destruction of insulin-producing beta cells in the pancreas. The identification of causative genes for the autoimmune disease type 1 diabetes in humans has made significant progress in recent years. Studies of pathways for type 1 diabetes in other living things can give useful information on the nature of type 1 diabetes. Here, the author used a new pathway technology to compare type 1 diabetes mellitus in the human and the chimpanzee. According to the comparison, the mainframes of pathways are similar for both the human and the chimpanzee. These results can imply a close relation between the human and the chimpanzee. They also confirm usage of the chimpanzee model for studies of type 1 diabetes pathophysiology.

  5. Interactive graphics for the Macintosh: software review of FlexiGraphs.

    PubMed

    Antonak, R F

    1990-01-01

    While this product is clearly unique, its usefulness to individuals outside small business environments is somewhat limited. FlexiGraphs is, however, a reasonable first attempt to design a microcomputer software package that controls data through interactive editing within a graph. Although the graphics capabilities of mainframe programs such as MINITAB (Ryan, Joiner, & Ryan, 1981) and the graphic manipulations available through exploratory data analysis (e.g., Velleman & Hoaglin, 1981) will not be surpassed anytime soon by this program, a researcher may want to add this program to a software library containing other Macintosh statistics, drawing, and graphics programs if only to obtain the easy-to-obtain curve fitting and line smoothing options. I welcome the opportunity to review the enhanced "scientific" version of FlexiGraphs that the author of the program indicates is currently under development. An MS-DOS version of the program should be available within the year.

  6. Reference manual for generation and analysis of Habitat Time Series: version II

    USGS Publications Warehouse

    Milhous, Robert T.; Bartholow, John M.; Updike, Marlys A.; Moos, Alan R.

    1990-01-01

    The selection of an instream flow requirement for water resource management often requires the review of how the physical habitat changes through time. This review is referred to as "Time Series Analysis." The Time Series Library (TSLIB) is a group of programs to enter, transform, analyze, and display time series data for use in stream habitat assessment. A time series may be defined as a sequence of data recorded or calculated over time. Examples might be historical monthly flow, predicted monthly weighted usable area, daily electrical power generation, annual irrigation diversion, and so forth. The time series can be analyzed, both descriptively and analytically, to understand the importance of the variation in the events over time. This is especially useful in the development of instream flow needs based on habitat availability. The TSLIB group of programs assumes that you have an adequate study plan to guide you in your analysis. You need to already have knowledge about such things as time period and time step, species and life stages to consider, and appropriate comparisons or statistics to be produced and displayed or tabulated. Knowing your destination, you must first evaluate whether TSLIB can get you there. Remember, data are not answers. This publication is a reference manual to TSLIB and is intended to be a guide to the process of using the various programs in TSLIB. This manual is essentially limited to the hands-on use of the various programs. A TSLIB user interface program (called RTSM) has been developed to provide an integrated working environment where the user has a brief on-line description of each TSLIB program with the capability to run the TSLIB program while in the user interface. For information on the RTSM program, refer to Appendix F. Before applying the computer models described herein, it is recommended that the user enroll in the short course "Problem Solving with the Instream Flow Incremental Methodology (IFIM)." This course is offered by the Aquatic Systems Branch of the National Ecology Research Center. For more information about the TSLIB software, refer to the Memorandum of Understanding. Chapter 1 provides a brief introduction to the Instream Flow Incremental Methodology and TSLIB. Other chapters in this manual provide information on the different aspects of using the models. The information contained in the other chapters includes (2) acquisition, entry, manipulation, and listing of streamflow data; (3) entry, manipulation, and listing of the habitat-versus-streamflow function; (4) transferring streamflow data; (5) water resources systems analysis; (6) generation and analysis of daily streamflow and habitat values; (7) generation of the time series of monthly habitats; (8) manipulation, analysis, and display of month time series data; and (9) generation, analysis, and display of annual time series data. Each section includes documentation for the programs therein with at least one page of information for each program, including a program description, instructions for running the program, and sample output. The Appendixes contain the following: (A) sample file formats; (B) descriptions of default filenames; (C) alphabetical summary of batch-procedure files; (D) installing and running TSLIB on a microcomputer; (E) running TSLIB on a CDC Cyber computer; (F) using the TSLIB user interface program (RTSM); and (G) running WATSTORE on the USGS Amdahl mainframe computer.
The number for this version of TSLIB--Version II-- is somewhat arbitrary, as the TSLIB programs were collected into a library some time ago; but operators tended to use and manage them as individual programs. Therefore, we will consider the group of programs from the past that were only on the CDC Cyber computer as Version 0; the programs from the past that were on both the Cyber and the IBM-compatible microcomputer as Version I; and the programs contained in this reference manual as Version II.
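
    The core transformation the manual describes, converting a streamflow time series into a habitat time series through a habitat-versus-streamflow function, can be illustrated with a short sketch. The sketch below is not TSLIB code; the interpolation routine, table values, and monthly flows are all hypothetical and only show the general idea of chapters 3 and 7.

      # Illustrative sketch (not part of TSLIB): converting a monthly streamflow
      # time series into a habitat time series by linear interpolation of a
      # habitat-versus-streamflow function. Names and data are hypothetical.

      def interpolate(x, xs, ys):
          """Piecewise-linear interpolation of y at x, clamped to the table ends."""
          if x <= xs[0]:
              return ys[0]
          if x >= xs[-1]:
              return ys[-1]
          for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
              if x0 <= x <= x1:
                  return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

      # Habitat-versus-streamflow function: flow (cfs) vs. weighted usable area.
      flows_table = [50, 100, 200, 400, 800]
      wua_table   = [120, 300, 450, 380, 200]

      # Historical monthly flows for one water year (hypothetical).
      monthly_flows = [75, 90, 150, 320, 610, 700, 450, 260, 180, 120, 95, 80]

      monthly_habitat = [interpolate(q, flows_table, wua_table) for q in monthly_flows]
      print(monthly_habitat)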

  7. Organization of the secure distributed computing based on multi-agent system

    NASA Astrophysics Data System (ADS)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Nowadays, developing methods for distributed computing receives much attention. One such method is the use of multi-agent systems. The organization of distributed computing based on conventional networked computers can face security threats arising from the computational processes themselves. The authors have developed a unified agent algorithm for a control system governing the operation of computing network nodes, with networked PCs used as the computing nodes. The proposed multi-agent control system for implementing distributed computing makes it possible, in a short time, to harness the processing power of the computers of any existing network to solve large tasks by creating a distributed computing system. Agents deployed on a computer network can configure a distributed computing system, distribute the computational load among the computers they operate, and optimize the distributed computing system according to the computing power of the computers on the network. The number of computers in the system can be increased by connecting new computers to the network, which increases the overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computing. This organization of the distributed computing system reduces problem-solving time and increases the fault tolerance (vitality) of the computing processes in a changing computing environment (dynamic change of the number of computers on the network). The developed multi-agent system detects cases of falsification of the results of the distributed system, which could otherwise lead to wrong decisions. In addition, the system checks and corrects wrong results.
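
    The falsification-detection idea can be sketched very simply: a central agent splits the work in proportion to node speed and cross-checks each result by recomputing it on a second node. This is only a minimal illustration, not the authors' algorithm; the agent functions, speed ratings, and falsification rate are hypothetical.

      # Minimal sketch of load distribution plus result verification by replication.
      import random

      def node_agent(subtask, falsify=False):
          """Each node agent computes a partial sum; a dishonest node may falsify it."""
          result = sum(subtask)
          return result + 1 if falsify else result

      def central_agent(data, node_speeds):
          # Split the work among nodes in proportion to their relative speed ratings.
          total_speed = sum(node_speeds)
          chunks, start = [], 0
          for k, speed in enumerate(node_speeds):
              if k == len(node_speeds) - 1:
                  chunk = data[start:]              # last node takes the remainder
              else:
                  size = max(1, round(len(data) * speed / total_speed))
                  chunk = data[start:start + size]
                  start += size
              chunks.append(chunk)

          verified_total = 0
          for chunk in chunks:
              primary = node_agent(chunk, falsify=random.random() < 0.2)
              replica = node_agent(chunk)           # recompute on an independent node
              if primary != replica:
                  primary = replica                 # discard the falsified result
              verified_total += primary
          return verified_total

      print(central_agent(list(range(100)), node_speeds=[1.0, 2.0, 1.5]))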

  8. Analysis, biomedicine, collaboration, and determinism challenges and guidance: wish list for biopharmaceuticals on the interface of computing and statistics.

    PubMed

    Goodman, Arnold F

    2011-11-01

    I have personally witnessed processing advance from desk calculators and mainframes, through timesharing and PCs, to supercomputers and cloud computing. I have also witnessed resources grow from too little data into almost too much data, and from theory dominating data into data beginning to dominate theory while needing new theory. Finally, I have witnessed problems advance from simple in a lone discipline into becoming almost too complex in multiple disciplines, as well as approaches evolve from analysis driving solutions into solutions by data mining beginning to drive the analysis itself. How we do all of this has transitioned from competition overcoming collaboration into collaboration starting to overcome competition, as well as what is done being more important than how it is done has transitioned into how it is done becoming as important as what is done. In addition, what or how we do it being more important than what or how we should actually do it has shifted into what or how we should do it becoming just as important as what or how we do it, if not more so. Although we have come a long way in both our methodology and technology, are they sufficient for our current or future complex and multidisciplinary problems with their massive databases? Since the apparent answer is not a resounding yes, we are presented with tremendous challenges and opportunities. This personal perspective adapts my background and experience to be appropriate for biopharmaceuticals. In these times of exploding change, informed perspectives on what challenges should be explored with accompanying guidance may be even more valuable than the far more typical literature reviews in conferences and journals of what has already been accomplished without challenges or guidance. Would we believe that an architect who designs a skyscraper determines the skyscraper's exact exterior, interior and furnishings or only general characteristics? Why not increase dependability of conclusions in genetics and translational medicine by enriching genetic determinism with uncertainty? Uncertainty is our friend if exploited or potential enemy if ignored. Genes design proteins, but they cannot operationally determine all protein characteristics: they begin a long chain of complex events occurring many times via intricate feedbacks plus interactions which are not all determined. Genes influence proteins and diseases by just determining their probability distributions, not by determining them. From any sample of diseased people, we may more successfully infer gene probability distributions than genes themselves, and it poses an issue to resolve. My position is supported by 2-3 articles a week in ScienceDaily, 2011.

  9. RESOURCESAT-2: a mission for Earth resources management

    NASA Astrophysics Data System (ADS)

    Venkata Rao, M.; Gupta, J. P.; Rattan, Ram; Thyagarajan, K.

    2006-12-01

    The Indian Space Research Organisation (ISRO) has established an operational remote sensing satellite system by launching its first satellite, IRS-1A, in 1988, followed by a series of IRS spacecraft. The IRS-1C/1D satellites with their unique combination of payloads have taken a lead position in the global remote sensing scenario. Realising the growing user demands for the "multi" level approach in terms of spatial, spectral, temporal and radiometric resolutions, ISRO identified the Resourcesat as a continuity as well as an improved RS satellite. The Resourcesat-1 (IRS-P6) was launched in October 2003 using the PSLV launch vehicle and is in operational service. Resourcesat-2 is its follow-on mission, scheduled for launch in 2008. Each Resourcesat satellite carries three electro-optical cameras as its payload: LISS-3, LISS-4 and AWIFS. All three are multi-spectral push-broom scanners with linear array CCDs as detectors. LISS-3 and AWIFS operate in four identical spectral bands in the VIS-NIR-SWIR range, while LISS-4 is a high resolution camera with three spectral bands in the VIS-NIR range. In order to meet the stringent requirements of band-to-band registration and platform stability, several improvements have been incorporated in the mainframe bus configuration, such as wide-field star trackers, precision gyroscopes, and an on-board GPS receiver. The Resourcesat data finds application in several areas such as agricultural crop discrimination and monitoring, crop acreage/yield estimation, precision farming, water resources, forest mapping, rural infrastructure development, and disaster management, to name a few. A brief description of the payload cameras, spacecraft bus elements, operational modes, and a few applications is presented.

  10. Solving the Tautomeric Equilibrium of Purine Through the Analysis of the Complex Hyperfine Structure of the Four 14N Nuclei

    NASA Astrophysics Data System (ADS)

    Cocinero, Emilio J.; Uriarte, Iciar; Ecija, Patricia; Favero, Laura B.; Spada, Lorenzo; Calabrese, Camilla; Caminati, Walther

    2016-06-01

    Microwave spectroscopy has been restricted to the investigation of small molecules in recent years. However, with the advent of FTMW and CP-FTMW spectroscopies coupled with laser vaporization techniques it has turned into a very competitive methodology in the studies of moderate-size biomolecules. Here, we present the study of purine, characterized by two aromatic rings, one six- and one five-membered, fused together to give a planar aromatic bicycle. Biologically, it is the mainframe of two of the five nucleobases of DNA and RNA. Two tautomers were observed by FTMW spectroscopy coupled to a UV ultrafast laser vaporization system. The population ratio of the two main tautomers [N(7)H]/[N(9)H] is about 1/40 in the gas phase. This contrasts with the solid state, where only the N(7)H species is present, or in solution, where a mixture of both tautomers is observed. For both species, a full quadrupolar hyperfine analysis has been performed. This has led to the determination of the full sets of diagonal quadrupole coupling constants of the four 14N atoms, which have provided crucial information for the unambiguous identification of both species. T. J. Balle and W. H. Flygare, Rev. Sci. Instrum. 52, 33-45, 1981; J.-U. Grabow, W. Stahl and H. Dreizler, Rev. Sci. Instrum. 67, 4072-4084, 1996; G. G. Brown, B. D. Dian, K. O. Douglass, S. M. Geyer, S. T. Shipman and B. H. Pate, Rev. Sci. Instrum. 79, 053103/1-053103/13, 2008; E. J. Cocinero, A. Lesarri, P. Écija, F. J. Basterretxea, J. U. Grabow, J. A. Fernández and F. Castaño, Angew. Chem. Int. Ed. 51, 3119-3124, 2012.

  11. When does a physical system compute?

    PubMed

    Horsman, Clare; Stepney, Susan; Wagner, Rob C; Kendon, Viv

    2014-09-08

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not; leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a 'computational entity', and its critical role in defining when computing is taking place in physical systems.

  12. When does a physical system compute?

    PubMed Central

    Horsman, Clare; Stepney, Susan; Wagner, Rob C.; Kendon, Viv

    2014-01-01

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not; leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a ‘computational entity’, and its critical role in defining when computing is taking place in physical systems. PMID:25197245

  13. High-Performance Computing Systems and Operations | Computational Science |

    Science.gov Websites

    NREL operates high-performance computing (HPC) systems dedicated to advancing energy efficiency and renewable energy technologies. NREL's HPC capabilities include high-performance computing systems and operations.

  14. Non-developmental item computer systems and the malicious software threat

    NASA Technical Reports Server (NTRS)

    Bown, Rodney L.

    1991-01-01

    The following subject areas are covered: a DOD development system - the Army Secure Operating System; non-development commercial computer systems; security, integrity, and assurance of service (SI and A); post delivery SI and A and malicious software; computer system unique attributes; positive feedback to commercial computer systems vendors; and NDI (Non-Development Item) computers and software safety.

  15. Small Universal Bacteria and Plasmid Computing Systems.

    PubMed

    Wang, Xun; Zheng, Pan; Ma, Tongmao; Song, Tao

    2018-05-29

    Bacterial computing is a known candidate in natural computing, the aim being to construct "bacterial computers" for solving complex problems. In this paper, a new kind of bacterial computing system, named the bacteria and plasmid computing system (BP system), is proposed. We investigate the computational power of BP systems with finite numbers of bacteria and plasmids. Specifically, it is obtained in a constructive way that a BP system with 2 bacteria and 34 plasmids is Turing universal. The results provide a theoretical cornerstone to construct powerful bacterial computers and demonstrate a concept of paradigms using a "reasonable" number of bacteria and plasmids for such devices.

  16. Methodical and technological aspects of creation of interactive computer learning systems

    NASA Astrophysics Data System (ADS)

    Vishtak, N. M.; Frolov, D. A.

    2017-01-01

    The article presents a methodology for the development of an interactive computer training system for training power plant personnel. The methods used in the work are a generalization of the content of scientific and methodological sources on the use of computer-based training systems in vocational education, methods of system analysis, and methods of structural and object-oriented modeling of information systems. The relevance of developing interactive computer training systems for preparing personnel in educational and training centers is demonstrated. The development stages of computer training systems are identified, and the factors in the efficient use of an interactive computer training system are analysed. An algorithm for the work performed at each development stage of the interactive computer training system is offered, which makes it possible to optimize the time, financial, and labor expenditure on the creation of the system.

  17. Feasibility of Executing MIMS on Interdata 80.

    DTIC Science & Technology

    CDC 6500 computers; CDC 6600 computers; MIMS (Medical Information Management System); medical information management systems; file structures; computer storage management. The report examines the feasibility of implementing a large information management system on mini-computers. The Medical Information Management System and the Interdata 80 mini-computer were selected as being representative systems. The FORTRAN programs currently being used in MIMS

  18. Commonsense System Pricing; Or, How Much Will that $1,200 Computer Really Cost?

    ERIC Educational Resources Information Center

    Crawford, Walt

    1984-01-01

    Three methods employed to price and sell computer equipment are discussed: computer pricing, hardware pricing, system pricing (system includes complete computer and support hardware system and relatively complete software package). Advantages of system pricing are detailed, the author's system is described, and 10 systems currently available are…

  19. A Model-based Framework for Risk Assessment in Human-Computer Controlled Systems

    NASA Technical Reports Server (NTRS)

    Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  20. Safety Metrics for Human-Computer Controlled Systems

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy G; Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  1. Current state and future direction of computer systems at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Rogers, James L. (Editor); Tucker, Jerry H. (Editor)

    1992-01-01

    Computer systems have advanced at a rate unmatched by any other area of technology. As performance has dramatically increased there has been an equally dramatic reduction in cost. This constant cost performance improvement has precipitated the pervasiveness of computer systems into virtually all areas of technology. This improvement is due primarily to advances in microelectronics. Most people are now convinced that the new generation of supercomputers will be built using a large number (possibly thousands) of high performance microprocessors. Although the spectacular improvements in computer systems have come about because of these hardware advances, there has also been a steady improvement in software techniques. In an effort to understand how these hardware and software advances will affect research at NASA LaRC, the Computer Systems Technical Committee drafted this white paper to examine the current state and possible future directions of computer systems at the Center. This paper discusses selected important areas of computer systems including real-time systems, embedded systems, high performance computing, distributed computing networks, data acquisition systems, artificial intelligence, and visualization.

  2. Noise tolerant spatiotemporal chaos computing.

    PubMed

    Kia, Behnam; Kia, Sarvenaz; Lindner, John F; Sinha, Sudeshna; Ditto, William L

    2014-12-01

    We introduce and design a noise tolerant chaos computing system based on a coupled map lattice (CML) and the noise reduction capabilities inherent in coupled dynamical systems. The resulting spatiotemporal chaos computing system is more robust to noise than a single map chaos computing system. In this CML based approach to computing, under the coupled dynamics, the local noise from different nodes of the lattice diffuses across the lattice, and it attenuates each other's effects, resulting in a system with less noise content and a more robust chaos computing architecture.
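
    The noise-averaging effect of diffusive coupling that the abstract relies on can be illustrated with a small simulation. The sketch below is not the authors' implementation; it simply iterates a ring of noisy logistic maps with nearest-neighbor coupling, using hypothetical parameter values.

      # Illustrative sketch of a coupled map lattice (CML) of noisy logistic maps.
      import random

      def logistic(x, r=4.0):
          return r * x * (1.0 - x)

      def step_cml(state, eps=0.3, noise=0.01):
          """One CML update: local map dynamics plus diffusive coupling plus noise."""
          n = len(state)
          fx = [logistic(x) for x in state]
          new = []
          for i in range(n):
              coupled = (1 - eps) * fx[i] + 0.5 * eps * (fx[(i - 1) % n] + fx[(i + 1) % n])
              coupled += random.gauss(0.0, noise)       # local noise at this node
              new.append(min(max(coupled, 0.0), 1.0))   # keep the state in [0, 1]
          return new

      state = [random.random() for _ in range(64)]
      for _ in range(200):
          state = step_cml(state)
      print("lattice mean after 200 steps:", sum(state) / len(state))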

  3. Noise tolerant spatiotemporal chaos computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kia, Behnam; Kia, Sarvenaz; Ditto, William L.

    We introduce and design a noise tolerant chaos computing system based on a coupled map lattice (CML) and the noise reduction capabilities inherent in coupled dynamical systems. The resulting spatiotemporal chaos computing system is more robust to noise than a single map chaos computing system. In this CML based approach to computing, under the coupled dynamics, the local noise from different nodes of the lattice diffuses across the lattice, and it attenuates each other's effects, resulting in a system with less noise content and a more robust chaos computing architecture.

  4. 10 CFR 35.457 - Therapy-related computer systems.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Therapy-related computer systems. 35.457 Section 35.457... Therapy-related computer systems. The licensee shall perform acceptance testing on the treatment planning system of therapy-related computer systems in accordance with published protocols accepted by nationally...

  5. 10 CFR 35.457 - Therapy-related computer systems.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Therapy-related computer systems. 35.457 Section 35.457... Therapy-related computer systems. The licensee shall perform acceptance testing on the treatment planning system of therapy-related computer systems in accordance with published protocols accepted by nationally...

  6. 10 CFR 35.457 - Therapy-related computer systems.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Therapy-related computer systems. 35.457 Section 35.457... Therapy-related computer systems. The licensee shall perform acceptance testing on the treatment planning system of therapy-related computer systems in accordance with published protocols accepted by nationally...

  7. 10 CFR 35.457 - Therapy-related computer systems.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Therapy-related computer systems. 35.457 Section 35.457... Therapy-related computer systems. The licensee shall perform acceptance testing on the treatment planning system of therapy-related computer systems in accordance with published protocols accepted by nationally...

  8. 10 CFR 35.457 - Therapy-related computer systems.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Therapy-related computer systems. 35.457 Section 35.457... Therapy-related computer systems. The licensee shall perform acceptance testing on the treatment planning system of therapy-related computer systems in accordance with published protocols accepted by nationally...

  9. Advanced information processing system: Local system services

    NASA Technical Reports Server (NTRS)

    Burkhardt, Laura; Alger, Linda; Whittredge, Roy; Stasiowski, Peter

    1989-01-01

    The Advanced Information Processing System (AIPS) is a multi-computer architecture composed of hardware and software building blocks that can be configured to meet a broad range of application requirements. The hardware building blocks are fault-tolerant, general-purpose computers, fault-and damage-tolerant networks (both computer and input/output), and interfaces between the networks and the computers. The software building blocks are the major software functions: local system services, input/output, system services, inter-computer system services, and the system manager. The foundation of the local system services is an operating system with the functions required for a traditional real-time multi-tasking computer, such as task scheduling, inter-task communication, memory management, interrupt handling, and time maintenance. Resting on this foundation are the redundancy management functions necessary in a redundant computer and the status reporting functions required for an operator interface. The functional requirements, functional design and detailed specifications for all the local system services are documented.

  10. ABSENTEE COMPUTATIONS IN A MULTIPLE-ACCESS COMPUTER SYSTEM.

    DTIC Science & Technology

    require user interaction, and the user may therefore want to run these computations 'absentee' (or, user not present). A mechanism is presented which...provides for the handling of absentee computations in a multiple-access computer system. The design is intended to be implementation-independent...Some novel features of the system's design are: a user can switch computations from interactive to absentee (and vice versa), the system can

  11. Biocellion: accelerating computer simulation of multicellular biological system models

    PubMed Central

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-01-01

    Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572
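
    The discrete agent-based modeling style described above (cell agents on a grid coupled to an extracellular field) can be shown in miniature. The following toy sketch does not use Biocellion's actual API; the grid size, nutrient field, consumption rate, and cell movement rule are all hypothetical.

      # Toy agent-based model: cells consume and random-walk over a diffusing nutrient field.
      import random

      SIZE = 20
      nutrient = [[1.0 for _ in range(SIZE)] for _ in range(SIZE)]
      cells = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(30)]

      def diffuse(field, rate=0.1):
          """One explicit diffusion step on the extracellular field (periodic boundaries)."""
          new = [row[:] for row in field]
          for i in range(SIZE):
              for j in range(SIZE):
                  neighbors = [field[(i - 1) % SIZE][j], field[(i + 1) % SIZE][j],
                               field[i][(j - 1) % SIZE], field[i][(j + 1) % SIZE]]
                  new[i][j] += rate * (sum(neighbors) / 4.0 - field[i][j])
          return new

      for step in range(50):
          nutrient = diffuse(nutrient)
          moved = []
          for (i, j) in cells:
              nutrient[i][j] = max(0.0, nutrient[i][j] - 0.05)   # cell consumes nutrient
              di, dj = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)])
              moved.append(((i + di) % SIZE, (j + dj) % SIZE))    # random-walk movement
          cells = moved

      print("remaining nutrient:", sum(sum(row) for row in nutrient))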

  12. On the Computational Power of Spiking Neural P Systems with Self-Organization.

    PubMed

    Wang, Xun; Song, Tao; Gong, Faming; Zheng, Pan

    2016-06-10

    Neural-like computing models are versatile computing mechanisms in the field of artificial intelligence. Spiking neural P systems (SN P systems for short) are one of the recently developed spiking neural network models inspired by the way neurons communicate. The communications among neurons are essentially achieved by spikes, i.e., short electrical pulses. In terms of motivation, SN P systems fall into the third generation of neural network models. In this study, a novel variant of SN P systems, namely SN P systems with self-organization, is introduced, and the computational power of the system is investigated and evaluated. It is proved that SN P systems with self-organization are capable of computing and accepting the family of sets of Turing computable natural numbers. Moreover, with 87 neurons the system can compute any Turing computable recursive function, thus achieving Turing universality. These results demonstrate promising initiatives to solve an open problem raised by Gh. Păun.

  13. On the Computational Power of Spiking Neural P Systems with Self-Organization

    PubMed Central

    Wang, Xun; Song, Tao; Gong, Faming; Zheng, Pan

    2016-01-01

    Neural-like computing models are versatile computing mechanisms in the field of artificial intelligence. Spiking neural P systems (SN P systems for short) are one of the recently developed spiking neural network models inspired by the way neurons communicate. The communications among neurons are essentially achieved by spikes, i.e., short electrical pulses. In terms of motivation, SN P systems fall into the third generation of neural network models. In this study, a novel variant of SN P systems, namely SN P systems with self-organization, is introduced, and the computational power of the system is investigated and evaluated. It is proved that SN P systems with self-organization are capable of computing and accepting the family of sets of Turing computable natural numbers. Moreover, with 87 neurons the system can compute any Turing computable recursive function, thus achieving Turing universality. These results demonstrate promising initiatives to solve an open problem raised by Gh. Păun. PMID:27283843

  14. On the Computational Power of Spiking Neural P Systems with Self-Organization

    NASA Astrophysics Data System (ADS)

    Wang, Xun; Song, Tao; Gong, Faming; Zheng, Pan

    2016-06-01

    Neural-like computing models are versatile computing mechanisms in the field of artificial intelligence. Spiking neural P systems (SN P systems for short) are one of the recently developed spiking neural network models inspired by the way neurons communicate. The communications among neurons are essentially achieved by spikes, i.e., short electrical pulses. In terms of motivation, SN P systems fall into the third generation of neural network models. In this study, a novel variant of SN P systems, namely SN P systems with self-organization, is introduced, and the computational power of the system is investigated and evaluated. It is proved that SN P systems with self-organization are capable of computing and accepting the family of sets of Turing computable natural numbers. Moreover, with 87 neurons the system can compute any Turing computable recursive function, thus achieving Turing universality. These results demonstrate promising initiatives to solve an open problem raised by Gh. Păun.

  15. Optimum spaceborne computer system design by simulation

    NASA Technical Reports Server (NTRS)

    Williams, T.; Kerner, H.; Weatherbee, J. E.; Taylor, D. S.; Hodges, B.

    1973-01-01

    A deterministic simulator is described which models the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. Its use as a tool to study and determine the minimum computer system configuration necessary to satisfy the on-board computational requirements of a typical mission is presented. The paper describes how the computer system configuration is determined in order to satisfy the data processing demand of the various shuttle booster subsystems. The configuration which is developed as a result of studies with the simulator is optimal with respect to the efficient use of computer system resources.

  16. Cluster Computing for Embedded/Real-Time Systems

    NASA Technical Reports Server (NTRS)

    Katz, D.; Kepner, J.

    1999-01-01

    Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.

  17. Systems analysis of the space shuttle. [communication systems, computer systems, and power distribution

    NASA Technical Reports Server (NTRS)

    Schilling, D. L.; Oh, S. J.; Thau, F.

    1975-01-01

    Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.

  18. System-wide power management control via clock distribution network

    DOEpatents

    Coteus, Paul W.; Gara, Alan; Gooding, Thomas M.; Haring, Rudolf A.; Kopcsay, Gerard V.; Liebsch, Thomas A.; Reed, Don D.

    2015-05-19

    An apparatus, method and computer program product for automatically controlling power dissipation of a parallel computing system that includes a plurality of processors. A computing device issues a command to the parallel computing system. A clock pulse-width modulator encodes the command in a system clock signal to be distributed to the plurality of processors. The plurality of processors in the parallel computing system receive the system clock signal including the encoded command, and adjusts power dissipation according to the encoded command.
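
    The general idea of carrying a command inside the clock distribution by modulating pulse widths can be illustrated with a short sketch. This is only an illustration, not the patented mechanism; the duty-cycle values, bit width, and command code are hypothetical.

      # Illustrative sketch: encoding a power-management command in per-cycle pulse widths.
      NARROW, WIDE = 0.25, 0.75      # duty cycles representing bit values 0 and 1

      def encode_command(command, bits=8):
          """Turn an integer command into a list of per-cycle duty cycles, MSB first."""
          return [WIDE if (command >> i) & 1 else NARROW for i in reversed(range(bits))]

      def decode_command(duty_cycles):
          """Recover the command by thresholding each cycle's duty cycle."""
          value = 0
          for duty in duty_cycles:
              value = (value << 1) | (1 if duty > 0.5 else 0)
          return value

      THROTTLE_TO_80_PERCENT = 0xA3          # hypothetical command code
      clock_stream = encode_command(THROTTLE_TO_80_PERCENT)
      assert decode_command(clock_stream) == THROTTLE_TO_80_PERCENT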

  19. RASCAL: A Rudimentary Adaptive System for Computer-Aided Learning.

    ERIC Educational Resources Information Center

    Stewart, John Christopher

    Both the background of computer-assisted instruction (CAI) systems in general and the requirements of a computer-aided learning system which would be a reasonable assistant to a teacher are discussed. RASCAL (Rudimentary Adaptive System for Computer-Aided Learning) is a first attempt at defining a CAI system which would individualize the learning…

  20. Computer system for definition of the quantitative geometry of musculature from CT images.

    PubMed

    Daniel, Matej; Iglic, Ales; Kralj-Iglic, Veronika; Konvicková, Svatava

    2005-02-01

    A computer system for quantitative determination of musculoskeletal geometry from computer tomography (CT) images has been developed. The computer system processes series of CT images to obtain a three-dimensional (3D) model of bony structures in which the effective muscle fibres can be interactively defined. The presented computer system has a flexible modular structure and is also suitable for educational purposes.

  1. Fault recovery for real-time, multi-tasking computer system

    NASA Technical Reports Server (NTRS)

    Hess, Richard (Inventor); Kelly, Gerald B. (Inventor); Rogers, Randy (Inventor); Stange, Kent A. (Inventor)

    2011-01-01

    System and methods for providing a recoverable real time multi-tasking computer system are disclosed. In one embodiment, a system comprises a real time computing environment, wherein the real time computing environment is adapted to execute one or more applications and wherein each application is time and space partitioned. The system further comprises a fault detection system adapted to detect one or more faults affecting the real time computing environment and a fault recovery system, wherein upon the detection of a fault the fault recovery system is adapted to restore a backup set of state variables.
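
    The recovery pattern described in this abstract, periodically saving a backup set of state variables and restoring it when a fault is detected, can be sketched in a few lines. The sketch below is not the patented design; the class, the fault detector, and the state variables are hypothetical.

      # Minimal checkpoint/restore sketch for a recoverable real-time task.
      import copy

      class RecoverableTask:
          def __init__(self, state):
              self.state = state
              self.backup = copy.deepcopy(state)

          def checkpoint(self):
              """Save a consistent snapshot of the state variables."""
              self.backup = copy.deepcopy(self.state)

          def detect_fault(self):
              """Stand-in fault detector: flag obviously corrupted state variables."""
              return any(not isinstance(v, (int, float)) for v in self.state.values())

          def step(self, corrupt=False):
              self.state["altitude"] += 10.0
              if corrupt:
                  self.state["altitude"] = None           # simulated memory corruption
              if self.detect_fault():
                  self.state = copy.deepcopy(self.backup) # roll back to last checkpoint
              else:
                  self.checkpoint()

      task = RecoverableTask({"altitude": 1000.0, "heading": 90.0})
      task.step()
      task.step(corrupt=True)                             # fault detected and recovered
      print(task.state)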

  2. New computing systems, future computing environment, and their implications on structural analysis and design

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Housner, Jerrold M.

    1993-01-01

    Recent advances in computer technology that are likely to impact structural analysis and design of flight vehicles are reviewed. A brief summary is given of the advances in microelectronics, networking technologies, and in the user-interface hardware and software. The major features of new and projected computing systems, including high performance computers, parallel processing machines, and small systems, are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed. The impact of the advances in computer technology on structural analysis and the design of flight vehicles is described. A scenario for future computing paradigms is presented, and the near-term needs in the computational structures area are outlined.

  3. Reducing power consumption during execution of an application on a plurality of compute nodes

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2013-09-10

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: powering up, during compute node initialization, only a portion of computer memory of the compute node, including configuring an operating system for the compute node in the powered up portion of computer memory; receiving, by the operating system, an instruction to load an application for execution; allocating, by the operating system, additional portions of computer memory to the application for use during execution; powering up the additional portions of computer memory allocated for use by the application during execution; and loading, by the operating system, the application into the powered up additional portions of computer memory.
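
    The lazy powering-up of memory that the abstract describes can be mimicked with a simple bank model: only one bank is powered at initialization, and further banks are powered on demand as an application's allocations grow. This is a simplified sketch, not the patented implementation, and the bank sizes are hypothetical.

      # Sketch of powering up memory banks only when allocations require them.
      class BankedMemory:
          def __init__(self, banks=8, bank_size_mb=512):
              self.bank_size_mb = bank_size_mb
              self.powered = [False] * banks
              self.powered[0] = True          # only one bank powered at initialization
              self.used_mb = 0

          def powered_capacity_mb(self):
              return sum(self.powered) * self.bank_size_mb

          def allocate(self, size_mb):
              """Power up additional banks lazily until the request fits."""
              while self.used_mb + size_mb > self.powered_capacity_mb():
                  try:
                      next_bank = self.powered.index(False)
                  except ValueError:
                      raise MemoryError("all banks powered and still insufficient")
                  self.powered[next_bank] = True
              self.used_mb += size_mb

      memory = BankedMemory()
      memory.allocate(300)     # fits in the initially powered bank
      memory.allocate(900)     # forces two more banks to power up
      print(memory.powered, memory.used_mb, "MB in use")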

  4. Mississippi Curriculum Framework for Computer Information Systems Technology. Computer Information Systems Technology (Program CIP: 52.1201--Management Information Systems & Business Data). Computer Programming (Program CIP: 52.1201). Network Support (Program CIP: 52.1290--Computer Network Support Technology). Postsecondary Programs.

    ERIC Educational Resources Information Center

    Mississippi Research and Curriculum Unit for Vocational and Technical Education, State College.

    This document, which is intended for use by community and junior colleges throughout Mississippi, contains curriculum frameworks for two programs in the state's postsecondary-level computer information systems technology cluster: computer programming and network support. Presented in the introduction are program descriptions and suggested course…

  5. Biocellion: accelerating computer simulation of multicellular biological system models.

    PubMed

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information.

  6. Concept of a Cloud Service for Data Preparation and Computational Control on Custom HPC Systems in Application to Molecular Dynamics

    NASA Astrophysics Data System (ADS)

    Puzyrkov, Dmitry; Polyakov, Sergey; Podryga, Viktoriia; Markizov, Sergey

    2018-02-01

    At the present stage of computer technology development, it is possible to study the properties and processes of complex systems at molecular and even atomic levels, for example, by means of molecular dynamics methods. The most interesting problems relate to the study of complex processes under real physical conditions. Solving such problems requires the use of high-performance computing systems of various types, for example, GRID systems and HPC clusters. Given the time-consuming nature of the computational tasks, the need arises for software that automatically and uniformly monitors such computations. A complex computational task can be performed over different HPC systems, which requires output data synchronization between the storage chosen by a scientist and the HPC system used for computations. The design of the computational domain is also a significant problem: it requires complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes the prototype of a cloud service intended for the design of large-volume atomistic systems for further detailed molecular dynamics calculations and for managing those calculations, and presents the part of its concept aimed at initial data generation on the HPC systems.

  7. Computational System For Rapid CFD Analysis In Engineering

    NASA Technical Reports Server (NTRS)

    Barson, Steven L.; Ascoli, Edward P.; Decroix, Michelle E.; Sindir, Munir M.

    1995-01-01

    Computational system comprising modular hardware and software sub-systems developed to accelerate and facilitate use of techniques of computational fluid dynamics (CFD) in engineering environment. Addresses integration of all aspects of CFD analysis process, including definition of hardware surfaces, generation of computational grids, CFD flow solution, and postprocessing. Incorporates interfaces for integration of all hardware and software tools needed to perform complete CFD analysis. Includes tools for efficient definition of flow geometry, generation of computational grids, computation of flows on grids, and postprocessing of flow data. System accepts geometric input from any of three basic sources: computer-aided design (CAD), computer-aided engineering (CAE), or definition by user.

  8. High-Performance Computing Data Center | Energy Systems Integration

    Science.gov Websites

    The Energy Systems Integration Facility's High-Performance Computing Data Center is home to Peregrine, the largest high-performance computing system in the world exclusively dedicated to advancing energy efficiency and renewable energy technologies.

  9. Impacts and Opportunities for Engineering in the Era of Cloud Computing Systems

    DTIC Science & Technology

    2012-01-31

    A Report to the U.S. Department... Executive Summary: This report discusses the impact of cloud computing and the broader revolution in computing on systems, on the disciplines of

  10. T-Check in System-of-Systems Technologies: Cloud Computing

    DTIC Science & Technology

    2010-09-01

    Harrison D. Strowd; Grace A. Lewis. September 2010. Technical Note CMU/SEI-2010... Contents include: types of cloud computing; drivers and barriers to cloud computing adoption; using the T-Check method; deployment views of solutions for testing hypotheses; selecting cloud computing providers; and implementing the T-Check.

  11. 14 CFR 415.123 - Computing systems and software.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 4 2013-01-01 2013-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...

  12. 14 CFR 415.123 - Computing systems and software.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 4 2014-01-01 2014-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...

  13. 14 CFR 415.123 - Computing systems and software.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...

  14. 14 CFR 415.123 - Computing systems and software.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 4 2012-01-01 2012-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...

  15. 14 CFR 415.123 - Computing systems and software.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...

  16. 78 FR 18353 - Guidance for Industry: Blood Establishment Computer System Validation in the User's Facility...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ...; (Formerly FDA-2007D-0393)] Guidance for Industry: Blood Establishment Computer System Validation in the User's Facility, dated April 2013.

  17. Evaluation of Microcomputer-Based Operation and Maintenance Management Systems for Army Water/Wastewater Treatment Plant Operation.

    DTIC Science & Technology

    1986-07-01

    Contents include: functions and applications of an off-line computer-aided operation management system; system comparisons; hardware components; basic functions of a computer-aided operation management system; plant visits; computer-aided operation management systems reviewed for analysis of basic functions; and progress of software system installation.

  18. Computer software.

    PubMed

    Rosenthal, L E

    1986-10-01

    Software is the component in a computer system that permits the hardware to perform the various functions that a computer system is capable of doing. The history of software and its development can be traced to the early nineteenth century. All computer systems are designed to utilize the "stored program concept" as first developed by Charles Babbage in the 1850s. The concept was lost until the mid-1940s, when modern computers made their appearance. Today, because of the complex and myriad tasks that a computer system can perform, there has been a differentiation of types of software. There is software designed to perform specific business applications. There is software that controls the overall operation of a computer system. And there is software that is designed to carry out specialized tasks. Regardless of types, software is the most critical component of any computer system. Without it, all one has is a collection of circuits, transistors, and silicon chips.

  19. Models and techniques for evaluating the effectiveness of aircraft computing systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1982-01-01

    Models, measures, and techniques for evaluating the effectiveness of aircraft computing systems were developed. By "effectiveness" in this context we mean the extent to which the user, i.e., a commercial air carrier, may expect to benefit from the computational tasks accomplished by a computing system in the environment of an advanced commercial aircraft. Thus, the concept of effectiveness involves aspects of system performance, reliability, and worth (value, benefit) which are appropriately integrated in the process of evaluating system effectiveness. Specifically, the primary objectives are: the development of system models that provide a basis for the formulation and evaluation of aircraft computer system effectiveness, the formulation of quantitative measures of system effectiveness, and the development of analytic and simulation techniques for evaluating the effectiveness of a proposed or existing aircraft computer.

  20. A new taxonomy for distributed computer systems based upon operating system structure

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.

    1985-01-01

    Characteristics of the resource structure found in the operating system are considered as a mechanism for classifying distributed computer systems. Since the operating system resources, themselves, are too diversified to provide a consistent classification, the structure upon which resources are built and shared are examined. The location and control character of this indivisibility provides the taxonomy for separating uniprocessors, computer networks, network computers (fully distributed processing systems or decentralized computers) and algorithm and/or data control multiprocessors. The taxonomy is important because it divides machines into a classification that is relevant or important to the client and not the hardware architect. It also defines the character of the kernel O/S structure needed for future computer systems. What constitutes an operating system for a fully distributed processor is discussed in detail.

  1. From photons to big-data applications: terminating terabits

    PubMed Central

    2016-01-01

    Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers. PMID:26809573

  2. From photons to big-data applications: terminating terabits.

    PubMed

    Zilberman, Noa; Moore, Andrew W; Crowcroft, Jon A

    2016-03-06

    Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while a gap grows between the quantity of networked data and the capacity for per system data processing. Despite this, the growth in demand in both task variety and task complexity continues unabated. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation upon the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Considering a networked computer system capable of processing terabits per second, as a benchmark for scalability, we critique the state of the art in commodity computing, and propose a wholesale reconsideration in the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers. © 2016 The Authors.

  3. eXascale PRogramming Environment and System Software (XPRESS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, Barbara; Gabriel, Edgar

    Exascale systems, with a thousand times the compute capacity of today's leading edge petascale computers, are expected to emerge during the next decade. Their software systems will need to facilitate the exploitation of exceptional amounts of concurrency in applications, and ensure that jobs continue to run despite the occurrence of system failures and other kinds of hard and soft errors. Adapting computations at runtime to cope with changes in the execution environment, as well as to improve power and performance characteristics, is likely to become the norm. As a result, considerable innovation is required to develop system support to meet the needs of future computing platforms. The XPRESS project aims to develop and prototype a revolutionary software system for extreme-scale computing for both exascale and strong-scaled problems. The XPRESS collaborative research project will advance the state-of-the-art in high performance computing and enable exascale computing for current and future DOE mission-critical applications and supporting systems. The goals of the XPRESS research project are to: A. enable exascale performance capability for DOE applications, both current and future, B. develop and deliver a practical computing system software X-stack, OpenX, for future practical DOE exascale computing systems, and C. provide programming methods and environments for effective means of expressing application and system software for portable exascale system execution.

  4. Organising a University Computer System: Analytical Notes.

    ERIC Educational Resources Information Center

    Jacquot, J. P.; Finance, J. P.

    1990-01-01

    Thirteen trends in university computer system development are identified, system user requirements are analyzed, critical system qualities are outlined, and three options for organizing a computer system are presented. The three systems include a centralized network, local network, and federation of local networks. (MSE)

  5. Computer-Mediated Communications Systems: Will They Catch On?

    ERIC Educational Resources Information Center

    Cook, Dave; Ridley, Michael

    1990-01-01

    Describes the use of CoSy, a computer conferencing system, by academic librarians at McMaster University in Ontario. Computer-mediated communications systems (CMCS) are discussed, the use of the system for electronic mail and computer conferencing is described, the perceived usefulness of CMCS is examined, and a sidebar explains details of the…

  6. Rhetorical Consequences of the Computer Society: Expert Systems and Human Communication.

    ERIC Educational Resources Information Center

    Skopec, Eric Wm.

    Expert systems are computer programs that solve selected problems by modelling domain-specific behaviors of human experts. These computer programs typically consist of an input/output system that feeds data into the computer and retrieves advice, an inference system using the reasoning and heuristic processes of human experts, and a knowledge…

  7. Design and implementation of a UNIX based distributed computing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Love, J.S.; Michael, M.W.

    1994-12-31

    We have designed, implemented, and are running a corporate-wide distributed processing batch queue on a large number of networked workstations using the UNIX® operating system. Atlas Wireline researchers and scientists have used the system for over a year. The large increase in available computer power has greatly reduced the time required for nuclear and electromagnetic tool modeling. Use of remote distributed computing has simultaneously reduced computation costs and increased usable computer time. The system integrates equipment from different manufacturers, using various CPU architectures, distinct operating system revisions, and even multiple processors per machine. Various differences between the machines have to be accounted for in the master scheduler. These differences include shells, command sets, swap spaces, memory sizes, CPU sizes, and OS revision levels. Remote processing across a network must be performed in a manner that is seamless from the users' perspective. The system currently uses IBM RISC System/6000®, SPARCstation™, HP9000s700, HP9000s800, and DEC Alpha AXP™ machines. Each CPU in the network has its own speed rating, allowed working hours, and workload parameters. The system is designed so that all of the computers in the network can be optimally scheduled without adversely impacting the primary users of the machines. The increase in the total usable computational capacity by means of distributed batch computing can change corporate computing strategy. The integration of disparate computer platforms eliminates the need to buy one type of computer for computations, another for graphics, and yet another for day-to-day operations. It might be possible, for example, to meet all research and engineering computing needs with existing networked computers.
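
    The master scheduler described above must weigh each machine's speed rating, allowed working hours, and current workload before dispatching a job. A minimal Python sketch of such a dispatch decision follows; the field names and thresholds are illustrative assumptions, not the Atlas Wireline implementation.

    ```python
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Host:
        name: str            # e.g. an RS/6000 or SPARCstation node
        speed_rating: float  # relative CPU speed (higher is faster)
        start_hour: int      # first hour of the allowed working window
        end_hour: int        # last hour of the allowed working window
        load: float          # current normalized load (0 = idle)
        max_load: float      # load ceiling protecting the machine's primary users

    def available(host: Host, now: datetime) -> bool:
        """A host may accept batch work only inside its allowed hours
        and while its primary users are not being impacted."""
        in_window = host.start_hour <= now.hour < host.end_hour
        return in_window and host.load < host.max_load

    def pick_host(hosts: list[Host], now: datetime) -> Host | None:
        """Choose the fastest, least-loaded available host for the next job."""
        candidates = [h for h in hosts if available(h, now)]
        if not candidates:
            return None
        return max(candidates,
                   key=lambda h: h.speed_rating * (1.0 - h.load / h.max_load))

    if __name__ == "__main__":
        hosts = [
            Host("rs6000-01", 1.8, 18, 24, 0.2, 0.8),
            Host("sparc-02",  1.0,  0, 24, 0.1, 0.9),
        ]
        print(pick_host(hosts, datetime.now()))
    ```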

  8. Computer Music

    NASA Astrophysics Data System (ADS)

    Cook, Perry

    This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.). Although most people would think that analog synthesizers and electronic music substantially predate the use of computers in music, many experiments and complete computer music systems were being constructed and used as early as the 1950s.

  9. Quantum Accelerators for High-performance Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S.; Britt, Keith A.; Mohiyaddin, Fahd A.

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system to manage these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  10. 5 CFR 551.210 - Computer employees.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., creation or modification of computer programs related to machine operating systems; or (4) A combination of...) Computer systems analysts, computer programmers, software engineers, or other similarly skilled workers in... consist of: (1) The application of systems analysis techniques and procedures, including consulting with...

  11. 5 CFR 551.210 - Computer employees.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., creation or modification of computer programs related to machine operating systems; or (4) A combination of...) Computer systems analysts, computer programmers, software engineers, or other similarly skilled workers in... consist of: (1) The application of systems analysis techniques and procedures, including consulting with...

  12. 5 CFR 551.210 - Computer employees.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., creation or modification of computer programs related to machine operating systems; or (4) A combination of...) Computer systems analysts, computer programmers, software engineers, or other similarly skilled workers in... consist of: (1) The application of systems analysis techniques and procedures, including consulting with...

  13. 5 CFR 551.210 - Computer employees.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., creation or modification of computer programs related to machine operating systems; or (4) A combination of...) Computer systems analysts, computer programmers, software engineers, or other similarly skilled workers in... consist of: (1) The application of systems analysis techniques and procedures, including consulting with...

  14. A Modular Environment for Geophysical Inversion and Run-time Autotuning using Heterogeneous Computing Systems

    NASA Astrophysics Data System (ADS)

    Myre, Joseph M.

    Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH---a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
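
    The run-time autotuning component searches over candidate configurations of the heterogeneous system and keeps the one that performs best. A minimal Python sketch of exhaustive run-time autotuning over hypothetical launch parameters follows; it illustrates the general idea, not the CUSH implementation.

    ```python
    import itertools
    import time

    def autotune(kernel, configs):
        """Time each candidate configuration and return the fastest one.

        `kernel` is any callable taking a configuration dict; `configs`
        maps parameter names to the values to sweep (e.g. block sizes).
        """
        best_cfg, best_time = None, float("inf")
        for values in itertools.product(*configs.values()):
            cfg = dict(zip(configs.keys(), values))
            start = time.perf_counter()
            kernel(cfg)                       # run once with this configuration
            elapsed = time.perf_counter() - start
            if elapsed < best_time:
                best_cfg, best_time = cfg, elapsed
        return best_cfg, best_time

    # Example: tune a toy "kernel" whose cost depends on the block size.
    def toy_kernel(cfg):
        n = 1 << 20
        step = cfg["block_size"]
        total = 0
        for i in range(0, n, step):
            total += i
        return total

    best, t = autotune(toy_kernel, {"block_size": [64, 128, 256, 512]})
    print(best, t)
    ```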

  15. Distributed intrusion detection system based on grid security model

    NASA Astrophysics Data System (ADS)

    Su, Jie; Liu, Yahui

    2008-03-01

    Grid computing has developed rapidly with the development of network technology, and it can solve the problem of large-scale complex computing by sharing large-scale computing resources. In a grid environment, we can realize a distributed and load-balanced intrusion detection system. This paper first discusses the security mechanism in grid computing and the function of PKI/CA in the grid security system, then describes how the characteristics of grid computing are applied in a distributed intrusion detection system (IDS) based on an Artificial Immune System. Finally, it presents a distributed intrusion detection system based on the grid security system that can reduce processing delay and maintain detection rates.

  16. Survey of Computer-Based Message Systems; COM/PortaCOM Conference System: Design Goals and Principles; Computer Conferencing Is More Than Electronic Mail; Effects of the COM Computer Conference System.

    ERIC Educational Resources Information Center

    Palme, Jacob

    The four papers contained in this document provide: (1) a survey of computer based mail and conference systems; (2) an evaluation of systems for both individually addressed mail and group addressing through conferences and distribution lists; (3) a discussion of various methods of structuring the text data in existing systems; and (4) a…

  17. Reservoir computing with a slowly modulated mask signal for preprocessing using a mutually coupled optoelectronic system

    NASA Astrophysics Data System (ADS)

    Tezuka, Miwa; Kanno, Kazutaka; Bunsen, Masatoshi

    2016-08-01

    Reservoir computing is a machine-learning paradigm based on information processing in the human brain. We numerically demonstrate reservoir computing with a slowly modulated mask signal for preprocessing by using a mutually coupled optoelectronic system. The performance of our system is quantitatively evaluated by a chaotic time series prediction task. Our system can produce performance comparable to that of reservoir computing with a single feedback system and a fast modulated mask signal. We show that it is possible to slow down the modulation speed of the mask signal by using the mutually coupled system in reservoir computing.
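
    In reservoir computing, a masked input drives a fixed nonlinear dynamical system and only a linear readout is trained. A minimal echo state network sketch in Python for one-step time series prediction is shown below; it is a software analogue only and does not model the mutually coupled optoelectronic reservoir or the slow mask used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Input signal: a chaotic series (logistic map) to predict one step ahead.
    u = np.empty(2000); u[0] = 0.4
    for t in range(1999):
        u[t + 1] = 3.9 * u[t] * (1.0 - u[t])

    N = 200                                    # reservoir size
    W_in = rng.uniform(-0.5, 0.5, (N, 1))      # input mask / weights
    W = rng.normal(0, 1, (N, N))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

    # Drive the reservoir and collect its states.
    x = np.zeros(N)
    states = np.zeros((len(u) - 1, N))
    for t in range(len(u) - 1):
        x = np.tanh(W @ x + W_in[:, 0] * u[t])
        states[t] = x

    # Train a ridge-regression readout to predict u[t+1] from the state at t.
    targets = u[1:]
    train = slice(100, 1500)                   # discard transient, hold out the tail
    A = states[train]
    W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ targets[train])

    pred = states[1500:] @ W_out
    nmse = np.mean((pred - targets[1500:]) ** 2) / np.var(targets[1500:])
    print("test NMSE:", nmse)
    ```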

  18. Design of a modular digital computer system, DRL 4. [for meeting future requirements of spaceborne computers

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The design is reported of an advanced modular computer system designated the Automatically Reconfigurable Modular Multiprocessor System, which anticipates requirements for higher computing capacity and reliability for future spaceborne computers. Subjects discussed include: an overview of the architecture, mission analysis, synchronous and nonsynchronous scheduling control, reliability, and data transmission.

  19. Morgantown People Mover Redundant Computing System Design Summary

    DOT National Transportation Integrated Search

    1980-09-01

    The purpose of this report is to describe the redundant computing system design used for the current 1980 Phase II Morgantown People Mover (MPM) system. The redundant computing system is that part of the control and communications system (C&CS) consi...

  20. Software Systems for High-performance Quantum Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S; Britt, Keith A

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  1. An integrated compact airborne multispectral imaging system using embedded computer

    NASA Astrophysics Data System (ADS)

    Zhang, Yuedong; Wang, Li; Zhang, Xuguo

    2015-08-01

    An integrated compact airborne multispectral imaging system using an embedded computer based control system was developed for small aircraft multispectral imaging applications. The multispectral imaging system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system) and an embedded computer. The embedded computer has excellent universality and expansibility, and has advantages in volume and weight for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls the camera parameter settings, filter wheel and stabilized platform operation, and image and POS data acquisition, and stores the images and data. The airborne multispectral imaging system can connect peripheral devices through the ports of the embedded computer, so system operation and stored image data management are easy. This airborne multispectral imaging system has the advantages of small volume, multi-function, and good expansibility. The imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.

  2. 40 CFR 86.099-17 - Emission control diagnostic system for 1999 and later light-duty vehicles and light-duty trucks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of computer codes. The emission control diagnostic system shall record and store in computer memory..., shall be stored in computer memory to identify correctly functioning emission control systems and those... in computer memory. Should a subsequent fuel system or misfire malfunction occur, any previously...

  3. 40 CFR 86.099-17 - Emission control diagnostic system for 1999 and later light-duty vehicles and light-duty trucks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of computer codes. The emission control diagnostic system shall record and store in computer memory..., shall be stored in computer memory to identify correctly functioning emission control systems and those... in computer memory. Should a subsequent fuel system or misfire malfunction occur, any previously...

  4. Automated validation of a computer operating system

    NASA Technical Reports Server (NTRS)

    Dervage, M. M.; Milberg, B. A.

    1970-01-01

    Programs apply selected input/output loads to complex computer operating system and measure performance of that system under such loads. Technique lends itself to checkout of computer software designed to monitor automated complex industrial systems.

  5. Data processing for water monitoring system

    NASA Technical Reports Server (NTRS)

    Monford, L.; Linton, A. T.

    1978-01-01

    Water monitoring data acquisition system is structured about central computer that controls sampling and sensor operation, and analyzes and displays data in real time. Unit is essentially separated into two systems: computer system, and hard wire backup system which may function separately or with computer.

  6. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    NASA Astrophysics Data System (ADS)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment-specific used resources and physical distributed computing capabilities. Having been in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving and tend to fit newer requirements from the ADC community. In this note, we describe the evolution and the recent developments of AGIS functionalities, related to the integration of new technologies that have recently become widely used in ATLAS Computing, like flexible computing utilization of opportunistic Cloud and HPC resources, ObjectStore services integration for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, unified storage protocols declaration required for PanDA Pilot site movers, and others. The improvements of the information model and general updates are also shown; in particular we explain how other collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.

  7. Automated Help System For A Supercomputer

    NASA Technical Reports Server (NTRS)

    Callas, George P.; Schulbach, Catherine H.; Younkin, Michael

    1994-01-01

    Expert-system software developed to provide automated system of user-helping displays in supercomputer system at Ames Research Center Advanced Computer Facility. Users located at remote computer terminals connected to supercomputer and each other via gateway computers, local-area networks, telephone lines, and satellite links. Automated help system answers routine user inquiries about how to use services of computer system. Available 24 hours per day and reduces burden on human experts, freeing them to concentrate on helping users with complicated problems.

  8. Verification Methodology of Fault-tolerant, Fail-safe Computers Applied to MAGLEV Control Computer Systems

    DOT National Transportation Integrated Search

    1993-05-01

    The Maglev control computer system should be designed to verifiably possess high reliability and safety as well as high availability to make Maglev a dependable and attractive transportation alternative to the public. A Maglev computer system has bee...

  9. TOWARD A COMPUTER BASED INSTRUCTIONAL SYSTEM.

    ERIC Educational Resources Information Center

    GARIGLIO, LAWRENCE M.; RODGERS, WILLIAM A.

    THE INFORMATION FOR THIS REPORT WAS OBTAINED FROM VARIOUS COMPUTER ASSISTED INSTRUCTION INSTALLATIONS. COMPUTER BASED INSTRUCTION REFERS TO A SYSTEM AIMED AT INDIVIDUALIZED INSTRUCTION, WITH THE COMPUTER AS CENTRAL CONTROL. SUCH A SYSTEM HAS 3 MAJOR SUBSYSTEMS--INSTRUCTIONAL, RESEARCH, AND MANAGERIAL. THIS REPORT EMPHASIZES THE INSTRUCTIONAL…

  10. The 'Biologically-Inspired Computing' Column

    NASA Technical Reports Server (NTRS)

    Hinchey, Mike

    2006-01-01

    The field of Biology changed dramatically in 1953, with the determination by Francis Crick and James Dewey Watson of the double helix structure of DNA. This discovery changed Biology for ever, allowing the sequencing of the human genome, and the emergence of a "new Biology" focused on DNA, genes, proteins, data, and search. Computational Biology and Bioinformatics heavily rely on computing to facilitate research into life and development. Simultaneously, an understanding of the biology of living organisms indicates a parallel with computing systems: molecules in living cells interact, grow, and transform according to the "program" dictated by DNA. Moreover, paradigms of Computing are emerging based on modelling and developing computer-based systems exploiting ideas that are observed in nature. This includes building into computer systems self-management and self-governance mechanisms that are inspired by the human body's autonomic nervous system, modelling evolutionary systems analogous to colonies of ants or other insects, and developing highly-efficient and highly-complex distributed systems from large numbers of (often quite simple) largely homogeneous components to reflect the behaviour of flocks of birds, swarms of bees, herds of animals, or schools of fish. This new field of "Biologically-Inspired Computing", often known in other incarnations by other names, such as: Autonomic Computing, Pervasive Computing, Organic Computing, Biomimetics, and Artificial Life, amongst others, is poised at the intersection of Computer Science, Engineering, Mathematics, and the Life Sciences. Successes have been reported in the fields of drug discovery, data communications, computer animation, control and command, exploration systems for space, undersea, and harsh environments, to name but a few, and augur much promise for future progress.

  11. Computer Operating System Maintenance.

    DTIC Science & Technology

    1982-06-01

    The Computer Management Information Facility (CMIF) system was developed by Rapp Systems to fulfill the need at the CRF to record and report on...computer center resource usage and utilization. The foundation of the CMIF system is a System 2000 data base (CRFMGMT) which stores and permits access

  12. Computing with dynamical systems based on insulator-metal-transition oscillators

    NASA Astrophysics Data System (ADS)

    Parihar, Abhinav; Shukla, Nikhil; Jerry, Matthew; Datta, Suman; Raychowdhury, Arijit

    2017-04-01

    In this paper, we review recent work on novel computing paradigms using coupled oscillatory dynamical systems. We explore systems of relaxation oscillators based on linear state-transitioning devices, which switch between two discrete states with hysteresis. By harnessing the dynamics of complex, connected systems, we embrace the philosophy of "let physics do the computing" and demonstrate how complex phase and frequency dynamics of such systems can be controlled, programmed, and observed to solve computationally hard problems. Although our discussion in this paper is limited to insulator-to-metallic state transition devices, the general philosophy of such computing paradigms can be translated to other mediums including optical systems. We present the mathematical treatments necessary to understand the time evolution of these systems and demonstrate through recent experimental results the potential of such computational primitives.

  13. A comparison of queueing, cluster and distributed computing systems

    NASA Technical Reports Server (NTRS)

    Kaplan, Joseph A.; Nelson, Michael L.

    1993-01-01

    Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster. Cluster management software is necessary to harness the collective computing power. A variety of cluster management and queuing systems are compared: Distributed Queueing Systems (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF - formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.

  14. Universal blind quantum computation for hybrid system

    NASA Astrophysics Data System (ADS)

    Huang, He-Liang; Bao, Wan-Su; Li, Tan; Li, Feng-Guang; Fu, Xiang-Qun; Zhang, Shuo; Zhang, Hai-Long; Wang, Xiang

    2017-08-01

    As progress on building quantum computers continues to advance, first-generation practical quantum computers will be available for ordinary users in the cloud style, similar to IBM's Quantum Experience today. Clients can remotely access the quantum servers using some simple devices. In such a situation, it is of prime importance to keep the security of the client's information. Blind quantum computation protocols enable a client with limited quantum technology to delegate her quantum computation to a quantum server without leaking any privacy. To date, blind quantum computation has been considered only for an individual quantum system. However, a practical universal quantum computer is likely to be a hybrid system. Here, we take the first step to construct a framework of blind quantum computation for the hybrid system, which provides a more feasible way for scalable blind quantum computation.

  15. Making classical ground-state spin computing fault-tolerant.

    PubMed

    Crosson, I J; Bacon, D; Brown, K R

    2010-09-01

    We examine a model of classical deterministic computing in which the ground state of the classical system is a spatial history of the computation. This model is relevant to quantum dot cellular automata as well as to recent universal adiabatic quantum computing constructions. In its most primitive form, systems constructed in this model cannot compute in an error-free manner when working at nonzero temperature. However, by exploiting a mapping between the partition function for this model and probabilistic classical circuits we are able to show that it is possible to make this model effectively error-free. We achieve this by using techniques in fault-tolerant classical computing and the result is that the system can compute effectively error-free if the temperature is below a critical temperature. We further link this model to computational complexity and show that a certain problem concerning finite temperature classical spin systems is complete for the complexity class Merlin-Arthur. This provides an interesting connection between the physical behavior of certain many-body spin systems and computational complexity.

  16. Today's Personal Computers: Products for Every Need--Part II.

    ERIC Educational Resources Information Center

    Personal Computing, 1981

    1981-01-01

    Looks at microcomputers manufactured by Altos Computer Systems, Cromemco, Exidy, Intelligent Systems, Intertec Data Systems, Mattel, Nippon Electronics, Northstar, Personal Micro Computers, and Sinclair. (Part I of this article, examining other computers, appeared in the May 1981 issue.) Journal availability: Hayden Publishing Company, 50 Essex…

  17. 10 CFR 35.657 - Therapy-related computer systems.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Therapy-related computer systems. 35.657 Section 35.657... Units, Teletherapy Units, and Gamma Stereotactic Radiosurgery Units § 35.657 Therapy-related computer... computer systems in accordance with published protocols accepted by nationally recognized bodies. At a...

  18. 10 CFR 35.657 - Therapy-related computer systems.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Therapy-related computer systems. 35.657 Section 35.657... Units, Teletherapy Units, and Gamma Stereotactic Radiosurgery Units § 35.657 Therapy-related computer... computer systems in accordance with published protocols accepted by nationally recognized bodies. At a...

  19. 10 CFR 35.657 - Therapy-related computer systems.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Therapy-related computer systems. 35.657 Section 35.657... Units, Teletherapy Units, and Gamma Stereotactic Radiosurgery Units § 35.657 Therapy-related computer... computer systems in accordance with published protocols accepted by nationally recognized bodies. At a...

  20. 10 CFR 35.657 - Therapy-related computer systems.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Therapy-related computer systems. 35.657 Section 35.657... Units, Teletherapy Units, and Gamma Stereotactic Radiosurgery Units § 35.657 Therapy-related computer... computer systems in accordance with published protocols accepted by nationally recognized bodies. At a...

  1. 10 CFR 35.657 - Therapy-related computer systems.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Therapy-related computer systems. 35.657 Section 35.657... Units, Teletherapy Units, and Gamma Stereotactic Radiosurgery Units § 35.657 Therapy-related computer... computer systems in accordance with published protocols accepted by nationally recognized bodies. At a...

  2. 18 CFR 3b.204 - Safeguarding information in manual and computer-based record systems.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... information in manual and computer-based record systems. 3b.204 Section 3b.204 Conservation of Power and Water... Collection of Records § 3b.204 Safeguarding information in manual and computer-based record systems. (a) The administrative and physical controls to protect the information in the manual and computer-based record systems...

  3. 18 CFR 3b.204 - Safeguarding information in manual and computer-based record systems.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... information in manual and computer-based record systems. 3b.204 Section 3b.204 Conservation of Power and Water... Collection of Records § 3b.204 Safeguarding information in manual and computer-based record systems. (a) The administrative and physical controls to protect the information in the manual and computer-based record systems...

  4. 18 CFR 3b.204 - Safeguarding information in manual and computer-based record systems.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... information in manual and computer-based record systems. 3b.204 Section 3b.204 Conservation of Power and Water... Collection of Records § 3b.204 Safeguarding information in manual and computer-based record systems. (a) The administrative and physical controls to protect the information in the manual and computer-based record systems...

  5. 18 CFR 3b.204 - Safeguarding information in manual and computer-based record systems.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... information in manual and computer-based record systems. 3b.204 Section 3b.204 Conservation of Power and Water... Collection of Records § 3b.204 Safeguarding information in manual and computer-based record systems. (a) The administrative and physical controls to protect the information in the manual and computer-based record systems...

  6. Implementation and evaluation of an efficient secure computation system using ‘R’ for healthcare statistics

    PubMed Central

    Chida, Koji; Morohashi, Gembu; Fuji, Hitoshi; Magata, Fumihiko; Fujimura, Akiko; Hamada, Koki; Ikarashi, Dai; Yamamoto, Ryuichi

    2014-01-01

    Background and objective: While the secondary use of medical data has gained attention, its adoption has been constrained due to protection of patient privacy. Making medical data secure by de-identification can be problematic, especially when the data concerns rare diseases. We require rigorous security management measures. Materials and methods: Using secure computation, an approach from cryptography, our system can compute various statistics over encrypted medical records without decrypting them. An issue of secure computation is that the amount of processing time required is immense. We implemented a system that securely computes healthcare statistics from the statistical computing software ‘R’ by effectively combining secret-sharing-based secure computation with original computation. Results: Testing confirmed that our system could correctly complete computation of the average and unbiased variance of approximately 50 000 records of dummy insurance claim data in a little over a second. Computation including conditional expressions and/or comparison of values, for example, t test and median, could also be correctly completed in several tens of seconds to a few minutes. Discussion: If medical records are simply encrypted, the risk of leaks exists because decryption is usually required during statistical analysis. Our system possesses high-level security because medical records remain in an encrypted state even during statistical analysis. Also, our system can securely compute some basic statistics with conditional expressions using ‘R’, which works interactively, while secure computation protocols generally require a significant amount of processing time. Conclusions: We propose a secure statistical analysis system using ‘R’ for medical data that effectively integrates secret-sharing-based secure computation and original computation. PMID:24763677

  7. Implementation and evaluation of an efficient secure computation system using 'R' for healthcare statistics.

    PubMed

    Chida, Koji; Morohashi, Gembu; Fuji, Hitoshi; Magata, Fumihiko; Fujimura, Akiko; Hamada, Koki; Ikarashi, Dai; Yamamoto, Ryuichi

    2014-10-01

    While the secondary use of medical data has gained attention, its adoption has been constrained due to protection of patient privacy. Making medical data secure by de-identification can be problematic, especially when the data concerns rare diseases. We require rigorous security management measures. Using secure computation, an approach from cryptography, our system can compute various statistics over encrypted medical records without decrypting them. An issue of secure computation is that the amount of processing time required is immense. We implemented a system that securely computes healthcare statistics from the statistical computing software 'R' by effectively combining secret-sharing-based secure computation with original computation. Testing confirmed that our system could correctly complete computation of average and unbiased variance of approximately 50,000 records of dummy insurance claim data in a little over a second. Computation including conditional expressions and/or comparison of values, for example, t test and median, could also be correctly completed in several tens of seconds to a few minutes. If medical records are simply encrypted, the risk of leaks exists because decryption is usually required during statistical analysis. Our system possesses high-level security because medical records remain in encrypted state even during statistical analysis. Also, our system can securely compute some basic statistics with conditional expressions using 'R' that works interactively while secure computation protocols generally require a significant amount of processing time. We propose a secure statistical analysis system using 'R' for medical data that effectively integrates secret-sharing-based secure computation and original computation.
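
    The core primitive in both versions of this work is secret-sharing-based secure computation, in which each record is split into shares so that no single party ever sees the plaintext. A minimal Python sketch of additive secret sharing used to compute a mean is given below; it illustrates the principle only, with an assumed fixed-point scale, and is not the authors' protocol or their 'R' integration.

    ```python
    import random

    P = 2**61 - 1          # a prime modulus for the share arithmetic
    SCALE = 100            # fixed-point scale, since shares are integers

    def share(value, n_parties=3):
        """Split an integer-scaled value into n additive shares mod P."""
        v = int(round(value * SCALE)) % P
        shares = [random.randrange(P) for _ in range(n_parties - 1)]
        shares.append((v - sum(shares)) % P)
        return shares

    def reconstruct(shares):
        v = sum(shares) % P
        if v > P // 2:       # map back from the field to signed values
            v -= P
        return v / SCALE

    # Each record is split so that no single party holds the plaintext.
    records = [120.5, 98.0, 143.25, 110.0]
    per_party = list(zip(*(share(r) for r in records)))  # party i holds per_party[i]

    # Each party sums only its own shares; only the partial sums are combined.
    partial_sums = [sum(p) % P for p in per_party]
    mean = reconstruct(partial_sums) / len(records)
    print(mean)   # matches the plaintext mean of `records`
    ```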

  8. Threshold-based queuing system for performance analysis of cloud computing system with dynamic scaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.

    Cloud computing is a promising technology to manage and improve the utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving there is no need to unnecessarily operate many servers under light loads, and they are switched off. On the other hand, some servers should be switched on in heavy load cases to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is significant server setup costs and activation times. For better energy efficiency, a cloud computing system should not react to an instantaneous increase or instantaneous decrease of load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple server threshold-based infinite capacity queuing system with hysteresis and noninstantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows estimation of a number of performance measures.
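
    The hysteresis policy adds a server only when load crosses a high threshold and removes one only when it falls below a low threshold, so instantaneous fluctuations are ignored. A minimal Python sketch of such a decision rule follows; the thresholds and the treatment of warming servers are illustrative assumptions, not the paper's analytical model.

    ```python
    def scale_decision(queue_len, active_servers, warming_servers,
                       high_per_server=10, low_per_server=3,
                       min_servers=1, max_servers=20):
        """Hysteresis rule: add a server only when the queue per provisioned server
        exceeds the high threshold, remove one only when it drops below the
        low threshold, and otherwise leave the pool unchanged."""
        provisioned = active_servers + warming_servers   # setup is not instantaneous
        if queue_len > high_per_server * provisioned and provisioned < max_servers:
            return "start_one"   # the new server begins its (non-zero) setup time
        if queue_len < low_per_server * active_servers and active_servers > min_servers:
            return "stop_one"
        return "hold"

    # Between the two thresholds the decision is "hold", which is the hysteresis band.
    print(scale_decision(queue_len=45, active_servers=3, warming_servers=0))  # start_one
    print(scale_decision(queue_len=20, active_servers=3, warming_servers=0))  # hold
    print(scale_decision(queue_len=5,  active_servers=3, warming_servers=0))  # stop_one
    ```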

  9. The Feasibility of Classifying Breast Masses Using a Computer-Assisted Diagnosis (CAD) System Based on Ultrasound Elastography and BI-RADS Lexicon.

    PubMed

    Fleury, Eduardo F C; Gianini, Ana Claudia; Marcomini, Karem; Oliveira, Vilmar

    2018-01-01

    To determine the applicability of a computer-aided diagnostic (CAD) strain elastography system for the classification of breast masses diagnosed by ultrasound and scored using the criteria proposed by the breast imaging and reporting data system (BI-RADS) ultrasound lexicon, and to determine the diagnostic accuracy and interobserver variability. This prospective study was conducted between March 1, 2016, and May 30, 2016. A total of 83 breast masses subjected to percutaneous biopsy were included. Ultrasound elastography images obtained before biopsy were interpreted by 3 radiologists with and without the aid of the computer-aided diagnostic system for strain elastography. The parameters evaluated for each radiologist were sensitivity, specificity, and diagnostic accuracy, with and without the computer-aided diagnostic system for strain elastography. Interobserver variability was assessed using a weighted κ test and an intraclass correlation coefficient. The areas under the receiver operating characteristic curves were also calculated. The areas under the receiver operating characteristic curve were 0.835, 0.801, and 0.765 for readers 1, 2, and 3, respectively, without the computer-aided diagnostic system for strain elastography, and 0.900, 0.926, and 0.868, respectively, with the computer-aided diagnostic system for strain elastography. The intraclass correlation coefficient between the 3 readers was 0.6713 without the computer-aided diagnostic system for strain elastography and 0.811 with the computer-aided diagnostic system for strain elastography. The proposed computer-aided diagnostic system for strain elastography has the potential to improve the diagnostic performance of radiologists in breast examination using ultrasound associated with elastography.
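
    The reported evaluation rests on per-reader areas under the ROC curve and interobserver agreement statistics. A small scikit-learn sketch of those two computations on made-up reader scores is shown below; the data are illustrative only and are not the study's results.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score, cohen_kappa_score

    # Hypothetical data: 1 = malignant on biopsy, reader scores on a 1-5 ordinal scale.
    truth    = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
    reader_1 = np.array([2, 1, 4, 5, 3, 4, 2, 5, 3, 1])
    reader_2 = np.array([1, 2, 5, 4, 2, 5, 3, 4, 4, 2])

    # Area under the ROC curve for each reader's scores against the biopsy outcome.
    print("AUC reader 1:", roc_auc_score(truth, reader_1))
    print("AUC reader 2:", roc_auc_score(truth, reader_2))

    # Interobserver agreement via a quadratically weighted kappa.
    print("weighted kappa:", cohen_kappa_score(reader_1, reader_2, weights="quadratic"))
    ```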

  10. Pulse cleaning flow models and numerical computation of candle ceramic filters.

    PubMed

    Tian, Gui-shan; Ma, Zhen-ji; Zhang, Xin-yi; Xu, Ting-xiang

    2002-04-01

    Analytical and numerically computed models are developed for the reverse pulse cleaning system of candle ceramic filters. A standard turbulence model is shown, from the experimental and one-dimensional computational results, to be suitable for the design computation of the reverse pulse cleaning system. The computed results can be used to guide the design of the reverse pulse cleaning system, in particular the optimum Venturi geometry. From the computed results, general conclusions and design methods are obtained.

  11. Recent Advances in Point-of-Care Diagnostics for Cardiac Markers

    PubMed Central

    2014-01-01

    National and international cardiology guidelines have recommended a 1-hour turnaround time for reporting results of cardiac troponin to emergency department personnel, measured from the time of blood collection to reporting. Use of point-of-care testing (POCT) can reduce turnaround times for cardiac markers, but current devices are not as precise or sensitive as central laboratory assays. The gap is growing as manufacturers of mainframe immunoassay instruments have released, or will release, troponin assays with even higher sensitivity than those currently available. These assays have analytical sensitivity that enables detection of troponin in nearly 100% of healthy subjects, which is not possible for current POCT assays. Use of high-sensitivity troponin results in a lower value for the 99th percentile of a healthy population. Clinically, this enables the detection of more cases of myocardial injury. In order to compete analytically, next generation POCT assays will need to make technologic advancements, such as the use of microfluidics to better control sample delivery, nanoparticles or nanotubes to increase the surface-to-volume ratios for analytes and antibodies, and novel detection schemes such as chemiluminescence and electrochemical detectors to enhance analytical sensitivity. Multi-marker analysis using POCT is also on the horizon for tests that complement cardiac troponin. PMID:27683464

  12. Hierarchical polymerized high internal phase emulsions synthesized from surfactant-stabilized emulsion templates.

    PubMed

    Wong, Ling L C; Villafranca, Pedro M Baiz; Menner, Angelika; Bismarck, Alexander

    2013-05-21

    In building construction, structural elements, such as lattice girders, are positioned specifically to support the mainframe of a building. This arrangement provides additional structural hierarchy, facilitating the transfer of load to its foundation while keeping the building weight down. We applied the same concept when synthesizing hierarchical open-celled macroporous polymers from high internal phase emulsion (HIPE) templates stabilized by varying concentrations of a polymeric non-ionic surfactant from 0.75 to 20 w/vol %. These hierarchical poly(merized)HIPEs have multimodally distributed pores, which are efficiently arranged to enhance the load transfer mechanism in the polymer foam. As a result, hierarchical polyHIPEs produced from HIPEs stabilized by 5 vol % surfactant showed a 93% improvement in Young's moduli compared to conventional polyHIPEs produced from HIPEs stabilized by 20 vol % of surfactant with the same porosity of 84%. The finite element method (FEM) was used to determine the effect of pore hierarchy on the mechanical performance of porous polymers under small periodic compressions. Results from the FEM showed a clear improvement in Young's moduli for simulated hierarchical porous geometries. This methodology could be further adapted as a predictive tool to determine the influence of hierarchy on the mechanical properties of a range of porous materials.

  13. Computational approaches to vision

    NASA Technical Reports Server (NTRS)

    Barrow, H. G.; Tenenbaum, J. M.

    1986-01-01

    Vision is examined in terms of a computational process, and the competence, structure, and control of computer vision systems are analyzed. Theoretical and experimental data on the formation of a computer vision system are discussed. Consideration is given to early vision, the recovery of intrinsic surface characteristics, higher levels of interpretation, and system integration and control. A computational visual processing model is proposed and its architecture and operation are described. Examples of state-of-the-art vision systems, which include some of the levels of representation and processing mechanisms, are presented.

  14. High-Performance Computing and Visualization | Energy Systems Integration

    Science.gov Websites

    High-performance computing (HPC) and visualization at NREL propel technology innovation. Capabilities: High-Performance Computing. NREL is home to Peregrine, the largest high-performance computing system

  15. CAA: Computer Assisted Athletics.

    ERIC Educational Resources Information Center

    Hall, John H.

    Computers have been used in a variety of applications for athletics since the late 1950's. These have ranged from computer-controlled electric scoreboards to computer-designed pole vaulting poles. Described in this paper are a computer-based athletic injury reporting system and a computer-assisted football scouting system. The injury reporting…

  16. Overreaction to External Attacks on Computer Systems Could Be More Harmful than the Viruses Themselves.

    ERIC Educational Resources Information Center

    King, Kenneth M.

    1988-01-01

    Discussion of the recent computer virus attacks on computers with vulnerable operating systems focuses on the values of educational computer networks. The need for computer security procedures is emphasized, and the ethical use of computer hardware and software is discussed. (LRW)

  17. Cloud Computing and Its Applications in GIS

    NASA Astrophysics Data System (ADS)

    Kang, Cao

    2011-12-01

    Cloud computing is a novel computing paradigm that offers highly scalable and highly available distributed computing services. The objectives of this research are to: 1. analyze and understand cloud computing and its potential for GIS; 2. discover the feasibilities of migrating truly spatial GIS algorithms to distributed computing infrastructures; 3. explore a solution to host and serve large volumes of raster GIS data efficiently and speedily. These objectives thus form the basis for three professional articles. The first article is entitled "Cloud Computing and Its Applications in GIS". This paper introduces the concept, structure, and features of cloud computing. Features of cloud computing such as scalability, parallelization, and high availability make it a very capable computing paradigm. Unlike High Performance Computing (HPC), cloud computing uses inexpensive commodity computers. The uniform administration systems in cloud computing make it easier to use than GRID computing. Potential advantages of cloud-based GIS systems such as lower barrier to entry are consequently presented. Three cloud-based GIS system architectures are proposed: public cloud- based GIS systems, private cloud-based GIS systems and hybrid cloud-based GIS systems. Public cloud-based GIS systems provide the lowest entry barriers for users among these three architectures, but their advantages are offset by data security and privacy related issues. Private cloud-based GIS systems provide the best data protection, though they have the highest entry barriers. Hybrid cloud-based GIS systems provide a compromise between these extremes. The second article is entitled "A cloud computing algorithm for the calculation of Euclidian distance for raster GIS". Euclidean distance is a truly spatial GIS algorithm. Classical algorithms such as the pushbroom and growth ring techniques require computational propagation through the entire raster image, which makes it incompatible with the distributed nature of cloud computing. This paper presents a parallel Euclidean distance algorithm that works seamlessly with the distributed nature of cloud computing infrastructures. The mechanism of this algorithm is to subdivide a raster image into sub-images and wrap them with a one pixel deep edge layer of individually computed distance information. Each sub-image is then processed by a separate node, after which the resulting sub-images are reassembled into the final output. It is shown that while any rectangular sub-image shape can be used, those approximating squares are computationally optimal. This study also serves as a demonstration of this subdivide and layer-wrap strategy, which would enable the migration of many truly spatial GIS algorithms to cloud computing infrastructures. However, this research also indicates that certain spatial GIS algorithms such as cost distance cannot be migrated by adopting this mechanism, which presents significant challenges for the development of cloud-based GIS systems. The third article is entitled "A Distributed Storage Schema for Cloud Computing based Raster GIS Systems". This paper proposes a NoSQL Database Management System (NDDBMS) based raster GIS data storage schema. NDDBMS has good scalability and is able to use distributed commodity computers, which make it superior to Relational Database Management Systems (RDBMS) in a cloud computing environment. 
In order to provide optimized data service performance, the proposed storage schema analyzes the nature of commonly used raster GIS data sets. It discriminates two categories of commonly used data sets, and then designs corresponding data storage models for both categories. As a result, the proposed storage schema is capable of hosting and serving enormous volumes of raster GIS data speedily and efficiently on cloud computing infrastructures. In addition, the scheme also takes advantage of the data compression characteristics of Quadtrees, thus promoting efficient data storage. Through this assessment of cloud computing technology, the exploration of the challenges and solutions to the migration of GIS algorithms to cloud computing infrastructures, and the examination of strategies for serving large amounts of GIS data in a cloud computing infrastructure, this dissertation lends support to the feasibility of building a cloud-based GIS system. However, there are still challenges that need to be addressed before a full-scale functional cloud-based GIS system can be successfully implemented. (Abstract shortened by UMI.)
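
    The parallel Euclidean distance algorithm summarized above subdivides the raster into sub-images, wraps each with a one-pixel edge layer, and lets independent nodes process the tiles before reassembly. The single-machine Python sketch below illustrates only the subdivide-and-halo mechanics using SciPy's distance transform per padded tile; as the comments note, a plain one-pixel halo is not by itself sufficient for an exact global distance, which is why the dissertation propagates computed distance information through the edge layer instead.

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def tiled_distance(raster, tile=64):
        """Subdivide a binary raster (0 = source cell) into tiles, wrap each tile
        with a one-pixel layer taken from its neighbours, and compute a Euclidean
        distance transform per tile.  Tiles are independent, so each could be
        handled by a separate cloud node.  Caveat: with only a raw one-pixel halo
        the stitched result is exact only when every pixel's nearest source lies
        in its own tile or halo; the dissertation's algorithm instead carries
        individually computed distance information in the edge layer.
        """
        h, w = raster.shape
        out = np.empty_like(raster, dtype=float)
        for i in range(0, h, tile):
            for j in range(0, w, tile):
                # one-pixel halo, clipped at the raster boundary
                i0, j0 = max(i - 1, 0), max(j - 1, 0)
                i1, j1 = min(i + tile + 1, h), min(j + tile + 1, w)
                sub = raster[i0:i1, j0:j1]
                d = distance_transform_edt(sub)      # distance to nearest 0 cell
                # keep only the interior of the padded tile
                out[i:i + tile, j:j + tile] = d[i - i0:i - i0 + tile,
                                                j - j0:j - j0 + tile]
        return out

    rng = np.random.default_rng(1)
    raster = (rng.random((256, 256)) > 0.001).astype(int)   # sparse source cells (0s)
    approx = tiled_distance(raster)
    exact = distance_transform_edt(raster)
    print("max deviation from the global transform:", np.abs(approx - exact).max())
    ```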

  18. RICIS Symposium 1992: Mission and Safety Critical Systems Research and Applications

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This conference deals with computer systems that control systems whose failure to operate correctly could produce the loss of life and/or property, that is, mission and safety critical systems. Topics covered are: the work of standards groups, computer systems design and architecture, software reliability, process control systems, knowledge based expert systems, and computer and telecommunication protocols.

  19. Integrated Computer System of Management in Logistics

    NASA Astrophysics Data System (ADS)

    Chwesiuk, Krzysztof

    2011-06-01

    This paper aims at presenting a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. Consequently, the paper includes the basic idea of the concept of computer-based management in logistics and components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates computer-aided management systems is that of electronic data interchange.

  20. Integration of the Execution Support System for the Computer-Aided Prototyping System (CAPS)

    DTIC Science & Technology

    1990-09-01

    Integration of the Execution Support System for the Computer-Aided Prototyping System (CAPS), by Frank V. Palazzo, September 1990. Thesis Advisor: Luqi. Approved for public release. Performing organization: Computer Science Department.

  1. Computer system performance measurement techniques for ARTS III computer systems.

    DOT National Transportation Integrated Search

    1973-12-01

    Direct measurement of computer systems is of vital importance in: a) developing an intelligent grasp of the variables which affect overall performance; b) tuning the system for optimum benefit; c) determining under what conditions saturation thresholds...

  2. Collectively loading an application in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.

    Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job.
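
    The steps described are: identify a subset of compute nodes, select one as job leader, have the leader retrieve the application, and broadcast it to the rest of the subset. A minimal mpi4py sketch of that leader-reads-then-broadcasts pattern follows; the file path and the use of mpi4py are assumptions for illustration, not the mechanism described in the record.

    ```python
    # Run with, e.g.: mpiexec -n 8 python collective_load.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    LEADER = 0                       # one node of the subset acts as job leader
    APP_PATH = "/scratch/app.bin"    # hypothetical path to the application image

    if rank == LEADER:
        # Only the leader touches the (shared) storage ...
        with open(APP_PATH, "rb") as f:
            image = f.read()
    else:
        image = None

    # ... and the image is then broadcast collectively to every node in the subset.
    image = comm.bcast(image, root=LEADER)

    print(f"rank {rank} received {len(image)} bytes of application image")
    ```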

  3. Active optical control system design of the SONG-China Telescope

    NASA Astrophysics Data System (ADS)

    Ye, Yu; Kou, Songfeng; Niu, Dongsheng; Li, Cheng; Wang, Guomin

    2012-09-01

    The standard SONG node control system structure is presented. The active optical control system of the project is a distributed system, comprising a host computer and a slave intelligent controller. The host control computer collects the information from the wavefront sensor and sends commands to the slave computer to realize a closed-loop model. For the intelligent controller, a programmable logic controller (PLC) system is used. This system combines an industrial personal computer (IPC) with the PLC to make up a powerful and reliable control system.
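
    The closed loop runs on the host computer: read the wavefront sensor, compute corrections, and push commands to the slave PLC. A minimal Python sketch of such a host-side loop with hypothetical read_wavefront() and send_to_plc() stubs is given below; it is purely illustrative, since the SONG node uses its own interfaces and protocols.

    ```python
    import time
    import numpy as np

    GAIN = 0.3          # loop gain: apply only part of the measured error each cycle
    PERIOD = 1.0        # seconds between corrections (active optics is slow)

    def read_wavefront():
        """Placeholder for the wavefront-sensor readout (e.g. aberration coefficients)."""
        return np.random.normal(0.0, 0.1, size=8)

    def send_to_plc(commands):
        """Placeholder for sending actuator set-points to the slave PLC controller."""
        print("PLC command:", np.round(commands, 3))

    def control_loop(cycles=5):
        commands = np.zeros(8)
        for _ in range(cycles):
            error = read_wavefront()            # measured aberrations
            commands -= GAIN * error            # simple integrator: accumulate corrections
            send_to_plc(commands)
            time.sleep(PERIOD)

    if __name__ == "__main__":
        control_loop()
    ```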

  4. Computing the Moore-Penrose Inverse of a Matrix with a Computer Algebra System

    ERIC Educational Resources Information Center

    Schmidt, Karsten

    2008-01-01

    In this paper "Derive" functions are provided for the computation of the Moore-Penrose inverse of a matrix, as well as for solving systems of linear equations by means of the Moore-Penrose inverse. Making it possible to compute the Moore-Penrose inverse easily with one of the most commonly used Computer Algebra Systems--and to have the blueprint…

  5. 78 FR 59927 - Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-30

    ... Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology..., Computational, and Systems Biology [External Review Draft]'' (EPA/600/R-13/214A). EPA is also announcing that... Advances in Molecular, Computational, and Systems Biology [External Review Draft]'' is available primarily...

  6. 78 FR 68058 - Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-13

    ... Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology... Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology..., computational, and systems biology data can better inform risk assessment. This draft document is available for...

  7. The PLATO System and Language Study.

    ERIC Educational Resources Information Center

    Hart, Robert S., Ed.

    1981-01-01

    This issue presents an overview of research in computer-based language instruction using the PLATO IV computer system. The following articles are presented: (1) "Language Study and the PLATO system," by R. Hart; (2) "Reflections on the Use of Computers in Second-Language Acquisition," by F. Marty; (3) "Computer-Based…

  8. Analysis of Software Systems for Specialized Computers,

    DTIC Science & Technology

    computer) with given computer hardware and software. The object of study is the software system of a computer, designed for solving a fixed complex of... The purpose of the analysis is to find parameters that characterize the system and its elements during operation, i.e., when servicing the given requirement flow. (Author)

  9. The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency

    ERIC Educational Resources Information Center

    Oder, Karl; Pittman, Stephanie

    2015-01-01

    Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…

  10. Spiking Neural P Systems With Rules on Synapses Working in Maximum Spiking Strategy.

    PubMed

    Tao Song; Linqiang Pan

    2015-06-01

    Spiking neural P systems (called SN P systems for short) are a class of parallel and distributed neural-like computation models inspired by the way neurons process information and communicate with each other by means of impulses or spikes. In this work, we introduce a new variant of SN P systems, called SN P systems with rules on synapses working in maximum spiking strategy, and investigate the computation power of these systems as both number and vector generators. Specifically, we prove that i) if no limit is imposed on the number of spikes in any neuron during any computation, such systems can generate the sets of Turing computable natural numbers and the sets of vectors of positive integers computed by a k-output register machine; ii) if an upper bound is imposed on the number of spikes in each neuron during any computation, such systems characterize the semi-linear sets of natural numbers as number generating devices; as vector generating devices, such systems can only characterize the family of sets of vectors computed by a sequential monotonic counter machine, which is strictly included in the family of semi-linear sets of vectors. This gives a positive answer to the problem formulated in Song et al., Theor. Comput. Sci., vol. 529, pp. 82-95, 2014.

  11. Challenges in Securing the Interface Between the Cloud and Pervasive Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lagesse, Brent J

    2011-01-01

    Cloud computing presents an opportunity for pervasive systems to leverage computational and storage resources to accomplish tasks that would not normally be possible on such resource-constrained devices. Cloud computing can enable hardware designers to build lighter systems that last longer and are more mobile. Despite the advantages cloud computing offers to the designers of pervasive systems, there are some limitations of leveraging cloud computing that must be addressed. We take the position that cloud-based pervasive systems must be secured holistically and discuss ways this might be accomplished. In this paper, we discuss a pervasive system utilizing cloud computing resources and the issues that must be addressed in such a system. In this system, the user's mobile device cannot always have network access to leverage resources from the cloud, so it must make intelligent decisions about what data should be stored locally and what processes should be run locally. As a result of these decisions, the user becomes vulnerable to attacks while interfacing with the pervasive system.

  12. System for Computer Automated Typesetting (SCAT) of Computer Authored Texts.

    ERIC Educational Resources Information Center

    Keeler, F. Laurence

    This description of the System for Computer Automated Typesetting (SCAT), an automated system for typesetting text and inserting special graphic symbols in programmed instructional materials created by the computer-aided authoring system AUTHOR, provides an outline of the system's design architecture and an overview including the component…

  13. Reliability model derivation of a fault-tolerant, dual, spare-switching, digital computer system

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A computer based reliability projection aid, tailored specifically for application in the design of fault-tolerant computer systems, is described. Its more pronounced characteristics include the facility for modeling systems with two distinct operational modes, measuring the effect of both permanent and transient faults, and calculating conditional system coverage factors. The underlying conceptual principles, mathematical models, and computer program implementation are presented.
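
    As a worked example of the kind of calculation such an aid performs (not the report's actual model), the standard result for a two-unit standby pair with constant failure rate lam and coverage factor c is R(t) = exp(-lam*t) * (1 + c*lam*t); the Python below evaluates it.

        # Illustrative standby-redundancy reliability with imperfect coverage;
        # a textbook formula, not the model derived in the report.
        import math

        def dual_spare_reliability(lam, c, t):
            """Probability that a dual, spare-switching pair survives to time t,
            given failure rate lam and coverage (successful switchover) c."""
            return math.exp(-lam * t) * (1.0 + c * lam * t)

        # Example: failure rate 1e-4 per hour, 95% coverage, 10,000-hour mission.
        print(dual_spare_reliability(1e-4, 0.95, 1e4))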

  14. Reliability models for dataflow computer systems

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.; Buckles, B. P.

    1985-01-01

    The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.
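
    The firing rule underlying such data flow models can be sketched briefly: a node is enabled only when every one of its input arcs carries a token, and firing consumes one token per input and produces one per output. The Python below is an illustration of that principle, not the paper's formal model.

        # Basic dataflow firing rule (illustrative only).
        def enabled_nodes(inputs, tokens):
            """inputs: {node: [input arcs]}; tokens: {arc: token count}."""
            return [n for n, arcs in inputs.items()
                    if all(tokens.get(a, 0) > 0 for a in arcs)]

        def fire(node, inputs, outputs, tokens):
            """Consume one token on each input arc, produce one on each output arc."""
            for a in inputs[node]:
                tokens[a] -= 1
            for a in outputs.get(node, []):
                tokens[a] = tokens.get(a, 0) + 1

        # A graph deadlocks when tokens remain but no node is enabled; the
        # paper's necessary conditions are aimed at excluding such situations.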

  15. Exploring the use of I/O nodes for computation in a MIMD multiprocessor

    NASA Technical Reports Server (NTRS)

    Kotz, David; Cai, Ting

    1995-01-01

    As parallel systems move into the production scientific-computing world, the emphasis will be on cost-effective solutions that provide high throughput for a mix of applications. Cost effective solutions demand that a system make effective use of all of its resources. Many MIMD multiprocessors today, however, distinguish between 'compute' and 'I/O' nodes, the latter having attached disks and being dedicated to running the file-system server. This static division of responsibilities simplifies system management but does not necessarily lead to the best performance in workloads that need a different balance of computation and I/O. Of course, computational processes sharing a node with a file-system service may receive less CPU time, network bandwidth, and memory bandwidth than they would on a computation-only node. In this paper we begin to examine this issue experimentally. We found that high performance I/O does not necessarily require substantial CPU time, leaving plenty of time for application computation. There were some complex file-system requests, however, which left little CPU time available to the application. (The impact on network and memory bandwidth still needs to be determined.) For applications (or users) that cannot tolerate an occasional interruption, we recommend that they continue to use only compute nodes. For tolerant applications needing more cycles than those provided by the compute nodes, we recommend that they take full advantage of both compute and I/O nodes for computation, and that operating systems should make this possible.

  16. Cognitive engineering models: A prerequisite to the design of human-computer interaction in complex dynamic systems

    NASA Technical Reports Server (NTRS)

    Mitchell, Christine M.

    1993-01-01

    This chapter examines a class of human-computer interaction applications, specifically the design of human-computer interaction for the operators of complex systems. Such systems include space systems (e.g., manned systems such as the Shuttle or space station, and unmanned systems such as NASA scientific satellites), aviation systems (e.g., the flight deck of 'glass cockpit' airplanes or air traffic control) and industrial systems (e.g., power plants, telephone networks, and sophisticated, e.g., 'lights out,' manufacturing facilities). The main body of human-computer interaction (HCI) research complements but does not directly address the primary issues involved in human-computer interaction design for operators of complex systems. Interfaces to complex systems are somewhat special. The 'user' in such systems - i.e., the human operator responsible for safe and effective system operation - is highly skilled, someone who in human-machine systems engineering is sometimes characterized as 'well trained, well motivated'. The 'job' or task context is paramount and, thus, human-computer interaction is subordinate to human job interaction. The design of human interaction with complex systems, i.e., the design of human job interaction, is sometimes called cognitive engineering.

  17. Models and techniques for evaluating the effectiveness of aircraft computing systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1978-01-01

    The development of system models that can provide a basis for the formulation and evaluation of aircraft computer system effectiveness, the formulation of quantitative measures of system effectiveness, and the development of analytic and simulation techniques for evaluating the effectiveness of a proposed or existing aircraft computer are described. Specific topics covered include: system models; performability evaluation; capability and functional dependence; computation of trajectory set probabilities; and hierarchical modeling of an air transport mission.

  18. Method and apparatus of parallel computing with simultaneously operating stream prefetching and list prefetching engines

    DOEpatents

    Boyle, Peter A.; Christ, Norman H.; Gara, Alan; Mawhinney, Robert D.; Ohmacht, Martin; Sugavanam, Krishnan

    2012-12-11

    A prefetch system improves the performance of a parallel computing system. The parallel computing system includes a plurality of computing nodes. A computing node includes at least one processor and at least one memory device. The prefetch system includes at least one stream prefetch engine and at least one list prefetch engine, and it operates these engines simultaneously. After the at least one processor issues a command, the prefetch system passes the command to a stream prefetch engine and a list prefetch engine, and operates both engines to prefetch data that the processor will need in subsequent clock cycles in response to the passed command.
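
    The dispatch pattern in the abstract (one command handed to both engines, which run side by side) can be sketched as follows; this is a conceptual illustration, not the patented hardware design.

        # Conceptual sketch: each issued address goes to both prefetch engines.
        class StreamPrefetchEngine:
            def __init__(self, depth=4, line=64):
                self.depth, self.line = depth, line

            def prefetch(self, address):
                # Assume the next few sequential cache lines will be needed.
                return [address + i * self.line for i in range(1, self.depth + 1)]

        class ListPrefetchEngine:
            def __init__(self, recorded):
                self.recorded = recorded      # access list recorded on an earlier run

            def prefetch(self, address):
                # Replay the recorded list from the current address onward.
                if address in self.recorded:
                    i = self.recorded.index(address)
                    return self.recorded[i + 1:i + 5]
                return []

        def issue(address, stream_engine, list_engine):
            """Pass the command to both engines and merge their prefetch targets."""
            return set(stream_engine.prefetch(address)) | set(list_engine.prefetch(address))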

  19. Computer validation in toxicology: historical review for FDA and EPA good laboratory practice.

    PubMed

    Brodish, D L

    1998-01-01

    The application of computer validation principles to Good Laboratory Practice is a fairly recent phenomenon. As automated data collection systems have become more common in toxicology facilities, the U.S. Food and Drug Administration and the U.S. Environmental Protection Agency have begun to focus inspections in this area. This historical review documents the development of regulatory guidance on computer validation in toxicology over the past several decades. An overview of the components of a computer life cycle is presented, including the development of systems descriptions, validation plans, validation testing, system maintenance, SOPs, change control, security considerations, and system retirement. Examples are provided for implementation of computer validation principles on laboratory computer systems in a toxicology facility.

  20. Common data buffer system. [communication with computational equipment utilized in spacecraft operations

    NASA Technical Reports Server (NTRS)

    Byrne, F. (Inventor)

    1981-01-01

    A high speed common data buffer system is described for providing an interface and communications medium between a plurality of computers utilized in a distributed computer complex forming part of a checkout, command and control system for space vehicles and associated ground support equipment. The system includes the capability for temporarily storing data to be transferred between computers, for transferring a plurality of interrupts between computers, for monitoring and recording these transfers, and for correcting errors incurred in these transfers. Validity checks are made on each transfer and appropriate error notification is given to the computer associated with that transfer.
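
    The core idea (stage a transfer between computers, check its validity, record it, and notify on error) can be sketched in a few lines; the class and method names below are illustrative, not the patent's terminology.

        # Illustrative common data buffer with a checksum validity check.
        import zlib

        class CommonDataBuffer:
            def __init__(self):
                self.slots = {}               # destination -> (payload, checksum)
                self.log = []                 # record of every transfer

            def write(self, source, destination, payload):
                self.slots[destination] = (payload, zlib.crc32(payload))
                self.log.append(("write", source, destination, len(payload)))

            def read(self, destination):
                payload, checksum = self.slots.pop(destination)
                if zlib.crc32(payload) != checksum:        # validity check
                    self.log.append(("error", destination))
                    raise IOError("transfer to %s failed validity check" % destination)
                self.log.append(("read", destination, len(payload)))
                return payload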
