ERIC Educational Resources Information Center
Samani, Ebrahim; Baki, Roselan; Razali, Abu Bakar
2014-01-01
Success in implementation of computer-assisted language learning (CALL) programs depends on the teachers' understanding of the roles of CALL programs in education. Consequently, it is also important to understand the barriers teachers face in the use of computer-assisted language learning (CALL) programs. The current study was conducted on 14…
ERIC Educational Resources Information Center
Fukuzawa, Jeannette L.; Lubin, Jan M.
Five computer programs for the Macintosh that are geared for Computer-Assisted Language Learning (CALL) are described. All five programs allow the teacher to input material. The first program allows entry of new vocabulary lists including definition, a sentence in which the exact word is used, a fill-in-the-blank exercise, and the word's phonetics…
Radar target classification studies: Software development and documentation
NASA Astrophysics Data System (ADS)
Kamis, A.; Garber, F.; Walton, E.
1985-09-01
Three computer programs were developed to process and analyze calibrated radar returns. The first program, called DATABASE, was developed to create and manage a random-access data base. The second program, called FTRAN DB, was developed to process horizontally and vertically polarized radar returns into different formats (i.e., time domain, circular polarizations, and polarization parameters). The third program, called RSSE, was developed to simulate a variety of radar systems and to evaluate their ability to identify radar returns. Complete computer listings are included in the appendix volumes.
Digital computer technique for setup and checkout of an analog computer
NASA Technical Reports Server (NTRS)
Ambaruch, R.
1968-01-01
A computer program technique, called Analog Computer Check-Out Routine Digitally (ACCORD), generates complete setup and checkout data for an analog computer. In addition, the correctness of the analog program implementation is validated.
A Set of Free Cross-Platform Authoring Programs for Flexible Web-Based CALL Exercises
ERIC Educational Resources Information Center
O'Brien, Myles
2012-01-01
The Mango Suite is a set of three freely downloadable cross-platform authoring programs for flexible network-based CALL exercises. They are Adobe Air applications, so they can be used on Windows, Macintosh, or Linux computers, provided the freely-available Adobe Air has been installed on the computer. The exercises which the programs generate are…
ERIC Educational Resources Information Center
Shaw, Yun
2010-01-01
Many of the commercial Computer-Assisted Language Learning (CALL) programs available today typically take a generic approach. This approach standardizes the program so that it can be used to teach any language merely by translating the content from one language to another. These CALL programs rarely consider the cultural background or preferred…
Computer programs for calculating potential flow in propulsion system inlets
NASA Technical Reports Server (NTRS)
Stockman, N. O.; Button, S. L.
1973-01-01
In the course of designing inlets, particularly for VTOL and STOL propulsion systems, a calculational procedure utilizing three computer programs evolved. The chief program is the Douglas axisymmetric potential flow program, called EOD, which calculates the incompressible potential flow about arbitrary axisymmetric bodies. The other two programs, original with Lewis, are called SCIRCL and COMBYN. Program SCIRCL generates input for EOD from various specified analytic shapes for the inlet components. Program COMBYN takes basic solutions output by EOD, combines them into solutions of interest, and applies a compressibility correction.
Institutional computing (IC) information session
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koch, Kenneth R; Lally, Bryan R
2011-01-19
The LANL Institutional Computing Program (IC) will host an information session about the current state of unclassified Institutional Computing at Los Alamos, exciting plans for the future, and the current call for proposals for science and engineering projects requiring computing. Program representatives will give short presentations and field questions about the call for proposals and future planned machines, and discuss technical support available to existing and future projects. Los Alamos has started making a serious institutional investment in open computing available to our science projects, and that investment is expected to increase even more.
Debugging a high performance computing program
Gooding, Thomas M.
2014-08-19
Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
Debugging a high performance computing program
Gooding, Thomas M.
2013-08-20
Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
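A minimal Python sketch of the grouping idea summarized in the two records above (an illustration only, not the patented implementation): threads are bucketed by their lists of calling-instruction addresses, so a thread whose call path differs from its peers stands out as potentially defective. The thread IDs and addresses are invented for the example.

```python
from collections import defaultdict

def group_threads_by_call_path(call_paths):
    """Group thread IDs by identical tuples of calling-instruction addresses."""
    groups = defaultdict(list)
    for thread_id, addresses in call_paths.items():
        groups[tuple(addresses)].append(thread_id)
    return groups

# Hypothetical data: most threads share one call path; thread 7 diverges.
call_paths = {tid: [0x4006F0, 0x400A10, 0x400C3C] for tid in range(8)}
call_paths[7] = [0x4006F0, 0x400A10, 0x400D88]

for path, members in group_threads_by_call_path(call_paths).items():
    print([hex(a) for a in path], "->", members)
```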
Pedagogy and Related Criteria: The Selection of Software for Computer Assisted Language Learning
ERIC Educational Resources Information Center
Samuels, Jeffrey D.
2013-01-01
Computer-Assisted Language Learning (CALL) is an established field of academic inquiry with distinct applications for second language teaching and learning. Many CALL professionals direct language labs or language resource centers (LRCs) in which CALL software applications and generic software applications support language learning programs and…
NASA Technical Reports Server (NTRS)
Svalbonas, V.; Ogilvie, P.
1975-01-01
A special data debugging package called SAT-1P created for the STARS-2P computer program is described. The program was written exclusively in FORTRAN 4 for the IBM 370-165 computer, and then converted to the UNIVAC 1108.
Computer codes for thermal analysis of a solid rocket motor nozzle
NASA Technical Reports Server (NTRS)
Chauhan, Rajinder Singh
1988-01-01
A number of computer codes are available for performing thermal analysis of solid rocket motor nozzles. The Aerotherm Chemical Equilibrium (ACE) computer program can be used to perform a one-dimensional gas expansion to determine the state of the gas at each location of a nozzle. The ACE outputs can be used as input to a computer program called the Momentum/Energy Integral Technique (MEIT) for predicting boundary layer development, shear, and heating on the surface of the nozzle. The output from MEIT can be used as input to another computer program called the Aerotherm Charring Material Thermal Response and Ablation Program (CMA). This program is used to calculate the ablation or decomposition response of the nozzle material. A code called the Failure Analysis Nonlinear Thermal and Structural Integrated Code (FANTASTIC) is also likely to be used for performing thermal analysis of solid rocket motor nozzles after the program is duly verified. Part of the verification work on FANTASTIC was done using one- and two-dimensional heat-transfer examples with known answers. An attempt was made to prepare input for performing thermal analysis of the CCT nozzle using the FANTASTIC computer code. The CCT nozzle problem will first be solved using ACE, MEIT, and CMA. The same problem will then be solved using FANTASTIC, and the results will be compared for verification of FANTASTIC.
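To illustrate the data flow described above, the sketch below chains three stand-in functions the way ACE, MEIT, and CMA are chained, with the output of each stage serving as input to the next. The function bodies and numerical values are placeholders invented for the example; they do not represent the actual codes' physics or interfaces.

```python
def ace(nozzle_stations):
    """Stand-in for ACE: one-dimensional gas expansion state at each nozzle station."""
    return [{"x": x, "T_K": 3300.0 - 40.0 * i} for i, x in enumerate(nozzle_stations)]

def meit(gas_states):
    """Stand-in for MEIT: boundary-layer heating derived from the ACE output."""
    return [{"x": s["x"], "q_W_m2": 0.8e6 * s["T_K"] / 3300.0} for s in gas_states]

def cma(heating):
    """Stand-in for CMA: charring/ablation response driven by the MEIT output."""
    return [{"x": h["x"], "recession_mm": 1.0e-6 * h["q_W_m2"]} for h in heating]

# Output of each code feeds the next, as described in the abstract.
stations = [0.0, 0.1, 0.2, 0.3]
print(cma(meit(ace(stations))))
```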
DIALOG: An executive computer program for linking independent programs
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Hague, D. S.; Watson, D. A.
1973-01-01
A very large scale computer programming procedure called the DIALOG executive system was developed for the CDC 6000 series computers. The executive computer program, DIALOG, controls the sequence of execution and data management function for a library of independent computer programs. Communication of common information is accomplished by DIALOG through a dynamically constructed and maintained data base of common information. Each computer program maintains its individual identity and is unaware of its contribution to the large scale program. This feature makes any computer program a candidate for use with the DIALOG executive system. The installation and uses of the DIALOG executive system are described.
ERIC Educational Resources Information Center
Lee, Young-Jin
2010-01-01
Teaching computer programming to young children has been considered difficult because of its abstract and complex nature. The objectives of this study are (1) to investigate whether an innovative educational technology tool called Scratch could enable young children to learn abstract knowledge of computer programming while creating multimedia…
ERIC Educational Resources Information Center
Ali, Azad; Smith, David
2014-01-01
This paper presents a debate between two faculty members regarding the teaching of the legacy programming course (COBOL) in a Computer Science (CS) program. Among the two faculty members, one calls for the continuation of teaching this language and the other calls for replacing it with another modern language. Although CS programs are notorious…
A Programming Language Environment for the Unassisted Learner.
ERIC Educational Resources Information Center
Thomas, P. G.; Ince, D. C.
1982-01-01
Describes the computing environment and command language for a new programing language called OUSBASIC which is designed to enable naive users to interact usefully, with little assistance, with a computer system. (Author/CHC)
ERIC Educational Resources Information Center
Dalbey, John; Linn, Marcia
Spider World is an interactive program designed to help individuals with no previous computer experience to learn the fundamentals of programming. The program emphasizes cognitive tasks which are central to programming and provides significant problem-solving opportunities. In Spider World, the user commands a hypothetical robot (called the…
User guide to a command and control system; a part of a prelaunch wind monitoring program
NASA Technical Reports Server (NTRS)
Cowgill, G. R.
1976-01-01
A set of programs called the Command and Control System (CCS) is described; the document is intended as a user manual for operation of CCS by the personnel supporting the wind monitoring portion of the launch mission. Wind data obtained by tracking balloons are sent electronically over telephone lines to other locations. Steering commands for the on-board computer are computed by a system called ADDJUST, which relays these data. Data are received and automatically stored in a microprocessor, then transferred via a real-time program to the UNIVAC 1100/40 computer, at which point the data are available to the Command and Control System.
F-Nets and Software Cabling: Deriving a Formal Model and Language for Portable Parallel Programming
NASA Technical Reports Server (NTRS)
DiNucci, David C.; Saini, Subhash (Technical Monitor)
1998-01-01
Parallel programming is still based upon antiquated sequence-based definitions of the terms "algorithm" and "computation", resulting in programs which are architecture dependent and difficult to design and analyze. By focusing on obstacles inherent in existing practice, a more portable model is derived here, which is then formalized into a model called F-Nets, utilizing a combination of imperative and functional styles. This formalization suggests more general notions of algorithm and computation, as well as insights into the meaning of structured programming in a parallel setting. To illustrate how these principles can be applied, a very-high-level graphical architecture-independent parallel language, called Software Cabling, is described, with many of the features normally expected from today's computer languages (e.g., data abstraction, data parallelism, and object-based programming constructs).
The Advantages and Disadvantages of Computer Technology in Second Language Acquisition
ERIC Educational Resources Information Center
Lai, Cheng-Chieh; Kritsonis, William Allan
2006-01-01
The purpose of this article is to discuss the advantages and disadvantages of computer technology and Computer Assisted Language Learning (CALL) programs for current second language learning. According to the National Clearinghouse for English Language Acquisition & Language Instruction Educational Programs' report (2002), more than nine million…
Integrating Corpus-Based CALL Programs in Teaching English through Children's Literature
ERIC Educational Resources Information Center
Johns, Tim F.; Hsingchin, Lee; Lixun, Wang
2008-01-01
This paper presents particular pedagogical applications of a number of corpus-based CALL (computer assisted language learning) programs such as "CONTEXTS" and "CLOZE," "MATCHUP" and "BILINGUAL SENTENCE SHUFFLER," in the teaching of English through children's literature. An elective course in Taiwan for…
Fast single-pass alignment and variant calling using sequencing data
USDA-ARS?s Scientific Manuscript database
Sequencing research requires efficient computation. Few programs use already known information about DNA variants when aligning sequence data to the reference map. New program findmap.f90 reads the previous variant list before aligning sequence, calling variant alleles, and summing the allele counts...
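A toy Python sketch of the allele-counting step described in this record (not findmap.f90 itself, which is Fortran): for each known variant position, the base observed in each aligned read is tallied into allele counts. The read and variant data are invented for the example.

```python
from collections import Counter

def count_alleles(known_variants, aligned_reads):
    """known_variants: {position: (ref_base, alt_base)}
    aligned_reads: list of (start_position, sequence) already placed on the reference.
    Returns per-position counts of observed bases."""
    counts = {pos: Counter() for pos in known_variants}
    for start, seq in aligned_reads:
        for pos in known_variants:
            offset = pos - start
            if 0 <= offset < len(seq):
                counts[pos][seq[offset]] += 1
    return counts

reads = [(100, "ACGTACGT"), (102, "GTTACGTA")]   # invented example reads
print(count_alleles({104: ("A", "T")}, reads))   # position 104: one 'A', one 'T'
```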
DIALOG: An executive computer program for linking independent programs
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Hague, D. S.; Watson, D. A.
1973-01-01
A very large scale computer programming procedure called the DIALOG Executive System has been developed for the Univac 1100 series computers. The executive computer program, DIALOG, controls the sequence of execution and data management function for a library of independent computer programs. Communication of common information is accomplished by DIALOG through a dynamically constructed and maintained data base of common information. The unique feature of the DIALOG Executive System is the manner in which computer programs are linked. Each program maintains its individual identity and as such is unaware of its contribution to the large scale program. This feature makes any computer program a candidate for use with the DIALOG Executive System. The installation and use of the DIALOG Executive System are described at Johnson Space Center.
Computer program developed for flowsheet calculations and process data reduction
NASA Technical Reports Server (NTRS)
Alfredson, P. G.; Anastasia, L. J.; Knudsen, I. E.; Koppel, L. B.; Vogel, G. J.
1969-01-01
Computer program PACER-65 is used for flowsheet calculations and is easily adapted to process data reduction. Each unit, vessel, meter, and processing operation in the overall flowsheet is represented by a separate subroutine, which the program calls in the order required to complete an overall flowsheet calculation.
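A minimal sketch of the pattern described for PACER-65, with each process unit as its own routine called in the order required for the overall flowsheet calculation. The unit functions and stream values below are illustrative placeholders, not PACER-65 code.

```python
def feed(stream):      # each unit is its own routine, as in the abstract
    stream["flow_kg_h"] = 100.0
    return stream

def reactor(stream):
    stream["conversion"] = 0.85
    return stream

def separator(stream):
    stream["product_kg_h"] = stream["flow_kg_h"] * stream["conversion"]
    return stream

flowsheet = [feed, reactor, separator]   # order required for the overall calculation
stream = {}
for unit in flowsheet:
    stream = unit(stream)
print(stream)
```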
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ecale Zhou, Carol L.
2016-07-05
Compare Gene Calls (CGC) is a Python code used for combining and comparing gene calls from any number of gene callers. A gene caller is a computer program that predicts the extents of open reading frames within genomes of biological organisms.
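An illustrative Python fragment, assuming gene calls can be reduced to (start, end, strand) extents, of one way calls from different gene callers might be combined and compared; it is not taken from the CGC code, and the example calls are invented.

```python
def compare_gene_calls(calls_by_caller):
    """calls_by_caller: {caller_name: set of (start, end, strand) gene extents}."""
    common = set.intersection(*calls_by_caller.values())
    unique = {name: calls - common for name, calls in calls_by_caller.items()}
    return common, unique

# Hypothetical calls from two gene callers run on the same genome.
calls = {
    "callerA": {(100, 400, "+"), (900, 1500, "-")},
    "callerB": {(100, 400, "+"), (2000, 2300, "+")},
}
common, unique = compare_gene_calls(calls)
print("agreed:", common)
print("caller-specific:", unique)
```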
Young Children and Turtle Graphics Programming: Generating and Debugging Simple Turtle Programs.
ERIC Educational Resources Information Center
Cuneo, Diane O.
Turtle graphics is a popular vehicle for introducing children to computer programming. Children combine simple graphic commands to get a display screen cursor (called a turtle) to draw designs on the screen. The purpose of this study was to examine young children's abilities to function in a simple computer programming environment. Four- and…
ERIC Educational Resources Information Center
McAndrews, Gina M.; Mullen, Russell E.; Chadwick, Scott A.
2005-01-01
Multi-media learning tools were developed to enhance student learning for an introductory agronomy course at Iowa State University. During fall 2002, the new interactive computer program, called Computer Interactive Multimedia Program for Learning Enhancement (CIMPLE) was incorporated into the teaching, learning, and assessment processes of the…
von Arnim, Albrecht G.; Missra, Anamika
2017-01-01
Leading voices in the biological sciences have called for a transformation in graduate education leading to the PhD degree. One area commonly singled out for growth and innovation is cross-training in computational science. In 1998, the University of Tennessee (UT) founded an intercollegiate graduate program called the UT-ORNL Graduate School of Genome Science and Technology in partnership with the nearby Oak Ridge National Laboratory. Here, we report outcome data that attest to the program’s effectiveness in graduating computationally enabled biologists for diverse careers. Among 77 PhD graduates since 2003, the majority came with traditional degrees in the biological sciences, yet two-thirds moved into computational or hybrid (computational–experimental) positions. We describe the curriculum of the program and how it has changed. We also summarize how the program seeks to establish cohesion between computational and experimental biologists. This type of program can respond flexibly and dynamically to unmet training needs. In conclusion, this study from a flagship, state-supported university may serve as a reference point for creating a stable, degree-granting, interdepartmental graduate program in computational biology and allied areas. PMID:29167223
COMETT-CALLIOPE: The Implementation of Call Materials for Business and Industrial Purposes.
ERIC Educational Resources Information Center
Van Elsen, Edwig; And Others
The development of a Computer Assisted Language Learning for Information Organization and Production in Europe (CALLIOPE) program is discussed. CALLIOPE is a program launched by the European Community that is intended to provide computer-based foreign language instruction for the business and industrial environment. Program goals are two-fold: (1)…
ERIC Educational Resources Information Center
Marsch, Lisa A.; Bickel, Warren K.; Badger, Gary J.
2007-01-01
This manuscript reports on the development and evaluation of a computer-based substance abuse prevention program for middle school-aged adolescents, called "HeadOn: Substance Abuse Prevention for Grades 6-8TM". This self-guided program was designed to deliver effective drug abuse prevention science to youth via computer-based educational…
Integrating CALL into the Classroom: The Role of Podcasting in an ESL Listening Strategies Course
ERIC Educational Resources Information Center
O'Brien, Anne; Hegelheimer, Volker
2007-01-01
Despite the increase of teacher preparation programs that emphasize the importance of training teachers to select and develop appropriate computer-assisted language learning (CALL) materials, integration of CALL into classroom settings is still frequently relegated to the use of selected CALL activities to supplement instruction or to provide…
Analyzing Spacecraft Telecommunication Systems
NASA Technical Reports Server (NTRS)
Kordon, Mark; Hanks, David; Gladden, Roy; Wood, Eric
2004-01-01
Multi-Mission Telecom Analysis Tool (MMTAT) is a C-language computer program for analyzing proposed spacecraft telecommunication systems. MMTAT utilizes parameterized input and computational models that can be run on standard desktop computers to perform fast and accurate analyses of telecommunication links. MMTAT is easy to use and can easily be integrated with other software applications and run as part of almost any computational simulation. It is distributed as either a stand-alone application program with a graphical user interface or a linkable library with a well-defined set of application programming interface (API) calls. As a stand-alone program, MMTAT provides both textual and graphical output. The graphs make it possible to understand, quickly and easily, how telecommunication performance varies with variations in input parameters. A delimited text file that can be read by any spreadsheet program is generated at the end of each run. The API in the linkable-library form of MMTAT enables the user to control simulation software and to change parameters during a simulation run. Results can be retrieved either at the end of a run or by use of a function call at any time step.
Computers and Technological Forecasting
ERIC Educational Resources Information Center
Martino, Joseph P.
1971-01-01
Forecasting is becoming increasingly automated, thanks in large measure to the computer. It is now possible for a forecaster to submit his data to a computation center and call for the appropriate program. (No knowledge of statistics is required.) (Author)
Primer on Computer Graphics Programming. Revision
1982-04-01
[The indexed abstract consists only of OCR fragments of FORTRAN plotting-routine calls (UWRITl, UPRNTl, UMOVE, USET, UPRINT, UEND) from the primer's example listings; no readable abstract is available.]
Prins, Pjotr; Goto, Naohisa; Yates, Andrew; Gautier, Laurent; Willis, Scooter; Fields, Christopher; Katayama, Toshiaki
2012-01-01
Open-source software (OSS) encourages computer programmers to reuse software components written by others. In evolutionary bioinformatics, OSS comes in a broad range of programming languages, including C/C++, Perl, Python, Ruby, Java, and R. To avoid writing the same functionality multiple times for different languages, it is possible to share components by bridging computer languages and Bio* projects, such as BioPerl, Biopython, BioRuby, BioJava, and R/Bioconductor. In this chapter, we compare the two principal approaches for sharing software between different programming languages: either by remote procedure call (RPC) or by sharing a local call stack. RPC provides a language-independent protocol over a network interface; examples are RSOAP and Rserve. The local call stack provides a between-language mapping not over the network interface, but directly in computer memory; examples are R bindings, RPy, and languages sharing the Java Virtual Machine stack. This functionality provides strategies for sharing of software between Bio* projects, which can be exploited more often. Here, we present cross-language examples for sequence translation, and measure throughput of the different options. We compare calling into R through native R, RSOAP, Rserve, and RPy interfaces, with the performance of native BioPerl, Biopython, BioJava, and BioRuby implementations, and with call stack bindings to BioJava and the European Molecular Biology Open Software Suite. In general, call stack approaches outperform native Bio* implementations and these, in turn, outperform RPC-based approaches. To test and compare strategies, we provide a downloadable BioNode image with all examples, tools, and libraries included. The BioNode image can be run on VirtualBox-supported operating systems, including Windows, OSX, and Linux.
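As a plain-Python point of reference for the cross-language comparison described above, the sketch below implements sequence translation as a native in-process call; bridging the same operation into R (for example through RPy/rpy2) or over an RPC interface such as Rserve wraps this kind of function behind an in-memory or network boundary, which is the overhead the chapter measures. The codon table here is truncated to the codons used in the example.

```python
# Minimal codon table covering the example sequence only; a full table has 64 entries.
CODON_TABLE = {"ATG": "M", "GCC": "A", "TGG": "W", "TAA": "*"}

def translate(dna):
    """Translate a DNA string codon-by-codon (native in-process call)."""
    protein = []
    for i in range(0, len(dna) - len(dna) % 3, 3):
        protein.append(CODON_TABLE.get(dna[i:i + 3], "X"))
    return "".join(protein)

print(translate("ATGGCCTGGTAA"))   # -> "MAW*"
```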
The Ghost in the Machine: Are "Teacherless" CALL Programs Really Possible?
ERIC Educational Resources Information Center
Davies, Ted; Williamson, Rodney
1998-01-01
Reflects critically on pedagogical issues in the production of computer-assisted language learning (CALL) courseware and ways CALL has affected the practice of language learning. Concludes that if CALL is to reach full potential, it must be more than a simple medium of information; it should provide a teaching/learning process, with the real…
A Generalized-Compliant-Motion Primitive
NASA Technical Reports Server (NTRS)
Backes, Paul G.
1993-01-01
Computer program bridges gap between planning and execution of compliant robotic motions; it has been developed and installed in the control system of a telerobot. Called the "generalized-compliant-motion primitive," it is one of several task-execution-primitive computer programs that receive commands from higher-level task-planning programs and execute them by generating the required trajectories and applying appropriate control laws. The program comprises four parts corresponding to nominal motion, compliant motion, ending motion, and monitoring. Written in C language.
Adaptation of a program for nonlinear finite element analysis to the CDC STAR 100 computer
NASA Technical Reports Server (NTRS)
Pifko, A. B.; Ogilvie, P. L.
1978-01-01
The conversion of a nonlinear finite element program to the CDC STAR 100 pipeline computer is discussed. The program, called DYCAST, was developed for the crash simulation of structures. Initial results with the STAR 100 computer indicated that significant gains in computation time are possible for operations on global arrays. However, for element-level computations that do not lend themselves easily to long vector processing, the STAR 100 was slower than comparable scalar computers. On this basis it is concluded that in order for pipeline computers to impact the economic feasibility of large nonlinear analyses it is absolutely essential that algorithms be devised to improve the efficiency of element-level computations.
Computer programs for generation and evaluation of near-optimum vertical flight profiles
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Waters, M. H.; Patmore, L. C.
1983-01-01
Two extensive computer programs were developed. The first, called OPTIM, generates a reference near-optimum vertical profile, and it contains control options so that the effects of various flight constraints on cost performance can be examined. The second, called TRAGEN, is used to simulate an aircraft flying along an optimum or any other vertical reference profile. TRAGEN is used to verify OPTIM's output, examine the effects of uncertainty in the values of parameters (such as prevailing wind) which govern the optimum profile, or compare the cost performance of profiles generated by different techniques. A general description of these programs, the efforts to add special features to them, and sample results of their usage are presented.
NASA Technical Reports Server (NTRS)
Goltz, G.; Kaiser, L. M.; Weiner, H.
1977-01-01
A computer program has been developed for designing and analyzing the performance of solar array/battery power systems for the U.S. Coast Guard Navigational Aids. This program is called the Design Synthesis/Performance Analysis (DSPA) Computer Program. The basic function of the Design Synthesis portion of the DSPA program is to evaluate functional and economic criteria to provide specifications for viable solar array/battery power systems. The basic function of the Performance Analysis portion of the DSPA program is to simulate the operation of solar array/battery power systems under specific loads and environmental conditions. This document establishes the software requirements for the DSPA computer program, discusses the processing that occurs within the program, and defines the necessary interfaces for operation.
A DNA sequence analysis package for the IBM personal computer.
Lagrimini, L M; Brentano, S T; Donelson, J E
1984-01-01
We present here a collection of DNA sequence analysis programs, called "PC Sequence" (PCS), which are designed to run on the IBM Personal Computer (PC). These programs are written in IBM PC compiled BASIC and take full advantage of the IBM PC's speed, error handling, and graphics capabilities. For a modest initial expense in hardware any laboratory can use these programs to quickly perform computer analysis on DNA sequences. They are written with the novice user in mind and require very little training or previous experience with computers. Also provided are a text editing program for creating and modifying DNA sequence files and a communications program which enables the PC to communicate with and collect information from mainframe computers and DNA sequence databases. PMID:6546433
Grow--a computer subroutine that projects the growth of trees in the Lake States' forests.
Gary J. Brand
1981-01-01
A computer subroutine, Grow, has been written in 1977 Standard FORTRAN to implement a distance-independent, individual tree growth model for Lake States' forests. Grow is a small and easy-to-use version of the growth model. All the user has to do is write a calling program to read initial conditions, call Grow, and summarize the results.
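The calling-program pattern described above (read initial conditions, call Grow, summarize the results) can be sketched as follows; the grow function here is a Python stand-in for the FORTRAN subroutine, and its growth increment is an invented placeholder rather than the Lake States model.

```python
def grow(trees, years):
    """Stand-in for the Grow subroutine: apply a simple, invented diameter increment."""
    return [{"species": t["species"], "dbh_cm": t["dbh_cm"] + 0.4 * years}
            for t in trees]

# Calling program: read initial conditions, call grow, summarize the results.
stand = [{"species": "aspen", "dbh_cm": 12.0}, {"species": "red pine", "dbh_cm": 20.0}]
stand = grow(stand, years=10)
print("mean dbh after 10 years:", sum(t["dbh_cm"] for t in stand) / len(stand))
```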
NERVA dynamic analysis methodology, SPRVIB
NASA Technical Reports Server (NTRS)
Vronay, D. F.
1972-01-01
The general dynamic computer code called SPRVIB (Spring Vib) developed in support of the NERVA (nuclear engine for rocket vehicle application) program is described. Using normal mode techniques, the program computes kinematical responses of a structure caused by various combinations of harmonic and elliptic forcing functions or base excitations. Provision is made for a graphical type of force or base excitation input to the structure. A description of the required input format and a listing of the program are presented, along with several examples illustrating the use of the program. SPRVIB is written in FORTRAN 4 computer language for use on the CDC 6600 or the IBM 360/75 computers.
Analysis and selection of optimal function implementations in massively parallel computer
Archer, Charles Jens [Rochester, MN; Peters, Amanda [Rochester, MN; Ratterman, Joseph D [Rochester, MN
2011-05-31
An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
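A toy Python illustration of the selection approach described in this record (not the patented code): each implementation of a function is timed over a set of input sizes, and a dispatcher is generated that calls whichever implementation performed best for the nearest benchmarked size. The implementations and sizes are invented for the example.

```python
import time

def python_loop_sum(data):
    """A deliberately naive implementation to contrast with the built-in."""
    total = 0
    for x in data:
        total += x
    return total

def benchmark(impls, sizes):
    """Record which implementation is fastest at each benchmarked input size."""
    best = {}
    for n in sizes:
        data = list(range(n))
        timings = {}
        for name, fn in impls.items():
            t0 = time.perf_counter()
            fn(data)
            timings[name] = time.perf_counter() - t0
        best[n] = min(timings, key=timings.get)
    return best

def make_dispatcher(impls, best):
    """Generated 'selection code': call the implementation tuned for the nearest size."""
    def dispatch(data):
        n = min(best, key=lambda size: abs(size - len(data)))
        return impls[best[n]](data)
    return dispatch

impls = {"builtin": sum, "python_loop": python_loop_sum}
best = benchmark(impls, sizes=[10, 100_000])
fast_sum = make_dispatcher(impls, best)
print(fast_sum(list(range(1000))))
```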
Formal and Informal CALL Preparation and Teacher Attitude toward Technology
ERIC Educational Resources Information Center
Kessler, Greg
2007-01-01
Recent research suggests that there is a general lack of a computer-assisted language learning (CALL) presence in teacher preparation programs. There is also evidence that teachers obtain a majority of their CALL knowledge from informal sources and personal experience rather than through formalized preparation. Further, graduates of these programs…
NASA Technical Reports Server (NTRS)
Spuler, Linda M.; Ford, Patricia K.; Skeete, Darren C.; Hershman, Scot; Raviprakash, Pushpa; Arnold, John W.; Tran, Victor; Haenze, Mary Alice
2005-01-01
"Close Call Action Log Form" ("CCALF") is the name of both a computer program and a Web-based service provided by the program for creating an enhanced database of close calls (in the colloquial sense of mishaps that were avoided by small margins) assigned to the Center Operations Directorate (COD) at Johnson Space Center. CCALF provides a single facility for on-line collaborative review of close calls. Through CCALF, managers can delegate responses to employees. CCALF utilizes a pre-existing e-mail system to notify managers that there are close calls to review, but eliminates the need for the prior practices of passing multiple e-mail messages around the COD, then collecting and consolidating them into final responses: CCALF now collects comments from all responders for incorporation into reports that it generates. Also, whereas it was previously necessary to manually calculate metrics (e.g., numbers of maintenance-work orders necessitated by close calls) for inclusion in the reports, CCALF now computes the metrics, summarizes them, and displays them in graphical form. The reports and all pertinent information used to generate the reports are logged, tracked, and retained by CCALF for historical purposes.
The engineering design integration (EDIN) system. [digital computer program complex
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Reiners, S. J.
1974-01-01
A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.
Computer Code for Transportation Network Design and Analysis
DOT National Transportation Integrated Search
1977-01-01
This document describes the results of research into the application of the mathematical programming technique of decomposition to practical transportation network problems. A computer code called Catnap (for Control Analysis Transportation Network A...
NASA Technical Reports Server (NTRS)
Mullins, N. E.; Dao, N. C.; Martin, T. V.; Goad, C. C.; Boulware, N. L.; Chin, M. M.
1972-01-01
A computer program providing executive control of orbit integration for artificial satellites is presented. At the beginning of each arc, the program initializes the required constants as well as the variational partials at epoch. If the epoch needs to be reset to a previous time, the program negates the stepsize and calls for integration backward to the desired time. After backward integration is completed, the program resets the stepsize to the proper positive quantity.
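The backward-integration behavior described above can be illustrated with a simple fixed-step Euler integrator; this is a hedged sketch of the stepsize-negation idea, not the actual executive routine. Integrating forward and then back with the restored positive stepsize approximately recovers the initial state.

```python
def integrate(state, derivative, t_start, t_end, step):
    """Fixed-step Euler integration; a negated stepsize integrates backward in time."""
    t, y = t_start, state
    if t_end < t_start:
        step = -abs(step)          # negate the stepsize for backward integration
    while (step > 0 and t < t_end) or (step < 0 and t > t_end):
        y = y + step * derivative(t, y)
        t = t + step
    return t, y

# dy/dt = y: integrate forward to t = 1, then back to t = 0 with a positive stepsize.
t, y = integrate(1.0, lambda t, y: y, 0.0, 1.0, 0.001)
t, y = integrate(y, lambda t, y: y, t, 0.0, 0.001)
print(round(y, 2))   # approximately 1.0: the initial value is recovered
```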
Instructional Design: Its Relevance for CALL.
ERIC Educational Resources Information Center
England, Elaine
1989-01-01
Describes an interdisciplinary (language and educational technology departments) instructional design program that is intended to develop back-up computer programs for students taking supplementary English as a second language classes. The program encompasses training programs, the psychology of screen reading, task analysis, and color cueing.…
A Method of Characteristics Computer Program for Three-Dimensional Supersonic Internal Flows
1979-01-01
The results are in good agreement with those from a well-established computer program, the Lockheed axisymmetric MOC computer program (Ref. 6), which is well verified and widely used. [The remainder of the indexed text consists of OCR fragments of a FORTRAN listing and is not reproduced here.]
ERIC Educational Resources Information Center
von Arnim, Albrecht G.; Missra, Anamika
2017-01-01
Leading voices in the biological sciences have called for a transformation in graduate education leading to the PhD degree. One area commonly singled out for growth and innovation is cross-training in computational science. In 1998, the University of Tennessee (UT) founded an intercollegiate graduate program called the UT-ORNL Graduate School of…
Computer Language For Optimization Of Design
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.; Lucas, Stephen H.
1991-01-01
SOL is computer language geared to solution of design problems. Includes mathematical modeling and logical capabilities of computer language like FORTRAN; also includes additional power of nonlinear mathematical programming methods at language level. SOL compiler takes SOL-language statements and generates equivalent FORTRAN code and system calls. Provides syntactic and semantic checking for recovery from errors and provides detailed reports containing cross-references to show where each variable is used. Implemented on VAX/VMS computer systems. Requires VAX FORTRAN compiler to produce executable program.
Flight Training for a Pilot Program.
ERIC Educational Resources Information Center
Gunter, Mary
1995-01-01
A computer-based curriculum program called Computers Helping Instruction and Learning Development (Project CHILD) has been tested in 82 classrooms in 10 elementary schools in Okaloosa County, Florida. As part of a sixth-grade follow-up study, students in Project CHILD had a B average in math and language arts versus a C average for students in a…
ERIC Educational Resources Information Center
Sussex, Roland
1991-01-01
Considers how the effectiveness of computer-assisted language learning (CALL) has been hampered by language teachers who lack programing and software engineering expertise, and explores the limitations and potential contributions of author languages, programs, and environments in increasing the range of options for language teachers who are not…
The ZAP Project: Designing Interactive Computer Tools for Learning Psychology
ERIC Educational Resources Information Center
Hulshof, Casper; Eysink, Tessa; de Jong, Ton
2006-01-01
In the ZAP project, a set of interactive computer programs called "ZAPs" was developed. The programs were designed in such a way that first-year students experience psychological phenomena in a vivid and self-explanatory way. Students can either take the role of participant in a psychological experiment, they can experience phenomena themselves,…
Computer Technology for Industry
NASA Technical Reports Server (NTRS)
1982-01-01
Shell Oil Company used a COSMIC program called VISCEL to ensure the accuracy of the company's new computer code for analyzing polymers and chemical compounds. Shell reported that there were no other programs available that could provide the necessary calculations. Shell produces chemicals for plastic products used in the manufacture of automobiles, housewares, appliances, film, textiles, electronic equipment and furniture.
von Arnim, Albrecht G; Missra, Anamika
2017-01-01
Leading voices in the biological sciences have called for a transformation in graduate education leading to the PhD degree. One area commonly singled out for growth and innovation is cross-training in computational science. In 1998, the University of Tennessee (UT) founded an intercollegiate graduate program called the UT-ORNL Graduate School of Genome Science and Technology in partnership with the nearby Oak Ridge National Laboratory. Here, we report outcome data that attest to the program's effectiveness in graduating computationally enabled biologists for diverse careers. Among 77 PhD graduates since 2003, the majority came with traditional degrees in the biological sciences, yet two-thirds moved into computational or hybrid (computational-experimental) positions. We describe the curriculum of the program and how it has changed. We also summarize how the program seeks to establish cohesion between computational and experimental biologists. This type of program can respond flexibly and dynamically to unmet training needs. In conclusion, this study from a flagship, state-supported university may serve as a reference point for creating a stable, degree-granting, interdepartmental graduate program in computational biology and allied areas.
ERIC Educational Resources Information Center
Zillesen, P. G. van Schaick; And Others
Instructional feedback given to the learners during computer simulation sessions may be greatly improved by integrating educational computer simulation programs with hypermedia-based computer-assisted learning (CAL) materials. A prototype of a learning environment of this type called BRINE PURIFICATION was developed for use in corporate training…
ERIC Educational Resources Information Center
Goodgion, Laurel; And Others
1986-01-01
Eight articles in special supplement to "Library Journal" and "School Library Journal" cover a computer program called "Byte into Books"; microcomputers and the small library; creating databases with students; online searching with a microcomputer; quality automation software; Meckler Publishing Company's…
Bayram, Tuncay; Sönmez, Bircan
2012-04-01
In this study, we aimed to develop a computer program that calculates the approximate radiation dose received by the embryo/fetus in nuclear medicine applications. Radiation dose values per unit of administered activity (per MBq) received by the embryo/fetus were gathered from the literature for various stages of pregnancy. These values were embedded in the computer code, which was written in the Fortran 90 programming language. The computer program, called nmfdose, covers almost all radiopharmaceuticals used in nuclear medicine applications. The approximate radiation dose received by the embryo/fetus can be calculated easily in a few steps using this program. Although there are some constraints on using the program for some special cases, nmfdose is useful and provides a practical solution for calculating the approximate dose to the embryo/fetus in nuclear medicine applications.
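The calculation nmfdose automates reduces to multiplying the administered activity by a gestation-stage-specific dose coefficient. The sketch below shows that arithmetic; the coefficient values are placeholders for illustration, not the literature values embedded in the program.

```python
# Dose coefficients in mGy per MBq administered; the values here are placeholders,
# NOT the literature values embedded in nmfdose.
DOSE_COEFF = {
    ("Tc-99m MDP", "early pregnancy"): 4.7e-3,
    ("Tc-99m MDP", "3 months"):        2.0e-3,
}

def fetal_dose(radiopharmaceutical, stage, administered_MBq):
    """Approximate embryo/fetus dose = administered activity x dose coefficient."""
    return administered_MBq * DOSE_COEFF[(radiopharmaceutical, stage)]

# Hypothetical 740 MBq bone scan example.
print(fetal_dose("Tc-99m MDP", "early pregnancy", 740.0), "mGy")
```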
NASA Technical Reports Server (NTRS)
Wrenn, Gregory A.
2005-01-01
This report describes a database routine called DB90 which is intended for use with scientific and engineering computer programs. The software is written in the Fortran 90/95 programming language standard with file input and output routines written in the C programming language. These routines should be completely portable to any computing platform and operating system that has Fortran 90/95 and C compilers. DB90 allows a program to supply relation names and up to 5 integer key values to uniquely identify each record of each relation. This permits the user to select records or retrieve data in any desired order.
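A minimal Python analogue of the record-identification scheme described for DB90 (a relation name plus up to five integer key values uniquely identifying each record); this illustrates the interface idea only and is not DB90's Fortran implementation.

```python
class KeyedStore:
    """Records addressed by a relation name and up to 5 integer key values."""
    def __init__(self):
        self._records = {}

    def put(self, relation, keys, record):
        assert 1 <= len(keys) <= 5, "up to 5 integer keys"
        self._records[(relation, tuple(keys))] = record

    def get(self, relation, keys):
        return self._records[(relation, tuple(keys))]

db = KeyedStore()
db.put("loads", (3, 1), {"node": 3, "case": 1, "force_N": 120.0})
print(db.get("loads", (3, 1)))
```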
NASA Technical Reports Server (NTRS)
1995-01-01
As a Jet Propulsion Laboratory astronomer, John D. Callahan developed a computer program called Multimission Interactive Planner (MIP) to help astronomers analyze scientific and optical data collected on the Voyager's Grand Tour. The commercial version of the program called XonVu is published by XonTech, Inc. Callahan has since developed two more advanced programs based on MIP technology, Grand Tour and Jovian Traveler, which simulate Voyager and Giotto missions. The software allows astronomers and space novices to view the objects seen by the spacecraft, manipulating perspective, distance and field of vision.
NASA Technical Reports Server (NTRS)
Horn, W. J.; Carlson, L. A.
1983-01-01
A FORTRAN computer program called THERMTRAJ is presented which can be used to compute the trajectory of high altitude scientific zero pressure balloons from launch through all subsequent phases of the balloon flight. In addition, balloon gas and film temperatures can be computed at every point of the flight. The program has the ability to account for ballasting, changes in cloud cover, variable atmospheric temperature profiles, and both unconditional valving and scheduled valving of the balloon gas. The program was verified for an extensive range of balloon sizes (from 0.5 to 41.47 million cubic feet). Instructions on program usage, listing of the program source deck, input data and printed and plotted output for a verification case are included.
NASA Technical Reports Server (NTRS)
Amling, G. E.; Holms, A. G.
1973-01-01
A computer program is described that performs a statistical multiple-decision procedure called chain pooling. It uses a number of mean squares assigned to error variance that is conditioned on the relative magnitudes of the mean squares. The model selection is done according to user-specified levels of type 1 or type 2 error probabilities.
Computer Augmented Learning; A Survey.
ERIC Educational Resources Information Center
Kindred, J.
The report contains a description and summary of computer augmented learning devices and systems. The devices are of two general types: programed instruction systems based on the teaching machines pioneered by Pressey and developed by Skinner, and the so-called "docile" systems that permit greater user-direction with the computer under student…
Implications of Windowing Techniques for CAI.
ERIC Educational Resources Information Center
Heines, Jesse M.; Grinstein, Georges G.
This paper discusses the use of a technique called windowing in computer assisted instruction to allow independent control of functional areas in complex CAI displays and simultaneous display of output from a running computer program and coordinated instructional material. Two obstacles to widespread use of CAI in computer science courses are…
Applications of Geocoding and Mapping.
ERIC Educational Resources Information Center
Costa, Crist H.
The application of computer programing to construction of maps and geographic distributions of data has been called geocoding. This new use of the computer allows much more rapid analysis of various demographic characteristics. In particular, this paper describes the use of computer geocoding in the development of a plot of student density in…
VENVAL : a plywood mill cost accounting program
Henry Spelter
1991-01-01
This report documents a package of computer programs called VENVAL. These programs prepare plywood mill data for a linear programming (LP) model that, in turn, calculates the optimum mix of products to make, given a set of technologies and market prices. (The software to solve a linear program is not provided and must be obtained separately.) Linear programming finds...
Designing Templates for Interactive Tasks in CALL Tutorials.
ERIC Educational Resources Information Center
Ruhlmann, Felicitas
The development of templates for computer-assisted language learning (CALL) is discussed, based on experiences with primarily linear multimedia tutorial programs. Design of templates for multiple-choice questions and interactive tasks in a prototype module is described. Possibilities of enhancing interactivity by introducing problem-oriented…
ERIC Educational Resources Information Center
Ates, Alev; Altunay, Ugur; Altun, Eralp
2006-01-01
The aim of this research was to discern the effects of computer assisted English instruction on English language preparatory students' attitudes towards computers and English in a Turkish-medium high school with an intensive English program. A quasi-experimental time series research design, also called "before-after" or "repeated…
Generalized dynamic engine simulation techniques for the digital computer
NASA Technical Reports Server (NTRS)
Sellers, J.; Teren, F.
1974-01-01
Recently advanced simulation techniques have been developed for the digital computer and used as the basis for development of a generalized dynamic engine simulation computer program, called DYNGEN. This computer program can analyze the steady state and dynamic performance of many kinds of aircraft gas turbine engines. Without changes to the basic program, DYNGEN can analyze one- or two-spool turbofan engines. The user must supply appropriate component performance maps and design-point information. Examples are presented to illustrate the capabilities of DYNGEN in the steady state and dynamic modes of operation. The analytical techniques used in DYNGEN are briefly discussed, and its accuracy is compared with a comparable simulation using the hybrid computer. The impact of DYNGEN and similar all-digital programs on future engine simulation philosophy is also discussed.
Generalized dynamic engine simulation techniques for the digital computer
NASA Technical Reports Server (NTRS)
Sellers, J.; Teren, F.
1974-01-01
Recently advanced simulation techniques have been developed for the digital computer and used as the basis for development of a generalized dynamic engine simulation computer program, called DYNGEN. This computer program can analyze the steady state and dynamic performance of many kinds of aircraft gas turbine engines. Without changes to the basic program, DYNGEN can analyze one- or two-spool turbofan engines. The user must supply appropriate component performance maps and design-point information. Examples are presented to illustrate the capabilities of DYNGEN in the steady state and dynamic modes of operation. The analytical techniques used in DYNGEN are briefly discussed, and its accuracy is compared with a comparable simulation using the hybrid computer. The impact of DYNGEN and similar all-digital programs on future engine simulation philosophy is also discussed.
Generalized dynamic engine simulation techniques for the digital computers
NASA Technical Reports Server (NTRS)
Sellers, J.; Teren, F.
1975-01-01
Recently advanced simulation techniques have been developed for the digital computer and used as the basis for development of a generalized dynamic engine simulation computer program, called DYNGEN. This computer program can analyze the steady state and dynamic performance of many kinds of aircraft gas turbine engines. Without changes to the basic program, DYNGEN can analyze one- or two-spool turbofan engines. The user must supply appropriate component performance maps and design point information. Examples are presented to illustrate the capabilities of DYNGEN in the steady state and dynamic modes of operation. The analytical techniques used in DYNGEN are briefly discussed, and its accuracy is compared with a comparable simulation using the hybrid computer. The impact of DYNGEN and similar digital programs on future engine simulation philosophy is also discussed.
An Integrated Development Environment for Adiabatic Quantum Programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S; McCaskey, Alex; Bennink, Ryan S
2014-01-01
Adiabatic quantum computing is a promising route to the computational power afforded by quantum information processing. The recent availability of adiabatic hardware raises the question of how well quantum programs perform. Benchmarking behavior is challenging since the multiple steps to synthesize an adiabatic quantum program are highly tunable. We present an adiabatic quantum programming environment called JADE that provides control over all the steps taken during program development. JADE captures the workflow needed to rigorously benchmark performance while also allowing a variety of problem types, programming techniques, and processor configurations. We have also integrated JADE with a quantum simulation engine that enables program profiling using numerical calculation. The computational engine supports plug-ins for simulation methodologies tailored to various metrics and computing resources. We present the design, integration, and deployment of JADE and discuss its use for benchmarking adiabatic quantum programs.
Pan Air Geometry Management System (PAGMS): A data-base management system for PAN AIR geometry data
NASA Technical Reports Server (NTRS)
Hall, J. F.
1981-01-01
A data-base management system called PAGMS was developed to facilitate the data transfer in applications computer programs that create, modify, plot or otherwise manipulate PAN AIR type geometry data in preparation for input to the PAN AIR system of computer programs. PAGMS is composed of a series of FORTRAN callable subroutines which can be accessed directly from applications programs. Currently only a NOS version of PAGMS has been developed.
Nofre, David; Priestley, Mark; Alberts, Gerard
2014-01-01
Language is one of the central metaphors around which the discipline of computer science has been built. The language metaphor entered modern computing as part of a cybernetic discourse, but during the second half of the 1950s acquired a more abstract meaning, closely related to the formal languages of logic and linguistics. The article argues that this transformation was related to the appearance of the commercial computer in the mid-1950s. Managers of computing installations and specialists on computer programming in academic computer centers, confronted with an increasing variety of machines, called for the creation of "common" or "universal languages" to enable the migration of computer code from machine to machine. Finally, the article shows how the idea of a universal language was a decisive step in the emergence of programming languages, in the recognition of computer programming as a proper field of knowledge, and eventually in the way we think of the computer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goltz, G.; Weiner, H.
A computer program has been developed for designing and analyzing the performance of solar array/battery power systems for the U.S. Coast Guard Navigational Aids. This program is called the Design Synthesis/Performance Analysis (DSPA) Computer Program. The basic function of the Design Synthesis portion of the DSPA program is to evaluate functional and economic criteria to provide specifications for viable solar array/battery power systems. The basic function of the Performance Analysis portion of the DSPA program is to simulate the operation of solar array/battery power systems under specific loads and environmental conditions. This document provides a detailed description of the DSPA Computer Program system and its subprograms. This manual will assist the programmer in revising or updating the several subprograms.
NASA Technical Reports Server (NTRS)
Everhart, J. L.
1983-01-01
A program called FLEXWAL for calculating wall modifications for solid, adaptive-wall wind tunnels is presented. The method used is the iterative technique of NASA TP-2081 and is applicable to subsonic and transonic test conditions. The program usage, program listing, and a sample case are given.
ERIC Educational Resources Information Center
Little, Joyce Currie
Academic computer departments, whether called by this name or by others such as the department of computer science or data programing, can be of great assistance to other departments in the two-year college. Faculty in other departments need to know about computer applications in their fields, require assistance in the development of curriculum…
NASA Technical Reports Server (NTRS)
Mathur, F. P.
1972-01-01
Description of an on-line interactive computer program called CARE (Computer-Aided Reliability Estimation) which can model self-repair and fault-tolerant organizations and perform certain other functions. Essentially CARE consists of a repository of mathematical equations defining the various basic redundancy schemes. These equations, under program control, are then interrelated to generate the desired mathematical model to fit the architecture of the system under evaluation. The mathematical model is then supplied with ground instances of its variables and is then evaluated to generate values for the reliability-theoretic functions applied to the model.
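As a flavor of the kind of redundancy equation such a repository holds, the standard triple-modular-redundancy expression R_TMR = 3R^2 - 2R^3 (the system survives if at least two of three modules survive) can be evaluated for ground instances of the module reliability R, in the spirit of, though not taken from, CARE.

```python
def tmr_reliability(r):
    """Reliability of a triple-modular-redundant system with a perfect voter:
    the system survives if at least 2 of the 3 modules survive."""
    return 3 * r**2 - 2 * r**3

for r in (0.90, 0.95, 0.99):
    print(f"module R = {r:.2f}  ->  TMR R = {tmr_reliability(r):.4f}")
```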
Computer aided design of monolithic microwave and millimeter wave integrated circuits and subsystems
NASA Astrophysics Data System (ADS)
Ku, Walter H.; Gang, Guan-Wan; He, J. Q.; Ichitsubo, I.
1988-05-01
This final technical report presents results on the computer aided design of monolithic microwave and millimeter wave integrated circuits and subsystems. New results include analytical and computer aided device models of GaAs MESFETs and HEMTs or MODFETs, new synthesis techniques for monolithic feedback and distributed amplifiers and a new nonlinear CAD program for MIMIC called CADNON. This program incorporates the new MESFET and HEMT model and has been successfully applied to the design of monolithic millimeter-wave mixers.
Proceduracy: Computer Code Writing in the Continuum of Literacy
ERIC Educational Resources Information Center
Vee, Annette
2010-01-01
This dissertation looks at computer programming through the lens of literacy studies, building from the concept of code as a written text with expressive and rhetorical power. I focus on the intersecting technological and social factors of computer code writing as a literacy--a practice I call "proceduracy". Like literacy, proceduracy is a human…
Performance Analysis of an Actor-Based Distributed Simulation
NASA Technical Reports Server (NTRS)
Schoeffler, James D.
1998-01-01
Object-oriented design of simulation programs appears to be very attractive because of the natural association of components in the simulated system with objects. There is great potential in distributing the simulation across several computers for the purpose of parallel computation and its consequent handling of larger problems in less elapsed time. One approach to such a design is to use "actors", that is, active objects with their own thread of control. Because these objects execute concurrently, communication is via messages. This is in contrast to an object-oriented design using passive objects where communication between objects is via method calls (direct calls when they are in the same address space and remote procedure calls when they are in different address spaces or different machines). This paper describes a performance analysis program for the evaluation of a design for distributed simulations based upon actors.
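As a rough sketch, in Python and with all names hypothetical rather than taken from the simulation described above, an actor can be modeled as an object that owns a thread and a mailbox and is driven only by messages, in contrast to a passive object invoked through direct method calls:

    import queue
    import threading

    class Actor:
        """An active object: owns its own thread of control and a mailbox of messages."""
        def __init__(self):
            self.mailbox = queue.Queue()
            self._thread = threading.Thread(target=self._run, daemon=True)
            self._thread.start()

        def send(self, msg):
            """Asynchronous message passing; callers never invoke the actor's methods directly."""
            self.mailbox.put(msg)

        def _run(self):
            while True:
                msg = self.mailbox.get()
                if msg is None:                  # a None message shuts the actor down
                    break
                self.receive(msg)

        def receive(self, msg):
            raise NotImplementedError

    class Integrator(Actor):
        """Toy simulation component: advances its state once per time-step message."""
        def __init__(self):
            self.state = 0.0
            super().__init__()

        def receive(self, msg):
            self.state += msg["dt"] * msg["rate"]

    sim = Integrator()
    for _ in range(10):
        sim.send({"dt": 0.1, "rate": 2.0})
    sim.send(None)
    sim._thread.join()
    print(sim.state)                             # about 2.0

In a distributed setting the mailbox would be fed by network messages rather than an in-process queue, which is where the performance questions studied in the paper arise.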
Preliminary structural sizing of a Mach 3.0 high-speed civil transport model
NASA Technical Reports Server (NTRS)
Blackburn, Charles L.
1992-01-01
An analysis has been performed pertaining to the structural resizing of a candidate Mach 3.0 High Speed Civil Transport (HSCT) conceptual design using a computer program called EZDESIT. EZDESIT is a computer program which integrates the PATRAN finite element modeling program with the COMET finite element analysis program for the purpose of calculating element sizes or cross sectional dimensions. The purpose of the present report is to document the procedure used in accomplishing the preliminary structural sizing and to present the corresponding results.
The TeraShake Computational Platform for Large-Scale Earthquake Simulations
NASA Astrophysics Data System (ADS)
Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas
Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for larger problems. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM’s BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.
NASA Technical Reports Server (NTRS)
Smith, R. E.; Pitts, J. I.; Lambiotte, J. J., Jr.
1978-01-01
The computer program FLO-22 for analyzing inviscid transonic flow past 3-D swept-wing configurations was modified to use vector operations and run on the STAR-100 computer. The vectorized version described herein was called FLO-22-V1. Vector operations were incorporated into Successive Line Over-Relaxation in the transformed horizontal direction. Vector relational operations and control vectors were used to implement upwind differencing at supersonic points. A high speed of computation and extended grid domain were characteristics of FLO-22-V1. The new program was not the optimal vectorization of Successive Line Over-Relaxation applied to transonic flow; however, it proved that vector operations can readily be implemented to increase the computation rate of the algorithm.
1988-09-01
software programs capable of being used on a microcomputer will be considered for analysis. No software intended for use on a miniframe or mainframe...Dial-A-Log consists of a program written in a computer language called L-10 that is run on a DEC-20 miniframe. The combination of the specific...proliferation of software dealing with microcomputers. Instead, they were geared more towards managing the use of miniframe or mainframe computer
Attitude determination using digital earth pictures
NASA Technical Reports Server (NTRS)
Gunshol, L. P.
1975-01-01
A computer program called PICATT is reported, which stands for picture attitude determination. Described are the particular satellite to which this technique of attitude determination has been applied, the method of solution, and the results that have been attained using the PICATT program.
ALCF Data Science Program: Productive Data-centric Supercomputing
NASA Astrophysics Data System (ADS)
Romero, Nichols; Vishwanath, Venkatram
The ALCF Data Science Program (ADSP) is targeted at big data science problems that require leadership computing resources. The goal of the program is to explore and improve a variety of computational methods that will enable data-driven discoveries across all scientific disciplines. The projects will focus on data science techniques covering a wide area of discovery including but not limited to uncertainty quantification, statistics, machine learning, deep learning, databases, pattern recognition, image processing, graph analytics, data mining, real-time data analysis, and complex and interactive workflows. Project teams will be among the first to access Theta, ALCF's forthcoming 8.5 petaflops Intel/Cray system. The program will transition to the 200 petaflop/s Aurora supercomputing system when it becomes available. In 2016, four projects were selected to kick off the ADSP. The selected projects span experimental and computational sciences and range from modeling the brain to discovering new materials for solar-powered windows to simulating collision events at the Large Hadron Collider (LHC). The program will have a regular call for proposals, with the next call expected in Spring 2017 (http://www.alcf.anl.gov/alcf-data-science-program). This research used resources of the ALCF, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.
An application of artificial intelligence to the interpretation of mass spectra.
NASA Technical Reports Server (NTRS)
Buchanan, B. G.; Duffield, A. M.; Robertson, A. V.
1971-01-01
Description of the DENDRAL (Dendritic Algorithm) project, the objectives of which were to base the computer program on an algorithm that generates an exhaustive, nonredundant list of all the structural isomers of a given chemical composition, and to devise a computer program that would perform an organic structure determination, given a molecular formula and a mass spectrum. This program is called 'Heuristic DENDRAL', and it operates by using the known structure/spectrum correlations to constrain the DENDRAL isomer generator to produce a single isomer for that composition. The collaboration of chemists and computer scientists has produced a tool of some practical utility from the chemical viewpoint, and an interesting program from the viewpoint of artificial intelligence.
Teaching Differential Diagnosis by Computer: A Pathophysiological Approach
ERIC Educational Resources Information Center
Goroll, Allan H.; And Others
1977-01-01
An interactive, computer-based teaching exercise in diagnosis that emphasizes pathophysiology in the analysis of clinical data is described. Called the Jaundice Program, its objective is to simplify the pattern recognition problem by relating clinical findings to diagnosis via reference to disease mechanisms. (LBH)
Have Your Computer Call My Computer.
ERIC Educational Resources Information Center
Carabi, Peter
1992-01-01
As more school systems adopt site-based management, local decision makers need greater access to all kinds of information. Microcomputer-based networks can help with classroom management, scheduling, student program design, counselor recommendations, and financial reporting operations. Administrators are provided with planning tips and a sample…
Mentat: An object-oriented macro data flow system
NASA Technical Reports Server (NTRS)
Grimshaw, Andrew S.; Liu, Jane W. S.
1988-01-01
Mentat, an object-oriented macro data flow system designed to facilitate parallelism in distributed systems, is presented. The macro data flow model is a model of computation similar to the data flow model with two principal differences: the computational complexity of the actors is much greater than in traditional data flow systems, and there are persistent actors that maintain state information between executions. Mentat is a system that combines the object-oriented programming paradigm and the macro data flow model of computation. Mentat programs use a dynamic structure called a future list to represent the future of computations.
Computing Education in Children's Early Years: A Call for Debate
ERIC Educational Resources Information Center
Manches, Andrew; Plowman, Lydia
2017-01-01
International changes in policy and curricula (notably recent developments in England) have led to a focus on the role of computing education in the early years. As interest in the potential of computing education has increased, there has been a proliferation of programming tools designed for young children. While these changes are broadly to be…
Interactive algebraic grid-generation technique
NASA Technical Reports Server (NTRS)
Smith, R. E.; Wiese, M. R.
1986-01-01
An algebraic grid generation technique and use of an associated interactive computer program are described. The technique, called the two boundary technique, is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are referred to as the bottom and top, and they are defined by two ordered sets of points. Left and right side boundaries which intersect the bottom and top boundaries may also be specified by two ordered sets of points. When side boundaries are specified, linear blending functions are used to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth-cubic-spline functions is presented. The technique works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. An interactive computer program based on the technique and called TBGG (two boundary grid generation) is also described.
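The listing below is not TBGG; it is only a minimal Python sketch of the two-boundary idea, filling interior points by cubic Hermite blending between a bottom and a top boundary, with simple straight-line tangents standing in for the control functions TBGG uses to cluster grid points.

    import numpy as np

    def hermite_blend(p_bot, t_bot, p_top, t_top, s):
        """Cubic Hermite interpolation between two boundary points with prescribed tangents."""
        h00 = 2*s**3 - 3*s**2 + 1
        h10 = s**3 - 2*s**2 + s
        h01 = -2*s**3 + 3*s**2
        h11 = s**3 - s**2
        return h00*p_bot + h10*t_bot + h01*p_top + h11*t_top

    def two_boundary_grid(bottom, top, n_eta):
        """bottom, top: (n_xi, 2) arrays of boundary points; returns an (n_xi, n_eta, 2) grid."""
        n_xi = bottom.shape[0]
        grid = np.zeros((n_xi, n_eta, 2))
        for i in range(n_xi):
            tangent = top[i] - bottom[i]         # straight-line tangents; real codes shape these
            for j, s in enumerate(np.linspace(0.0, 1.0, n_eta)):
                grid[i, j] = hermite_blend(bottom[i], tangent, top[i], tangent, s)
        return grid

    # example: a flat bottom boundary and a gently curved top boundary
    xi = np.linspace(0.0, 1.0, 11)
    bottom = np.column_stack([xi, np.zeros_like(xi)])
    top = np.column_stack([xi, 1.0 + 0.2*np.sin(np.pi*xi)])
    grid = two_boundary_grid(bottom, top, 21)

Replacing the uniform values of s with a stretched distribution plays the role of the control functions described above, clustering points toward one boundary without changing the interpolation itself.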
ERIC Educational Resources Information Center
Lee, Young-Jin
2011-01-01
This study investigates whether a visual programming environment called Etoys could enable teachers to create software applications meeting their own instructional needs. Twenty-four teachers who participated in the study successfully developed their own educational computer programs in the educational technology course employing cognitive…
Constructing linkage maps in the genomics era with MapDisto 2.0.
Heffelfinger, Christopher; Fragoso, Christopher A; Lorieux, Mathias
2017-07-15
Genotyping by sequencing (GBS) generates datasets that are challenging to handle by current genetic mapping software with graphical interface. Geneticists need new user-friendly computer programs that can analyze GBS data on desktop computers. This requires improvements in computation efficiency, both in terms of speed and use of random-access memory (RAM). MapDisto v.2.0 is a user-friendly computer program for construction of genetic linkage maps. It includes several new major features: (i) handling of very large genotyping datasets like the ones generated by GBS; (ii) direct importation and conversion of Variant Call Format (VCF) files; (iii) detection of linkage, i.e. construction of linkage groups in case of segregation distortion; (iv) data imputation on VCF files using a new approach, called LB-Impute. Features i to iv operate through inclusion of new Java modules that are used transparently by MapDisto; (v) QTL detection via a new R/qtl graphical interface. The program is available free of charge at mapdisto.free.fr.
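As background to the linkage-detection step, the sketch below shows the textbook two-point statistic such programs compute for a backcross population: an estimated recombination fraction and the corresponding LOD score against the null of free recombination. It is an illustration of the statistic only, written in Python with made-up genotype codings, not MapDisto or LB-Impute code.

    import math

    def recombination_fraction(geno_a, geno_b):
        """Two-point estimate for a backcross: fraction of individuals recombinant between markers."""
        pairs = [(a, b) for a, b in zip(geno_a, geno_b) if a is not None and b is not None]
        n = len(pairs)
        recomb = sum(1 for a, b in pairs if a != b)
        return recomb / n, recomb, n

    def lod_score(theta, recomb, n):
        """LOD of linkage at recombination fraction theta against free recombination (theta = 0.5)."""
        theta = max(theta, 1e-6)                 # avoid log(0) when markers are fully linked
        linked = recomb * math.log10(theta) + (n - recomb) * math.log10(1.0 - theta)
        unlinked = n * math.log10(0.5)
        return linked - unlinked

    # genotype calls coded 0/1 for ten backcross individuals (invented data)
    a = [0, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    b = [0, 0, 1, 0, 0, 1, 0, 1, 1, 1]
    theta, r, n = recombination_fraction(a, b)
    print(theta, lod_score(theta, r, n))         # 0.2 and roughly 0.84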
NASA Technical Reports Server (NTRS)
1987-01-01
Philip Morris research center scientists use a computer program called CECTRP, for Chemical Equilibrium Composition and Transport Properties, to gain insight into the behavior of atoms as they progress along the reaction pathway. Use of the program lets the scientist accurately predict the behavior of a given molecule or group of molecules. Computer generated data must be checked by laboratory experiment, but the use of CECTRP saves the researchers hundreds of hours of laboratory time since experiments must run only to validate the computer's prediction. Philip Morris estimates that had CECTRP not been available, at least two man years would have been required to develop a program to perform similar free energy calculations.
Multimedia CALLware: The Developer's Responsibility.
ERIC Educational Resources Information Center
Dodigovic, Marina
The early computer-assisted-language-learning (CALL) programs were silent and mostly limited to screen or printer supported written text as the prevailing communication resource. The advent of powerful graphics, sound and video combined with AI-based parsers and sound recognition devices gradually turned the computer into a rather anthropomorphic…
NASA Technical Reports Server (NTRS)
Charlesworth, Arthur
1990-01-01
The nondeterministic divide partitions a vector into two non-empty slices by allowing the point of division to be chosen nondeterministically. Support for high-level divide-and-conquer programming provided by the nondeterministic divide is investigated. A diva algorithm is a recursive divide-and-conquer sequential algorithm on one or more vectors of the same range, whose division point for a new pair of recursive calls is chosen nondeterministically before any computation is performed and whose recursive calls are made immediately after the choice of division point; also, access to vector components is only permitted during activations in which the vector parameters have unit length. The notion of diva algorithm is formulated precisely as a diva call, a restricted call on a sequential procedure. Diva calls are proven to be intimately related to associativity. Numerous applications of diva calls are given and strategies are described for translating a diva call into code for a variety of parallel computers. Thus diva algorithms separate logical correctness concerns from implementation concerns.
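A minimal sketch of the idea, in Python rather than the paper's notation: the division point of each recursive call is chosen nondeterministically before any computation, and the associativity of the combining operation guarantees that every admissible choice yields the same result. The function below is my own illustration, not one of the paper's examples.

    import random

    def diva_sum(v, lo, hi):
        """Sum v[lo:hi] with the division point chosen nondeterministically before any computation.

        Vector components are read only in activations where the slice has unit length,
        mirroring the restriction placed on diva algorithms.
        """
        if hi - lo == 1:
            return v[lo]
        mid = random.randint(lo + 1, hi - 1)     # any split into two non-empty slices is allowed
        return diva_sum(v, lo, mid) + diva_sum(v, mid, hi)

    v = list(range(1, 101))
    # the split points differ from run to run, but associativity of + fixes the answer
    assert all(diva_sum(v, 0, len(v)) == 5050 for _ in range(5))

Because the two recursive calls touch disjoint slices and are combined by an associative operation, a translator is free to evaluate them concurrently, which is what makes translation to code for parallel computers possible.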
The Adam language: Ada extended with support for multiway activities
NASA Technical Reports Server (NTRS)
Charlesworth, Arthur
1993-01-01
The Adam language is an extension of Ada that supports multiway activities, which are cooperative activities involving two or more processes. This support is provided by three new constructs: diva procedures, meet statements, and multiway accept statements. Diva procedures are recursive generic procedures having a particular restrictive syntax that facilitates translation for parallel computers. Meet statements and multiway accept statements provide two ways to express a multiway rendezvous, which is an n-way rendezvous generalizing Ada's 2-way rendezvous. While meet statements tend to have simpler rules than multiway accept statements, the latter approach is a more straightforward extension of Ada. The only nonnull statements permitted within meet statements and multiway accept statements are calls on instantiated diva procedures. A call on an instantiated diva procedure is also permitted outside a multiway rendezvous; thus sequential Adam programs using diva procedures can be written. Adam programs are translated into Ada programs appropriate for use on parallel computers.
Phase Calibration for the Block 1 VLBI System
NASA Technical Reports Server (NTRS)
Roth, M. G.; Runge, T. F.
1983-01-01
Very Long Baseline Interferometry (VLBI) in the DSN provides support for spacecraft navigation, Earth orientation measurements, and synchronization of network time and frequency standards. An improved method for calibrating instrumental phase shifts has recently been implemented as a computer program in the Block 1 system. The new calibration program, called PRECAL, performs calibrations over intervals as small as 0.4 seconds and greatly reduces the amount of computer processing required to perform phase calibration.
NASA Technical Reports Server (NTRS)
1979-01-01
Eastman Kodak Company, Rochester, New York is a broad-based firm which produces photographic apparatus and supplies, fibers, chemicals and vitamin concentrates. Much of the company's research and development effort is devoted to photographic science and imaging technology, including laser technology. Eastman Kodak is using a COSMIC computer program called LACOMA in the analysis of laser optical systems and camera design studies. The company reports that use of the program has provided development time savings and reduced computer service fees.
Circus: A Replicated Procedure Call Facility
1984-08-01
Computer Science Laboratory, Xerox PARC, July 1982. [24] Bruce Jay Nelson. Remote Procedure Call. Ph.D. dissertation, Computer Science Department...Ph.D. dissertation, Computer Science Division, University of California, Berkeley, Xerox PARC report number CSIF 82-7, December 1982. [30]...Tandem Computers Inc. GUARDIAN Operating System Programming Manual, Volumes 1 and 2. Cupertino, California, 1982. [31] R. H. Thomas. A majority
[AERA. Dream machines and computing practices at the Mathematical Center].
Alberts, Gerard; De Beer, Huub T
2008-01-01
Dream machines may be just as effective as the ones materialised. Their symbolic thrust can be quite powerful. The Amsterdam 'Mathematisch Centrum' (Mathematical Center), founded February 11, 1946, created a Computing Department in an effort to realise its goal of serving society. When Aad van Wijngaarden was appointed as head of the Computing Department, however, he claimed space for scientific research and computer construction, next to computing as a service. Still, the computing service following the five stage style of Hartree's numerical analysis remained a dominant characteristic of the work of the Computing Department. The high level of ambition held by Aad van Wijngaarden led to ever-renewed projections of big automatic computers, symbolised by the never-built AERA. Even a machine that was actually constructed, the ARRA, which followed A.D. Booth's design of the ARC, never made it into real operation. It did serve Van Wijngaarden to bluff his way into the computer age by midsummer 1952. Not until January 1954 did the computing department have a working stored program computer, which for reasons of policy went under the same name: ARRA. After just one other machine, the ARMAC, had been produced, a separate company, Electrologica, was set up for the manufacture of computers, which produced the rather successful X1 computer. The combination of ambition and absence of a working machine led to a high level of work on programming, way beyond the usual ideas of libraries of subroutines. Edsger W. Dijkstra in particular led the way to an emphasis on the duties of the programmer within the pattern of numerical analysis. Programs generating programs, known elsewhere as autocoding systems, were at the 'Mathematisch Centrum' called 'superprograms'. Practical examples were usually called a 'complex', in Dutch, where in English one might say 'system'. Historically, this is where software begins. Dekker's matrix complex, Dijkstra's interrupt system, Dijkstra and Zonneveld's ALGOL compiler--which for housekeeping contained 'the complex'--were actual examples of such superprograms. In 1960 this compiler gave the Mathematical Center a leading edge in the early development of software.
NASA Technical Reports Server (NTRS)
Reznick, Steve
1988-01-01
Transonic Euler/Navier-Stokes computations are accomplished for wing-body flow fields using a computer program called Transonic Navier-Stokes (TNS). The wing-body grids are generated using a program called ZONER, which subdivides a coarse grid about a fighter-like aircraft configuration into smaller zones, which are tailored to local grid requirements. These zones can be either finely clustered for capture of viscous effects, or coarsely clustered for inviscid portions of the flow field. Different equation sets may be solved in the different zone types. This modular approach also affords the opportunity to modify a local region of the grid without recomputing the global grid. This capability speeds up the design optimization process when quick modifications to the geometry definition are desired. The solution algorithm embodied in TNS is implicit, and is capable of capturing pressure gradients associated with shocks. The algebraic turbulence model employed has proven adequate for viscous interactions with moderate separation. Results confirm that the TNS program can successfully be used to simulate transonic viscous flows about complicated 3-D geometries.
Exploring Cloud Computing for Distance Learning
ERIC Educational Resources Information Center
He, Wu; Cernusca, Dan; Abdous, M'hammed
2011-01-01
The use of distance courses in learning is growing exponentially. To better support faculty and students for teaching and learning, distance learning programs need to constantly innovate and optimize their IT infrastructures. The new IT paradigm called "cloud computing" has the potential to transform the way that IT resources are utilized and…
Hamlet on the Macintosh: An Experimental Seminar That Worked.
ERIC Educational Resources Information Center
Strange, William C.
1987-01-01
Describes experimental college Shakespeare seminar that used Macintosh computers and software called ELIZA and ADVENTURE to develop character dialogs and adventure games based on Hamlet's characters and plots. Programming languages are examined, particularly their relationship to metaphor, and the use of computers in humanities is discussed. (LRW)
Apollo experience report: Real-time auxiliary computing facility development
NASA Technical Reports Server (NTRS)
Allday, C. E.
1972-01-01
The Apollo real time auxiliary computing function and facility were an extension of the facility used during the Gemini Program. The facility was expanded to include support of all areas of flight control, and computer programs were developed for mission and mission-simulation support. The scope of the function was expanded to include prime mission support functions in addition to engineering evaluations, and the facility became a mandatory mission support facility. The facility functioned as a full scale mission support activity until after the first manned lunar landing mission. After the Apollo 11 mission, the function and facility gradually reverted to a nonmandatory, offline, on-call operation because the real time program flexibility was increased and verified sufficiently to eliminate the need for redundant computations. The evaluation of the facility and function and recommendations for future programs are discussed in this report.
CALL Vocabulary Learning in Japanese: Does Romaji Help Beginners Learn More Words?
ERIC Educational Resources Information Center
Okuyama, Yoshiko
2007-01-01
This study investigated the effects of using Romanized spellings on beginner-level Japanese vocabulary learning. Sixty-one first-semester students at two universities in Arizona were both taught and tested on 40 Japanese content words in a computer-assisted language learning (CALL) program. The primary goal of the study was to examine whether the…
"Extreme Programming" in a Bioinformatics Class
ERIC Educational Resources Information Center
Kelley, Scott; Alger, Christianna; Deutschman, Douglas
2009-01-01
The importance of Bioinformatics tools and methodology in modern biological research underscores the need for robust and effective courses at the college level. This paper describes such a course designed on the principles of cooperative learning based on a computer software industry production model called "Extreme Programming" (EP).…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goltz, G.; Weiner, H.
A computer program has been developed for designing and analyzing the performance of solar array/battery power systems for the U. S. Coast Guard Navigational Aids. This program is called the Design Synthesis/Performance Analysis (DSPA) Computer Program. The basic function of the Design Synthesis portion of the DSPA program is to evaluate functional and economic criteria to provide specifications for viable solar array/battery power systems. The basic function of the Performance Analysis portion of the DSPA program is to simulate the operation of solar array/battery power systems under specific loads and environmental conditions. This document provides all the information necessary to access the DSPA programs, to input required data, and to generate appropriate Design Synthesis or Performance Analysis output.
NASA Technical Reports Server (NTRS)
Yanosy, James L.
1988-01-01
This manual describes how to use the Emulation Simulation Computer Model (ESCM). Based on G189A, ESCM computes the transient performance of a Space Station atmospheric revitalization subsystem (ARS) with CO2 removal provided by a solid amine water desorbed subsystem called SAWD. Many performance parameters are computed, some of which are cabin CO2 partial pressure, relative humidity, temperature, O2 partial pressure, and dew point. The program allows the user to simulate various possible combinations of man loading, metabolic profiles, cabin volumes, and certain hypothesized failures that could occur.
Applications of automatic differentiation in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Carle, A.; Bischof, C.; Haigler, Kara J.; Newman, Perry A.
1994-01-01
Automatic differentiation (AD) is a powerful computational method that provides for computing exact sensitivity derivatives (SD) from existing computer programs for multidisciplinary design optimization (MDO) or in sensitivity analysis. A pre-compiler AD tool for FORTRAN programs called ADIFOR has been developed. The ADIFOR tool has been easily and quickly applied by NASA Langley researchers to assess the feasibility and computational impact of AD in MDO with several different FORTRAN programs. These include a state-of-the-art three dimensional multigrid Navier-Stokes flow solver for wings or aircraft configurations in transonic turbulent flow. With ADIFOR the user specifies sets of independent and dependent variables within an existing computer code. ADIFOR then traces the dependency path throughout the code, applies the chain rule to formulate derivative expressions, and generates new code to compute the required SD matrix. The resulting codes have been verified to compute exact non-geometric and geometric SD for a variety of cases, in less time than is required to compute the SD matrix using centered divided differences.
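ADIFOR performs source transformation on Fortran; the Python fragment below is only a hand-written illustration of the chain-rule bookkeeping involved, propagating a derivative through arithmetic with a tiny dual-number class and checking it against a centered divided difference of the same function. All names are illustrative, not ADIFOR's.

    import math

    class Dual:
        """A value paired with its derivative; arithmetic applies the chain rule exactly."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.der + other.der)
        __radd__ = __add__
        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)
        __rmul__ = __mul__

    def dual_sin(x):
        return Dual(math.sin(x.val), math.cos(x.val) * x.der)

    def f(x):
        """Stand-in for 'existing code' whose sensitivity derivative we want."""
        if isinstance(x, Dual):
            return 3.0 * x * x + dual_sin(x)
        return 3.0 * x * x + math.sin(x)

    x0 = 1.3
    exact = f(Dual(x0, 1.0)).der                         # exact derivative, 6*x0 + cos(x0)
    h = 1e-5
    divided = (f(x0 + h) - f(x0 - h)) / (2.0 * h)        # centered divided difference
    print(exact, divided)                                # agree to many digits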
NASA Technical Reports Server (NTRS)
1983-01-01
Drones, subscale vehicles like the Firebees, and full scale retired military aircraft are used to test air defense missile systems. The DFCS (Drone Formation Control System) computer, developed by IBM (International Business Machines) Federal Systems Division, can track ten drones at once. A program called ORACLS is used to generate software to track and control Drones. It was originally developed by Langley and supplied by COSMIC (Computer Software Management and Information Center). The program saved the company both time and money.
Program Synthesizes UML Sequence Diagrams
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Osborne, Richard N.
2006-01-01
A computer program called "Rational Sequence" generates Universal Modeling Language (UML) sequence diagrams of a target Java program running on a Java virtual machine (JVM). Rational Sequence thereby performs a reverse engineering function that aids in the design documentation of the target Java program. Whereas previously, the construction of sequence diagrams was a tedious manual process, Rational Sequence generates UML sequence diagrams automatically from the running Java code.
Program to compute the positions of the aircraft and of the aircraft sensor footprints
NASA Technical Reports Server (NTRS)
Paris, J. F. (Principal Investigator)
1982-01-01
The positions of the ground track of the aircraft and of the aircraft sensor footprints, in particular the metric camera and the radar scatterometer on the C-130 aircraft, are estimated by a program called ACTRK. The program uses the altitude, speed, and attitude information contained in the radar scatterometer data files to calculate the positions. The ACTRK program is documented.
A spacecraft computer repairable via command.
NASA Technical Reports Server (NTRS)
Fimmel, R. O.; Baker, T. E.
1971-01-01
The MULTIPAC is a central data system developed for deep-space probes with the distinctive feature that it may be repaired during flight via command and telemetry links by reprogramming around the failed unit. The computer organization uses pools of identical modules which the program organizes into one or more computers called processors. The interaction of these modules is dynamically controlled by the program rather than hardware. In the event of a failure, new programs are entered which reorganize the central data system with a somewhat reduced total processing capability aboard the spacecraft. Emphasis is placed on the evolution of the system architecture and the final overall system design rather than the specific logic design.
Plasmid mapping computer program.
Nolan, G P; Maina, C V; Szalay, A A
1984-01-01
Three new computer algorithms are described which rapidly order the restriction fragments of a plasmid DNA which has been cleaved with two restriction endonucleases in single and double digestions. Two of the algorithms are contained within a single computer program (called MPCIRC). The Rule-Oriented algorithm constructs all logical circular map solutions within sixty seconds (14 double-digestion fragments) when used in conjunction with the Permutation method. The program is written in Apple Pascal and runs on an Apple II Plus Microcomputer with 64K of memory. A third algorithm is described which rapidly maps double digests and uses the above two algorithms as adducts. Modifications of the algorithms for linear mapping are also presented. PMID:6320105
Reduce dimension costs by using WALNUT
David G. Martens; David G. Martens
1986-01-01
A computer program called WALNUT is described that determines the least-cost combination of lumber grades required to produce a given cutting order of furniture dimension parts. If the least-cost mix is not available, WALNUT can be used to determine the next best alternative. The steps involved in using the program are described.
Kmonodium, a Program for the Numerical Solution of the One-Dimensional Schrodinger Equation
ERIC Educational Resources Information Center
Angeli, Celestino; Borini, Stefano; Cimiraglia, Renzo
2005-01-01
A very simple strategy for the solution of the Schrodinger equation of a particle moving in one dimension subjected to a generic potential is presented. This strategy is implemented in a computer program called Kmonodium, which is free and distributed under the General Public License (GPL).
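Kmonodium itself is the authors' GPL program; the fragment below is only a generic finite-difference rendition of the same task in Python, assuming atomic-style units and a harmonic potential, for which the lowest eigenvalues should come out near 0.5, 1.5, 2.5, and 3.5.

    import numpy as np

    # particle in a 1-D harmonic well, hbar = m = omega = 1 (assumed units)
    n, L = 1000, 20.0
    x = np.linspace(-L/2, L/2, n)
    dx = x[1] - x[0]
    V = 0.5 * x**2

    # Hamiltonian matrix: central-difference kinetic energy plus diagonal potential
    diag = 1.0/dx**2 + V
    off = -0.5/dx**2 * np.ones(n - 1)
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

    energies = np.linalg.eigvalsh(H)[:4]
    print(energies)                      # close to [0.5, 1.5, 2.5, 3.5]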
Piette, John D; Mendoza-Avelares, Milton O; Ganser, Martha; Mohamed, Muhima; Marinec, Nicolle; Krishnan, Sheila
2011-06-01
Although interactive voice response (IVR) calls can be an effective tool for chronic disease management, many regions of the world lack the infrastructure to provide these services. This study evaluated the feasibility and potential impact of an IVR program using a cloud-computing model to improve diabetes management in Honduras. A single-group, pre-post study was conducted between June and August 2010. The telecommunications infrastructure was maintained on a U.S. server, and calls were directed to patients' cell phones using VoIP. Eighty-five diabetes patients in Honduras received weekly IVR disease management calls for 6 weeks, with automated follow-up e-mails to clinicians, and voicemail reports to family caregivers. Patients completed interviews at enrollment and a 6-week follow-up. Other measures included patients' glycemic control (HbA1c) and data from the IVR calling system. A total of 53% of participants completed at least half of their IVR calls and 23% of participants completed 80% or more. Higher baseline blood pressures, greater diabetes burden, greater distance from the clinic, and better medication adherence were related to higher call completion rates. Nearly all participants (98%) reported that because of the program, they improved in aspects of diabetes management such as glycemic control (56%) or foot care (89%). Mean HbA1c's decreased from 10.0% at baseline to 8.9% at follow-up (p<0.01). Most participants (92%) said that if the service were available in their clinic they would use it again. Cloud computing is a feasible strategy for providing IVR services globally. IVR self-care support may improve self-care and glycemic control for patients in underdeveloped countries. Published by Elsevier Inc.
Piette, John D.; Mendoza-Avelares, Milton O.; Ganser, Martha; Mohamed, Muhima; Marinec, Nicolle; Krishnan, Sheila
2013-01-01
Background Although interactive voice response (IVR) calls can be an effective tool for chronic disease management, many regions of the world lack the infrastructure to provide these services. Objective This study evaluated the feasibility and potential impact of an IVR program using a cloud-computing model to improve diabetes management in Honduras. Methods A single group, pre-post study was conducted between June and August 2010. The telecommunications infrastructure was maintained on a U.S. server, and calls were directed to patients’ cell phones using VoIP. Eighty-five diabetes patients in Honduras received weekly IVR disease management calls for six weeks, with automated follow-up emails to clinicians, and voicemail reports to family caregivers. Patients completed interviews at enrollment and a six week follow-up. Other measures included patients’ glycemic control (A1c) and data from the IVR calling system. Results 55% of participants completed the majority of their IVR calls and 33% completed 80% or more. Higher baseline blood pressures, greater diabetes burden, greater distance from the clinic, and better adherence were related to higher call completion rates. Nearly all participants (98%) reported that because of the program, they improved in aspects of diabetes management such as glycemic control (56%) or foot care (89%). Mean A1c’s decreased from 10.0% at baseline to 8.9% at follow-up (p<.01). Most participants (92%) said that if the service were available in their clinic they would use it again. Conclusions Cloud computing is a feasible strategy for providing IVR services globally. IVR self-care support may improve self-care and glycemic control for patients in under-developed countries. PMID:21565655
MPI_XSTAR: MPI-based Parallelization of the XSTAR Photoionization Program
NASA Astrophysics Data System (ADS)
Danehkar, Ashkbiz; Nowak, Michael A.; Lee, Julia C.; Smith, Randall K.
2018-02-01
We describe a program for the parallel implementation of multiple runs of XSTAR, a photoionization code that is used to predict the physical properties of an ionized gas from its emission and/or absorption lines. The parallelization program, called MPI_XSTAR, has been developed and implemented in the C++ language by using the Message Passing Interface (MPI) protocol, a conventional standard of parallel computing. We have benchmarked parallel multiprocessing executions of XSTAR, using MPI_XSTAR, against a serial execution of XSTAR, in terms of the parallelization speedup and the computing resource efficiency. Our experience indicates that the parallel execution runs significantly faster than the serial execution; however, the efficiency in terms of computing resource usage decreases as the number of processors used in the parallel computation increases.
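MPI_XSTAR is written in C++; the pattern it embodies, farming many independent serial runs across MPI ranks, can be sketched generically with mpi4py as below. The parameter grid and the run_one placeholder are assumptions for illustration, not the real XSTAR invocation.

    from mpi4py import MPI

    def run_one(params):
        """Placeholder for one serial run; a real driver would launch the external code here."""
        return ("done", params)

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # hypothetical grid of model parameters to cover with independent runs
    grid = [{"logxi": 0.1 * i} for i in range(64)]

    # round-robin assignment: rank r takes items r, r + size, r + 2*size, ...
    my_results = [run_one(p) for p in grid[rank::size]]

    # collect everything on rank 0 for post-processing
    all_results = comm.gather(my_results, root=0)
    if rank == 0:
        flat = [item for chunk in all_results for item in chunk]
        print(len(flat), "runs completed on", size, "ranks")

Because the runs are independent, the wall time on each rank is set by its slowest jobs, so load imbalance is one reason efficiency can drop as more processors are added.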
ERIC Educational Resources Information Center
Marshall, Neil; Buteau, Chantal
2014-01-01
As part of their undergraduate mathematics curriculum, students at Brock University learn to create and use computer-based tools with dynamic, visual interfaces, called Exploratory Objects, developed for the purpose of conducting pure or applied mathematical investigations. A student's Development Process Model of creating and using an Exploratory…
NASA Technical Reports Server (NTRS)
Deffenbaugh, F. D.; Vitz, J. F.
1979-01-01
The user's manual for the Discrete Vortex Cross flow Evaluator (DIVORCE) computer program is presented. DIVORCE was developed in FORTRAN 4 for the CDC 6600 and CDC 7600 machines. Optimal calls to a NASA vector subroutine package are provided for use with the CDC 7600.
Computation of high Reynolds number internal/external flows
NASA Technical Reports Server (NTRS)
Cline, M. C.; Wilmoth, R. G.
1981-01-01
A general, user oriented computer program, called VNAP2, was developed to calculate high Reynolds number, internal/ external flows. The VNAP2 program solves the two dimensional, time dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack Scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.
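The explicit MacCormack scheme mentioned here is a standard predictor-corrector; as a toy illustration only (one-dimensional linear advection on a periodic grid, not the Navier-Stokes system VNAP2 solves, with made-up grid and time-step values), it looks like this in Python:

    import numpy as np

    # 1-D linear advection u_t + a u_x = 0 on a periodic grid
    n, a, cfl = 200, 1.0, 0.8
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    dx = x[1] - x[0]
    dt = cfl * dx / a
    u = np.exp(-200.0 * (x - 0.3)**2)            # initial Gaussian pulse

    for _ in range(100):
        # predictor: forward difference
        u_star = u - a*dt/dx * (np.roll(u, -1) - u)
        # corrector: backward difference on the predicted values, then average with the old level
        u = 0.5 * (u + u_star - a*dt/dx * (u_star - np.roll(u_star, 1)))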
Computation of high Reynolds number internal/external flows
NASA Technical Reports Server (NTRS)
Cline, M. C.; Wilmoth, R. G.
1981-01-01
A general, user oriented computer program, called VNAP2, developed to calculate high Reynolds number internal/external flows is described. The program solves the two dimensional, time dependent Navier-Stokes equations. Turbulence is modeled with either a mixing length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.
Closed-loop bird-computer interactions: a new method to study the role of bird calls.
Lerch, Alexandre; Roy, Pierre; Pachet, François; Nagle, Laurent
2011-03-01
In the field of songbird research, many studies have shown the role of male songs in territorial defense and courtship. Calling, another important acoustic communication signal, has received much less attention, however, because calls are assumed to contain less information about the emitter than songs do. Birdcall repertoire is diverse, and the role of calls has been found to be significant in the area of social interaction, for example, in pair, family, and group cohesion. However, standard methods for studying calls do not allow precise and systematic study of their role in communication. We propose herein a new method to study bird vocal interaction. A closed-loop computer system interacts with canaries, Serinus canaria, by (1) automatically classifying two basic types of canary vocalization, single versus repeated calls, as they are produced by the subject, and (2) responding with a preprogrammed call type recorded from another bird. This computerized animal-machine interaction requires no human interference. We show first that the birds do engage in sustained interactions with the system, by studying the rate of single and repeated calls for various programmed protocols. We then show that female canaries differentially use single and repeated calls. First, they produce significantly more single than repeated calls, and second, the rate of single calls is associated with the context in which they interact, whereas repeated calls are context independent. This experiment is the first illustration of how closed-loop bird-computer interaction can be used productively to study social relationships. © Springer-Verlag 2010
NASA Technical Reports Server (NTRS)
Carter, J. E.
1977-01-01
A computer program called STAYLAM is presented for the computation of the compressible laminar boundary-layer flow over a yawed infinite wing including distributed suction. This program is restricted to the transonic speed range or less due to the approximate treatment of the compressibility effects. The prescribed suction distribution is permitted to change discontinuously along the chord measured perpendicular to the wing leading edge. Estimates of transition are made by considering leading edge contamination, cross flow instability, and instability of the Tollmien-Schlichting type. A program listing is given in addition to user instructions and a sample case.
Combining Thermal And Structural Analyses
NASA Technical Reports Server (NTRS)
Winegar, Steven R.
1990-01-01
Computer code makes programs compatible so stresses and deformations can be calculated. Paper describes computer code combining thermal analysis with structural analysis. Called SNIP (for SINDA-NASTRAN Interfacing Program), code provides interface between finite-difference thermal model of system and finite-element structural model when no node-to-element correlation between models. Eliminates much manual work in converting temperature results of SINDA (Systems Improved Numerical Differencing Analyzer) program into thermal loads for NASTRAN (NASA Structural Analysis) program. Used to analyze concentrating reflectors for solar generation of electric power. Large thermal and structural models needed to predict distortion of surface shapes, and SNIP saves considerable time and effort in combining models.
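SNIP's actual mapping rules are not shown in the brief; the Python sketch below only illustrates the kind of bridging step involved, giving each structural element a temperature taken from the nearest thermal-model node when no node-to-element correlation exists. All coordinates and temperatures are invented example data.

    import numpy as np

    def map_temperatures(elem_centroids, node_coords, node_temps):
        """Assign each structural element the temperature of its nearest thermal-model node."""
        temps = np.empty(len(elem_centroids))
        for k, c in enumerate(elem_centroids):
            d2 = np.sum((node_coords - c)**2, axis=1)
            temps[k] = node_temps[np.argmin(d2)]
        return temps

    # invented example: a coarse 4-node thermal model and a finer structural mesh
    node_coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    node_temps = np.array([300.0, 320.0, 310.0, 340.0])        # kelvin
    elem_centroids = np.random.rand(50, 2)
    elem_temps = map_temperatures(elem_centroids, node_coords, node_temps)
    print(elem_temps[:5])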
Handheld Computer Use in U.S. Family Practice Residency Programs
Criswell, Dan F.; Parchman, Michael L.
2002-01-01
Objective: The purpose of the study was to evaluate the uses of handheld computers (also called personal digital assistants, or PDAs) in family practice residency programs in the United States. Study Design: In November 2000, the authors mailed a questionnaire to the program directors of all American Academy of Family Physicians (AAFP) and American College of Osteopathic Family Practice (ACOFP) residency programs in the United States. Measurements: Data and patterns of the use and non-use of handheld computers were identified. Results: Approximately 50 percent (306 of 610) of the programs responded to the survey. Two thirds of the programs reported that handheld computers were used in their residencies, and an additional 14 percent had plans for implementation within 24 months. Both the Palm and the Windows CE operating systems were used, with the Palm operating system the most common. Military programs had the highest rate of use (8 of 10 programs, 80 percent), and osteopathic programs had the lowest (23 of 55 programs, 42 percent). Of programs that reported handheld computer use, 45 percent had required handheld computer applications that are used uniformly by all users. Funding for handheld computers and related applications was non-budgeted in 76percent of the programs in which handheld computers were used. In programs providing a budget for handheld computers, the average annual budget per user was $461.58. Interested faculty or residents, rather than computer information services personnel, performed upkeep and maintenance of handheld computers in 72 percent of the programs in which the computers are used. In addition to the installed calendar, memo pad, and address book, the most common clinical uses of handheld computers in the programs were as medication reference tools, electronic textbooks, and clinical computational or calculator-type programs. Conclusions: Handheld computers are widely used in family practice residency programs in the United States. Although handheld computers were designed as electronic organizers, in family practice residencies they are used as medication reference tools, electronic textbooks, and clinical computational programs and to track activities that were previously associated with desktop database applications. PMID:11751806
Handheld computer use in U.S. family practice residency programs.
Criswell, Dan F; Parchman, Michael L
2002-01-01
The purpose of the study was to evaluate the uses of handheld computers (also called personal digital assistants, or PDAs) in family practice residency programs in the United States. In November 2000, the authors mailed a questionnaire to the program directors of all American Academy of Family Physicians (AAFP) and American College of Osteopathic Family Practice (ACOFP) residency programs in the United States. Data and patterns of the use and non-use of handheld computers were identified. Approximately 50 percent (306 of 610) of the programs responded to the survey. Two thirds of the programs reported that handheld computers were used in their residencies, and an additional 14 percent had plans for implementation within 24 months. Both the Palm and the Windows CE operating systems were used, with the Palm operating system the most common. Military programs had the highest rate of use (8 of 10 programs, 80 percent), and osteopathic programs had the lowest (23 of 55 programs, 42 percent). Of programs that reported handheld computer use, 45 percent had required handheld computer applications that are used uniformly by all users. Funding for handheld computers and related applications was non-budgeted in 76percent of the programs in which handheld computers were used. In programs providing a budget for handheld computers, the average annual budget per user was 461.58 dollars. Interested faculty or residents, rather than computer information services personnel, performed upkeep and maintenance of handheld computers in 72 percent of the programs in which the computers are used. In addition to the installed calendar, memo pad, and address book, the most common clinical uses of handheld computers in the programs were as medication reference tools, electronic textbooks, and clinical computational or calculator-type programs. Handheld computers are widely used in family practice residency programs in the United States. Although handheld computers were designed as electronic organizers, in family practice residencies they are used as medication reference tools, electronic textbooks, and clinical computational programs and to track activities that were previously associated with desktop database applications.
Integrating CALL and Genre Theory: A Proposal to Increase Students' Literacy
ERIC Educational Resources Information Center
Lirola, Maria Martinez; Cuevas, Maria Tabuenca
2008-01-01
The use of computer programs that can be used to correct and assess students' written work in the EFL classroom has become more commonplace within the last decade. This paper discusses the role of CALL in the process of data collection, standardisation of assessment criteria and compilation of the number of errors in the areas of grammar learning…
A Public + Private Mashup for Computer Science Education
ERIC Educational Resources Information Center
Wang, Kevin
2013-01-01
Getting called into the boss's office isn't always fun. Memories of trips to the school principal's office flash through one's mind. But the day last year that the author was called in to meet with their division vice president turned out to be a very good day. Executives at his company, Microsoft, had noticed the program he created in his spare…
A Model for High Frequency Radar Auroral Clutter
1980-03-01
...region is sometimes referred to as the "polar cavity." 2. RADIO PROPAGATION CONSIDERATIONS. An essential element in computing the incidence and...called Haselgrove equations. This is accomplished numerically by use of a computer program originally developed by Jones and later modified
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. House Committee on Science, Space and Technology.
This document presents witness testimony and supplemental materials from a Congressional hearing called to evaluate the progress of the High Performance Computing and Communications program in light of budget requests, to examine the appropriate role for the government in such a project, and to see demonstrations of the World Wide Web and related…
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability applicable to solving very large highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code which employs behavioral decomposition and complex fault-error handling models. This new capability is called MC-HARP, which efficiently solves reliability models with non-constant failure rates (Weibull). Common mode failure modeling is also a specialty.
Reinforcement learning for resource allocation in LEO satellite networks.
Usaha, Wipawee; Barria, Javier A
2007-06-01
In this paper, we develop and assess online decision-making algorithms for call admission and routing for low Earth orbit (LEO) satellite networks. It has been shown in a recent paper that, in a LEO satellite system, a semi-Markov decision process formulation of the call admission and routing problem can achieve better performance in terms of an average revenue function than existing routing methods. However, the conventional dynamic programming (DP) numerical solution becomes prohibitive as the problem size increases. In this paper, two solution methods based on reinforcement learning (RL) are proposed in order to circumvent the computational burden of DP. The first method is based on an actor-critic method with temporal-difference (TD) learning. The second method is based on a critic-only method, called optimistic TD learning. The algorithms enhance performance in terms of requirements in storage, computational complexity and computational time, and in terms of an overall long-term average revenue function that penalizes blocked calls. Numerical studies are carried out, and the results obtained show that the RL framework can achieve up to 56% higher average revenue over existing routing methods used in LEO satellite networks with reasonable storage and computational requirements.
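The paper's methods are actor-critic and optimistic TD algorithms for a semi-Markov formulation; as a much smaller illustration of the temporal-difference idea only, and not of the paper's algorithms, a tabular TD(0) evaluation loop on a toy chain (all states, rewards, and constants invented) looks like this in Python:

    # tabular TD(0) value estimation on a toy five-state chain
    n_states, alpha, gamma = 5, 0.1, 0.95
    V = [0.0] * n_states

    def step(s):
        """Toy environment: always advance one state; reward 1 on reaching the last state."""
        s_next = s + 1
        done = s_next == n_states - 1
        reward = 1.0 if done else 0.0
        return s_next, reward, done

    for _ in range(2000):
        s, done = 0, False
        while not done:
            s_next, r, done = step(s)
            target = r + (0.0 if done else gamma * V[s_next])
            V[s] += alpha * (target - V[s])          # move V(s) toward the bootstrapped target
            s = s_next

    print([round(v, 3) for v in V])                  # values grow toward the rewarded end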
One Dimensional Analysis of Inertially Confined Plasmas.
1982-03-01
...is dumped in the current cell. Subprogram ALPHA1 calls 14 other subroutines to complete its tasks. General program organization is seen in Figure 3 (General Program Organization of Subroutine ALPHA1). Subroutine HTFLX computes the energy transfer
Computing Lives And Reliabilities Of Turboprop Transmissions
NASA Technical Reports Server (NTRS)
Coy, J. J.; Savage, M.; Radil, K. C.; Lewicki, D. G.
1991-01-01
Computer program PSHFT calculates lifetimes of variety of aircraft transmissions. Consists of main program, series of subroutines applying to specific configurations, generic subroutines for analysis of properties of components, subroutines for analysis of system, and common block. Main program selects routines used in analysis and causes them to operate in desired sequence. Series of configuration-specific subroutines put in configuration data, perform force and life analyses for components (with help of generic component-property-analysis subroutines), fill property array, call up system-analysis routines, and finally print out results of analysis for system and components. Written in FORTRAN 77(IV).
Sizing of complex structure by the integration of several different optimal design algorithms
NASA Technical Reports Server (NTRS)
Sobieszczanski, J.
1974-01-01
Practical design of large-scale structures can be accomplished with the aid of the digital computer by bringing together in one computer program algorithms of nonlinear mathematical programing and optimality criteria with weight-strength and other so-called engineering methods. Applications of this approach to aviation structures are discussed with a detailed description of how the total problem of structural sizing can be broken down into subproblems for best utilization of each algorithm and for efficient organization of the program into iterative loops. Typical results are examined for a number of examples.
Rotordynamics on the PC: Transient Analysis With ARDS
NASA Technical Reports Server (NTRS)
Fleming, David P.
1997-01-01
Personal computers can now do many jobs that formerly required a large mainframe computer. An example is NASA Lewis Research Center's program Analysis of RotorDynamic Systems (ARDS), which uses the component mode synthesis method to analyze the dynamic motion of up to five rotating shafts. As originally written in the early 1980's, this program was considered large for the mainframe computers of the time. ARDS, which was written in Fortran 77, has been successfully ported to a 486 personal computer. Plots appear on the computer monitor via calls programmed for the original CALCOMP plotter; plots can also be output on a standard laser printer. The executable code, which uses the full array sizes of the mainframe version, easily fits on a high-density floppy disk. The program runs under DOS with an extended memory manager. In addition to transient analysis of blade loss, step turns, and base acceleration, with simulation of squeeze-film dampers and rubs, ARDS calculates natural frequencies and unbalance response.
Automatic computer subprogram selection from application program libraries
NASA Technical Reports Server (NTRS)
Drozdowski, J. M.
1972-01-01
The program ALTLIB (ALTernate LIBrary) which allows a user access to an alternate subprogram library with a minimum effort is discussed. The ALTLIB program selects subprograms from an alternate library file and merges them with the user's program load file. Only subprograms that are called for (directly or indirectly) by the user's programs and that are available on the alternate library file will be selected. ALTLIB eliminates the need for elaborate control-card manipulations to add subprograms from a subprogram file. ALTLIB returns to the user his binary file and the selected subprograms in correct order for a call to the loader. The user supplies the alternate library file. Subprogram requests which are not satisfied from the alternate library file will be satisfied at load time from the system library.
Introducing Computer Simulation into the High School: An Applied Mathematics Curriculum.
ERIC Educational Resources Information Center
Roberts, Nancy
1981-01-01
A programing language called DYNAMO, developed especially for writing simulation models, is promoted. Details of six self-teaching curriculum packages recently developed for simulation-oriented instruction are provided. (MP)
Ascent/descent ancillary data production user's guide
NASA Technical Reports Server (NTRS)
Brans, H. R.; Seacord, A. W., II; Ulmer, J. W.
1986-01-01
The Ascent/Descent Ancillary Data Product, also called the A/D BET because it contains a Best Estimate of the Trajectory (BET), is a collection of trajectory, attitude, and atmospheric related parameters computed for the ascent and descent phases of each Shuttle Mission. These computations are executed shortly after the event in a post-flight environment. A collection of several routines including some stand-alone routines constitute what is called the Ascent/Descent Ancillary Data Production Program. A User's Guide for that program is given. It is intended to provide the reader with all the information necessary to generate an Ascent or a Descent Ancillary Data Product. It includes descriptions of the input data and output data for each routine, and contains explicit instructions on how to run each routine. A description of the final output product is given.
“Kicking the Tires” of the energy balance routine within the CROPGRO crop growth models of DSSAT
USDA-ARS?s Scientific Manuscript database
Two decades ago a routine called ETPHOT was written to compute evaporation, transpiration, and photosynthesis in the CROPGRO crop simulation programs for grain legumes such as soybean. These programs are part of the DSSAT (Decision Support System of Agrotechnology Transfer), which has been widely us...
Inventorying national forest resources...for planning-programing-budgeting system
Miles R. Hill; Elliot L. Amidon
1968-01-01
New systems for analyzing resource management problems, such as Planning-Programing-Budgeting, will require automated procedures to collect and assemble resource inventory data. A computer-oriented system called Map Information Assembly and Display System developed for this purpose was tested on a National Forest in California. It provided information on eight forest...
An innovative approach to compensator design
NASA Technical Reports Server (NTRS)
Mitchell, J. R.; Mcdaniel, W. L., Jr.
1973-01-01
The computer-aided design of a compensator for a control system is considered from a frequency-domain point of view. The design technique developed is based on describing the open-loop frequency response by n discrete frequency points which result in n functions of the compensator coefficients. Several of these functions are chosen so that the system specifications are properly portrayed; then mathematical programming is used to improve all of those functions whose values fall below minimum standards. To do this, several definitions for measuring the performance of a system in the frequency domain are given, e.g., relative stability, relative attenuation, proper phasing, etc. Next, theorems which govern the number of compensator coefficients necessary to make improvements in a certain number of functions are proved. After this, a mathematical programming tool for aiding in the solution of the problem is developed. This tool is called the constraint improvement algorithm. Generalized gradients for the constraints are then derived for applying the constraint improvement algorithm. Finally, the necessary theory is incorporated in a computer program called CIP (Compensator Improvement Program). The practical usefulness of CIP is demonstrated by two large system examples.
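To make the constraint-improvement idea concrete, here is a hedged toy sketch in Python: a set of constraint functions of the compensator coefficients is evaluated, and the coefficients are nudged along the gradients of whichever functions fall below their minimum standards. The constraint functions, step size, and gradient routine are placeholders, not CIP's actual formulation.

# Hedged sketch of a constraint-improvement step; all functions and
# constants below are invented for illustration.
def improve(coeffs, constraints, minimums, grad, step=0.01, iterations=200):
    for _ in range(iterations):
        violated = [k for k, g in enumerate(constraints) if g(coeffs) < minimums[k]]
        if not violated:
            break
        for k in violated:                      # move along gradients of violated functions
            direction = grad(constraints[k], coeffs)
            coeffs = [c + step * d for c, d in zip(coeffs, direction)]
    return coeffs

def numerical_gradient(f, x, h=1e-6):
    base = f(x)
    return [(f(x[:i] + [x[i] + h] + x[i + 1:]) - base) / h for i in range(len(x))]

# Toy constraints: both placeholder "margins" must reach at least 1.0.
g1 = lambda c: c[0] + 0.5 * c[1]
g2 = lambda c: 2.0 * c[1] - c[0]
print(improve([0.0, 0.0], [g1, g2], [1.0, 1.0], numerical_gradient))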
Nakato, Ryuichiro; Itoh, Tahehiko; Shirahige, Katsuhiko
2013-07-01
Chromatin immunoprecipitation with high-throughput sequencing (ChIP-seq) can identify genomic regions that bind proteins involved in various chromosomal functions. Although the development of next-generation sequencers offers the technology needed to identify these protein-binding sites, the analysis can be computationally challenging because sequencing data sometimes consist of >100 million reads/sample. Herein, we describe a cost-effective and time-efficient protocol that is generally applicable to ChIP-seq analysis; this protocol uses a novel peak-calling program termed DROMPA to identify peaks and an additional program, parse2wig, to preprocess read-map files. This two-step procedure drastically reduces computational time and memory requirements compared with other programs. DROMPA enables the identification of protein localization sites in repetitive sequences and efficiently identifies both broad and sharp protein localization peaks. Specifically, DROMPA outputs a protein-binding profile map in pdf or png format, which can be easily manipulated by users who have a limited background in bioinformatics. © 2013 The Authors Genes to Cells © 2013 by the Molecular Biology Society of Japan and Wiley Publishing Asia Pty Ltd.
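For orientation only, the following Python fragment shows the general shape of a peak-calling step on binned read counts: bins whose ChIP/input enrichment exceeds a threshold are merged into contiguous intervals. The threshold and pseudocount are invented, and this is not DROMPA's algorithm.

# Hedged illustration of peak calling from binned read counts.
def call_peaks(chip, control, bin_size=100, min_fold=3.0, pseudocount=1.0):
    peaks, start = [], None
    for i, (c, bg) in enumerate(zip(chip, control)):
        enriched = (c + pseudocount) / (bg + pseudocount) >= min_fold
        if enriched and start is None:
            start = i * bin_size
        elif not enriched and start is not None:
            peaks.append((start, i * bin_size))
            start = None
    if start is not None:
        peaks.append((start, len(chip) * bin_size))
    return peaks

print(call_peaks([2, 3, 30, 40, 35, 4, 2], [2, 2, 3, 3, 3, 2, 2]))  # [(200, 500)]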
System support software for the Space Ultrareliable Modular Computer (SUMC)
NASA Technical Reports Server (NTRS)
Hill, T. E.; Hintze, G. C.; Hodges, B. C.; Austin, F. A.; Buckles, B. P.; Curran, R. T.; Lackey, J. D.; Payne, R. E.
1974-01-01
The highly transportable programming system designed and implemented to support the development of software for the Space Ultrareliable Modular Computer (SUMC) is described. The SUMC system support software consists of program modules called processors. The initial set of processors consists of the supervisor, the general purpose assembler for SUMC instruction and microcode input, linkage editors, an instruction level simulator, a microcode grid print processor, and user oriented utility programs. A FORTRAN 4 compiler is undergoing development. The design facilitates the addition of new processors with minimum effort and provides the user quasi-host independence on the ground-based operational software development computer. Additional capability is provided to accommodate variations in the SUMC architecture without consequent major modifications in the initial processors.
ERIC Educational Resources Information Center
Bayley-Hamlet, Simone O.
2017-01-01
The purpose of this study was to examine the effect of Imagine Learning, a computer assisted language learning (CALL) program, on addressing reading achievement for English language learners (ELLs). This is a measurement used in the Accessing Comprehension and Communication in English State-to-State (ACCESS for ELLs or ACCESS) reading scale…
Optical Computing Based on Neuronal Models
1988-05-01
walking, and cognition are far too complex for existing sequential digital computers. Therefore new architectures, hardware, and algorithms modeled... collective behavior, and iterative processing into optical processing and artificial neurodynamical systems. Another intriguing promise of neural nets is... with architectures, implementations, and programming; and materials research is called for. Our future research in neurodynamics will continue to...
Virtual Frame Buffer Interface Program
NASA Technical Reports Server (NTRS)
Wolfe, Thomas L.
1990-01-01
Virtual Frame Buffer Interface program makes all frame buffers appear as generic frame buffer with specified set of characteristics, allowing programmers to write codes that run unmodified on all supported hardware. Converts generic commands to actual device commands. Consists of definition of capabilities and FORTRAN subroutines called by application programs. Developed in FORTRAN 77 for DEC VAX 11/780 or DEC VAX 11/750 computer under VMS 4.X.
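The pattern described above, generic calls translated into device-specific commands, can be sketched in a few lines. The Python below is only an illustration of that adapter idea; the class and command names are invented, and the original package is a set of FORTRAN subroutines, not Python.

# Hedged sketch of a virtual frame buffer: application code issues generic
# commands, and a device-specific driver translates them.
class GenericFrameBuffer:
    def __init__(self, driver):
        self.driver = driver
    def draw_line(self, x0, y0, x1, y1):
        self.driver.device_line(x0, y0, x1, y1)   # generic -> device command

class PrinterDriver:
    def device_line(self, x0, y0, x1, y1):
        print(f"PRINTER: vector ({x0},{y0})-({x1},{y1})")

class ScreenDriver:
    def device_line(self, x0, y0, x1, y1):
        print(f"SCREEN: blit line ({x0},{y0})-({x1},{y1})")

# The application code below runs unmodified on either device.
for driver in (PrinterDriver(), ScreenDriver()):
    GenericFrameBuffer(driver).draw_line(0, 0, 10, 10)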
NASA Technical Reports Server (NTRS)
Bennett, Robert M.; Batina, John T.
1989-01-01
The application and assessment of a computer program called CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) for flutter predictions are described. Flutter calculations are presented for two thin swept-and-tapered wing planforms with well-defined modal properties. One planform is a series of 45-degree swept wings and the other planform is a clipped delta wing. Comparisons are made between the results of CAP-TSD using the linear equation and no airfoil thickness and the results obtained from a subsonic kernel function analysis. The calculations cover a Mach number range from low subsonic to low supersonic values, including the transonic range, and are compared with subsonic linear theory and experimental data. It is noted that since both wings have very thin airfoil sections, the effects of thickness are minimal.
NASA Technical Reports Server (NTRS)
Brentner, K. S.
1986-01-01
A computer program has been developed at the Langley Research Center to predict the discrete frequency noise of conventional and advanced helicopter rotors. The program, called WOPWOP, uses the most advanced subsonic formulation of Farassat that is less sensitive to errors and is valid for nearly all helicopter rotor geometries and flight conditions. A brief derivation of the acoustic formulation is presented along with a discussion of the numerical implementation of the formulation. The computer program uses realistic helicopter blade motion and aerodynamic loadings, input by the user, for noise calculation in the time domain. A detailed definition of all the input variables, default values, and output data is included. A comparison with experimental data shows good agreement between prediction and experiment; however, accurate aerodynamic loading is needed.
A distributed version of the NASA Engine Performance Program
NASA Technical Reports Server (NTRS)
Cours, Jeffrey T.; Curlett, Brian P.
1993-01-01
Distributed NEPP, a version of the NASA Engine Performance Program, uses the original NEPP code but executes it in a distributed computer environment. Multiple workstations connected by a network increase the program's speed and, more importantly, the complexity of the cases it can handle in a reasonable time. Distributed NEPP uses the public-domain software package called Parallel Virtual Machine (PVM), allowing it to execute on clusters of machines containing many different architectures. It includes the capability to link with other computers, allowing them to process NEPP jobs in parallel. This paper discusses the design issues and granularity considerations that entered into programming Distributed NEPP and presents the results of timing runs.
A SCILAB Program for Computing Rotating Magnetic Compact Objects
NASA Astrophysics Data System (ADS)
Papasotiriou, P. J.; Geroyannis, V. S.
We implement the so-called ``complex-plane iterative technique'' (CIT) to the computation of classical differentially rotating magnetic white dwarf and neutron star models. The program has been written in SCILAB (© INRIA-ENPC), a matrix-oriented high-level programming language, which can be downloaded free of charge from the site http://www-rocq.inria.fr/scilab. Due to the advanced capabilities of this language, the code is short and understandable. Highlights of the program are: (a) time-saving character, (b) easy use due to the built-in graphics user interface, (c) easy interfacing with Fortran via online dynamic link. We interpret our numerical results in various ways by extensively using the graphics environment of SCILAB.
CAD/CAE Integration Enhanced by New CAD Services Standard
NASA Technical Reports Server (NTRS)
Claus, Russell W.
2002-01-01
A Government-industry team led by the NASA Glenn Research Center has developed a computer interface standard for accessing data from computer-aided design (CAD) systems. The Object Management Group, an international computer standards organization, has adopted this CAD services standard. The new standard allows software (e.g., computer-aided engineering (CAE) and computer-aided manufacturing (CAM) software) to access multiple CAD systems through one programming interface. The interface is built on top of a distributed computing system called the Common Object Request Broker Architecture (CORBA). CORBA allows the CAD services software to operate in a distributed, heterogeneous computing environment.
Computational thinking in life science education.
Rubinstein, Amir; Chor, Benny
2014-11-01
We join the increasing call to take computational education of life science students a step further, beyond teaching mere programming and employing existing software tools. We describe a new course, focusing on enriching the curriculum of life science students with abstract, algorithmic, and logical thinking, and exposing them to the computational "culture." The design, structure, and content of our course are influenced by recent efforts in this area, collaborations with life scientists, and our own instructional experience. Specifically, we suggest that an effective course of this nature should: (1) devote time to explicitly reflect upon computational thinking processes, resisting the temptation to drift to purely practical instruction, (2) focus on discrete notions, rather than on continuous ones, and (3) have basic programming as a prerequisite, so students need not be preoccupied with elementary programming issues. We strongly recommend that the mere use of existing bioinformatics tools and packages should not replace hands-on programming. Yet, we suggest that programming will mostly serve as a means to practice computational thinking processes. This paper deals with the challenges and considerations of such computational education for life science students. It also describes a concrete implementation of the course and encourages its use by others.
NASA Technical Reports Server (NTRS)
1990-01-01
FluiDyne Engineering Corporation, Minneapolis, MN is one of the world's leading companies in design and construction of wind tunnels. In its designing work, FluiDyne uses a computer program called GTRAN. With GTRAN, engineers create a design and test its performance on the computer before actually building a model; should the design fail to meet criteria, the system or any component part can be redesigned and retested on the computer, saving a great deal of time and money.
NASA Technical Reports Server (NTRS)
Enison, R. L.
1971-01-01
A computer program called Character String Scanner (CSS) is presented. It is designed to search a data set for any specified group of characters and then to flag this group. The output of the CSS program is a listing of the data set being searched with the specified group of characters flagged by asterisks. Therefore, one may readily identify specific keywords, groups of keywords, or specified lines of code internal to a computer program, in a program output, or in any other specific data set. Possible applications of this program include the automatic scan of an output data set for pertinent keyword data, the editing of a program to change the appearance of a certain word or group of words, and the conversion of a set of code to a different set of code.
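A minimal sketch of the flagging behavior described above, written in Python with invented input, might look like the following; the real CSS program operates on card-image data sets and flags the specified character group with asterisks on a listing.

# Hedged sketch of CSS-style scanning: list each line of a data set and flag
# the first occurrence of a specified character group with asterisks below it.
def scan(lines, target):
    for line in lines:
        print(line)
        col = line.find(target)
        if col >= 0:
            print(" " * col + "*" * len(target))   # flag the matched group

scan(["      CALL SOLVE(A,B)", "      X = Y + Z"], "SOLVE")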
The SCEC Community Modeling Environment(SCEC/CME): A Collaboratory for Seismic Hazard Analysis
NASA Astrophysics Data System (ADS)
Maechling, P. J.; Jordan, T. H.; Minster, J. B.; Moore, R.; Kesselman, C.
2005-12-01
The SCEC Community Modeling Environment (SCEC/CME) Project is an NSF-supported Geosciences/IT partnership that is actively developing an advanced information infrastructure for system-level earthquake science in Southern California. This partnership includes SCEC, USC's Information Sciences Institute (ISI), the San Diego Supercomputer Center (SDSC), the Incorporated Research Institutions for Seismology (IRIS), and the U.S. Geological Survey. The goal of the SCEC/CME is to develop seismological applications and information technology (IT) infrastructure to support the development of Seismic Hazard Analysis (SHA) programs and other geophysical simulations. The SHA application programs developed on the Project include a Probabilistic Seismic Hazard Analysis system called OpenSHA. OpenSHA computational elements that are currently available include a collection of attenuation relationships and several Earthquake Rupture Forecasts (ERFs). Geophysicists in the collaboration have also developed Anelastic Wave Models (AWMs) using both finite-difference and finite-element approaches. Earthquake simulations using these codes have been run for a variety of earthquake sources. Rupture Dynamic Model (RDM) codes have also been developed that simulate friction-based fault slip. The SCEC/CME collaboration has also developed IT software and hardware infrastructure to support the development, execution, and analysis of these SHA programs. To support computationally expensive simulations, we have constructed a grid-based scientific workflow system. Using the SCEC grid, project collaborators can submit computations from the SCEC/CME servers to High Performance Computers at USC and TeraGrid High Performance Computing Centers. Data generated and archived by the SCEC/CME is stored in a digital library system, the Storage Resource Broker (SRB). This system provides a robust and secure system for maintaining the association between the data sets and their metadata. To provide an easy-to-use system for constructing SHA computations, a browser-based workflow assembly web portal has been developed. Users can compose complex SHA calculations, specifying SCEC/CME data sets as inputs to calculations and calling SCEC/CME computational programs to process the data and the output. Knowledge-based software tools have been implemented that utilize ontological descriptions of SHA software and data to validate workflows created with this pathway assembly tool. Data visualization software developed by the collaboration supports analysis and validation of data sets. Several programs have been developed to visualize SCEC/CME data, including GMT-based map making software for PSHA codes, 4D wavefield propagation visualization software based on OpenGL, and 3D Geowall-based visualization of earthquakes, faults, and seismic wave propagation. The SCEC/CME Project also helps to sponsor the SCEC UseIT Intern program. The UseIT Intern Program provides research opportunities in both Geosciences and Information Technology to undergraduate students in a variety of fields. The UseIT group has developed a 3D data visualization tool, called SCEC-VDO, as a part of this undergraduate research program.
ERIC Educational Resources Information Center
Schmid, Euline Cutrim; Hegelheimer, Volker
2014-01-01
This paper presents research findings of a longitudinal empirical case study that investigated an innovative Computer Assisted Language Learning (CALL) professional development program for pre-service English as Foreign Language (EFL) teachers. The conceptualization of the program was based on the assumption that pre-service language teachers…
Visual management support system
Lee Anderson; Jerry Mosier; Geoffrey Chandler
1979-01-01
The Visual Management Support System (VMSS) is an extension of an existing computer program called VIEWIT, which has been extensively used by the U. S. Forest Service. The capabilities of this program lie in the rapid manipulation of large amounts of data, specifically operating as a tool to overlay or merge one set of data with another. VMSS was conceived to...
Produce yellow-poplar furniture dimension at minimum cost by using YELLOPOP
David G. Marten
1986-01-01
Describes a computer program called YELLOPOP that determines the least-cost combination of lumber grades required to produce a given cutting order of furniture dimension parts. If the least-cost mix is not available, YELLOPOP can be used to determine the next best alternative. The steps involved in using the program are also described.
OPTIGRAMI: Optimum lumber grade mix program for hardwood dimension parts
David G. Martens; Robert L. Nevel, Jr.
1985-01-01
With rapidly increasing lumber prices and shortages of some grades and species, the furniture industry must find ways to use its hardwood lumber resource more efficiently. A computer program called OPTIGRAMI is designed to help managers determine the best lumber to use in producing furniture parts. OPTIGRAMI determines the least-cost grade mix of lumber required to...
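As a hedged toy version of the least-cost grade-mix idea (not OPTIGRAMI's actual model or data), the Python fragment below enumerates small quantities of each grade and keeps the cheapest combination that yields enough parts; the prices and yields are made up.

# Hedged toy grade-mix search: choose how many thousand board feet (MBF) of
# each lumber grade to buy so the cutting order is met at minimum cost.
from itertools import product

grades = {                       # grade: (cost per MBF, usable parts per MBF) -- invented
    "FAS":      (900.0, 420),
    "No.1 Com": (600.0, 300),
    "No.2 Com": (420.0, 180),
}

def least_cost_mix(parts_needed, max_mbf=20):
    best, names = None, list(grades)
    for qty in product(range(max_mbf + 1), repeat=len(names)):
        parts = sum(q * grades[n][1] for q, n in zip(qty, names))
        if parts < parts_needed:
            continue
        cost = sum(q * grades[n][0] for q, n in zip(qty, names))
        if best is None or cost < best[0]:
            best = (cost, dict(zip(names, qty)))
    return best

print(least_cost_mix(parts_needed=3000))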
Conversion of the CALAP (Computer Aided Landform Analysis Program) Program from FORTRAN to DUCK.
1986-09-01
Keywords: DUCK, artificial intelligence, logic programming. An expert advisor program named CALAP... The original program was developed in FORTRAN on an HP-1000, a minicomputer. CALAP was reprogrammed in an Artificial Intelligence (AI) language called DUCK... at the Artificial Intelligence Center, U.S. Army Engineer Topographic Laboratory, Fort Belvoir.
NASA Technical Reports Server (NTRS)
Mcbride, Bonnie J.; Reno, Martin A.; Gordon, Sanford
1994-01-01
The NASA Lewis chemical equilibrium program with applications continues to be improved and updated. The latest version is CET93. This code, with smaller arrays, has been compiled for use on an IBM or IBM-compatible personal computer and is called CETPC. This report is intended to be primarily a users manual for CET93 and CETPC. It does not repeat the more complete documentation of earlier reports on the equilibrium program. Most of the discussion covers input and output files, two new options (ONLY and comments), example problems, and implementation of CETPC.
Web-based training: a new paradigm in computer-assisted instruction in medicine.
Haag, M; Maylein, L; Leven, F J; Tönshoff, B; Haux, R
1999-01-01
Computer-assisted instruction (CAI) programs based on internet technologies, especially on the world wide web (WWW), provide new opportunities in medical education. The aim of this paper is to examine different aspects of such programs, which we call 'web-based training (WBT) programs', and to differentiate them from conventional CAI programs. First, we will distinguish five different interaction types: presentation; browsing; tutorial dialogue; drill and practice; and simulation. In contrast to conventional CAI, there are four architectural types of WBT programs: client-based; remote data and knowledge; distributed teaching; and server-based. We will discuss the implications of the different architectures for developing WBT software. WBT programs have to meet other requirements than conventional CAI programs. The most important tools and programming languages for developing WBT programs will be listed and assigned to the architecture types. For the future, we expect a trend from conventional CAI towards WBT programs.
1978-05-01
controls and executes the jet plume flow field computation. After each axial slice has been evaluated, the MAIN program calls subroutine SLICE to... input data; otherwise the execution is halted. ... 4.3.2 ARCCOS(X): This is a function subroutine which computes the principal value of the arc cosine of the... execution time available. Each successive case requires a title card (80-character label in columns 1-80), followed by the INPUT NAMELIST. The data from...
ERIC Educational Resources Information Center
Bernard, Robert M.; Bethel, Edward Clement; Abrami, Philip C.; Wade, C. Anne
2007-01-01
This study examines the achievement outcomes accompanying the implementation of a Grade 3 laptop or so-called "ubiquitous computing" program in a Quebec school district. CAT3 reading, language, and mathematics batteries were administered at the end of Grade 2 and again at the end of Grade 3, after the first year of computer…
Construction of a General Purpose Command Language for Use in Computer Dialog.
1980-09-01
Front matter (list of figures): 1. Skeletal Command Action File; 2. Sample from Cyber Action File; 3. Program MONITOR Structure Chart. ... return indicates subroutine call and no return (Fig. 3, Program MONITOR Structure Chart). ... IV. Validation: The general purpose command language was... executive control of these functions, in addition to its role as interpreter. ... The structure, concept, design, and implementation of program...
Multidisciplinary analysis of actively controlled large flexible spacecraft
NASA Technical Reports Server (NTRS)
Cooper, Paul A.; Young, John W.; Sutter, Thomas R.
1986-01-01
The Control of Flexible Structures (COFS) program has supported the development of an analysis capability at the Langley Research Center called the Integrated Multidisciplinary Analysis Tool (IMAT), which provides an efficient data storage and transfer capability among commercial computer codes to aid in the dynamic analysis of actively controlled structures. IMAT is a system of computer programs which transfers Computer-Aided-Design (CAD) configurations, structural finite element models, material property and stress information, structural and rigid-body dynamic model information, and linear system matrices for control law formulation among various commercial applications programs through a common database. Although general in its formulation, IMAT was developed specifically to aid in the evaluation of the structures. A description of the IMAT system and results of an application of the system are given.
NASA Technical Reports Server (NTRS)
Plankey, B.
1981-01-01
A computer program called ECPVER (Energy Consumption Program - Verification) was developed to simulate all energy loads for any number of buildings. The program computes simulated daily, monthly, and yearly energy consumption which can be compared with actual meter readings for the same time period. Such comparison can lead to validation of the model under a variety of conditions, which allows it to be used to predict future energy saving due to energy conservation measures. Predicted energy saving can then be compared with actual saving to verify the effectiveness of those energy conservation changes. This verification procedure is planned to be an important advancement in the Deep Space Network Energy Project, which seeks to reduce energy cost and consumption at all DSN Deep Space Stations.
Image Algebra Matlab language version 2.3 for image processing and compression research
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric
2010-08-01
Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at University of Florida for over 15 years beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision algorithms and software. The University of Florida has been associated with development of the languages FORTRAN, Ada, Lisp, and C++. The latter implementation involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with operands that are array-based, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation, called Image Algebra Matlab (IAM), has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, IAM offers exciting possibilities for supporting a large group of users. The control over an object's computational resources provided to the algorithm designer by Matlab means that IAM programs can employ versatile representations for the operands and operations of the algebra, which are supported by the underlying libraries written in Matlab. In a previous publication, we showed how the functionality of IAC++ could be carried forth into a Matlab implementation, and provided practical details of a prototype implementation called IAM Version 1. In this paper, we further elaborate the purpose and structure of image algebra, then present a maturing implementation of Image Algebra Matlab called IAM Version 2.3, which extends the previous implementation of IAM to include polymorphic operations over different point sets, as well as recursive convolution operations and functional composition. We also show how image algebra and IAM can be employed in image processing and compression research, as well as algorithm development and analysis.
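One of the image-algebra primitives mentioned above, image-template convolution, can be sketched directly in NumPy. This is only an illustration of the operation, not IAM code, and the template used is an invented 3 x 3 averaging template.

# Hedged sketch of image-template convolution using NumPy.
import numpy as np

def image_template_convolve(image, template):
    rows, cols = image.shape
    tr, tc = template.shape
    pad_r, pad_c = tr // 2, tc // 2
    padded = np.pad(image, ((pad_r, pad_r), (pad_c, pad_c)), mode="constant")
    out = np.zeros_like(image, dtype=float)
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + tr, c:c + tc]
            out[r, c] = np.sum(window * template)   # generalized convolution with + and *
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
smooth = np.full((3, 3), 1.0 / 9.0)                 # averaging template
print(image_template_convolve(image, smooth))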
Nishizawa, Hiroaki; Nishimura, Yoshifumi; Kobayashi, Masato; Irle, Stephan; Nakai, Hiromi
2016-08-05
The linear-scaling divide-and-conquer (DC) quantum chemical methodology is applied to the density-functional tight-binding (DFTB) theory to develop a massively parallel program that achieves on-the-fly molecular reaction dynamics simulations of huge systems from scratch. The functions to perform large-scale geometry optimization and molecular dynamics with the DC-DFTB potential energy surface are implemented in the program, called DC-DFTB-K. A novel interpolation-based algorithm is developed for parallelizing the determination of the Fermi level in the DC method. The performance of the DC-DFTB-K program is assessed using a laboratory computer and the K computer. Numerical tests show the high efficiency of the DC-DFTB-K program: a single-point energy gradient calculation of a one-million-atom system is completed within 60 s using 7290 nodes of the K computer. © 2016 Wiley Periodicals, Inc.
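The underlying task in the Fermi-level step can be illustrated with a plain bisection search: find the chemical potential at which the summed Fermi-Dirac occupations equal the electron count. DC-DFTB-K uses a parallel interpolation-based scheme; the Python below, with invented orbital energies, is only meant to show the root-finding problem being solved.

# Hedged sketch: bisection for the Fermi level of a closed-shell level set.
import math

def occupation(e, mu, kT=0.01):
    x = (e - mu) / kT
    if x > 40.0:
        return 0.0                       # far above the Fermi level: empty
    if x < -40.0:
        return 2.0                       # far below: doubly occupied
    return 2.0 / (1.0 + math.exp(x))

def electron_count(energies, mu):
    return sum(occupation(e, mu) for e in energies)

def fermi_level(energies, n_electrons, lo=-10.0, hi=10.0, tol=1e-9):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if electron_count(energies, mid) < n_electrons:
            lo = mid                     # too few electrons: raise the Fermi level
        else:
            hi = mid
    return 0.5 * (lo + hi)

orbitals = [-0.5, -0.3, -0.1, 0.2, 0.4]          # invented orbital energies
print(fermi_level(orbitals, n_electrons=6))      # lies between the 3rd and 4th orbital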
The force on the flex: Global parallelism and portability
NASA Technical Reports Server (NTRS)
Jordan, H. F.
1986-01-01
A parallel programming methodology, called the force, supports the construction of programs to be executed in parallel by an unspecified, but potentially large, number of processes. The methodology was originally developed on a pipelined, shared memory multiprocessor, the Denelcor HEP, and embodies the primitive operations of the force in a set of macros which expand into multiprocessor Fortran code. A small set of primitives is sufficient to write large parallel programs, and the system has been used to produce 10,000 line programs in computational fluid dynamics. The level of complexity of the force primitives is intermediate. It is high enough to mask detailed architectural differences between multiprocessors but low enough to give the user control over performance. The system is being ported to a medium scale multiprocessor, the Flex/32, which is a 20 processor system with a mixture of shared and local memory. Memory organization and the type of processor synchronization supported by the hardware on the two machines lead to some differences in efficient implementations of the force primitives, but the user interface remains the same. An initial implementation was done by retargeting the macros to Flexible Computer Corporation's ConCurrent C language. Subsequently, the macros were caused to directly produce the system calls which form the basis for ConCurrent C. The implementation of the Fortran based system is in step with Flexible Computer Corporation's implementation of a Fortran system in the parallel environment.
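The programming style described above, many identical processes splitting loop iterations and meeting at barriers, can be mimicked with threads. The Python sketch below is an analogy only, with invented data; the force itself generates multiprocessor Fortran, not Python.

# Hedged sketch: a fixed number of workers execute the same program text,
# split the iterations of a loop among themselves, and meet at a barrier.
import threading

NPROC = 4
barrier = threading.Barrier(NPROC)
data = list(range(20))
partial = [0] * NPROC

def force_program(me):
    # prescheduled "DO ALL": each process takes every NPROC-th iteration
    partial[me] = sum(data[i] for i in range(me, len(data), NPROC))
    barrier.wait()                     # all processes meet here
    if me == 0:                        # one process finishes the reduction
        print("total =", sum(partial))

threads = [threading.Thread(target=force_program, args=(p,)) for p in range(NPROC)]
for t in threads: t.start()
for t in threads: t.join()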
Structural Analysis and Design Software
NASA Technical Reports Server (NTRS)
1997-01-01
Collier Research and Development Corporation received a one-of-a-kind computer code for designing exotic hypersonic aircraft called ST-SIZE in the first-ever Langley Research Center software copyright license agreement. Collier transformed the NASA computer code into a commercial software package called HyperSizer, which integrates with other private-sector finite element modeling and finite element analysis structural analysis programs. ST-SIZE was chiefly conceived as a means to improve and speed the structural design of a future aerospace plane for Langley's Hypersonic Vehicles Office. Incorporating the NASA computer code into HyperSizer has enabled the company to also apply the software to applications other than aerospace, including improved design and construction for offices, marine structures, cargo containers, commercial and military aircraft, rail cars, and a host of everyday consumer products.
Advanced Simulation and Computing: A Summary Report to the Director's Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCoy, M G; Peck, T
2003-06-01
It has now been three years since the Advanced Simulation and Computing Program (ASCI), as managed by the Defense and Nuclear Technologies (DNT) Directorate, was last reviewed by this Director's Review Committee (DRC). Since that time, there has been considerable progress for all components of the ASCI Program, and these developments will be highlighted in this document and in the presentations planned for June 9 and 10, 2003. There have also been some name changes. Today, the Program is called ''Advanced Simulation and Computing.'' Although it retains the familiar acronym ASCI, the initiative nature of the effort has given way to sustained services as an integral part of the Stockpile Stewardship Program (SSP). All computing efforts at LLNL and the other two Defense Program (DP) laboratories are funded and managed under ASCI. This includes the so-called legacy codes, which remain essential tools in stockpile stewardship. The contract between the Department of Energy (DOE) and the University of California (UC) specifies an independent appraisal of Directorate technical work and programmatic management. Such an appraisal is the work of this DNT Review Committee. Beginning this year, the Laboratory is implementing a new review system. This process was negotiated between UC, the National Nuclear Security Administration (NNSA), and the Laboratory Directors. Central to this approach are eight performance objectives that focus on key programmatic and administrative goals. Associated with each of these objectives are a number of performance measures to more clearly characterize the attainment of the objectives. Each performance measure has a lead directorate and one or more contributing directorates. Each measure has an evaluation plan and has identified expected documentation to be included in the ''Assessment File''.
NASA Technical Reports Server (NTRS)
Gerstle, Walter
1989-01-01
Engineering problems sometimes involve the numerical solution of boundary value problems over domains containing geometric features with widely varying scales. Often, a detailed solution is required at one or more of these features. Small details in large structures may have profound effects upon global performance. Conversely, large-scale conditions may affect local performance. Many man-hours and CPU-hours are currently spent in modeling such problems. With the structural zooming technique, it is now possible to design an integrated program which allows the analyst to interactively focus upon a small region of interest, to modify the local geometry, and then to obtain highly accurate responses in that region which reflect both the properties of the overall structure and the local detail. A boundary integral equation analysis program, called BOAST, was recently developed for the stress analysis of cracks. This program can accurately analyze two-dimensional linear elastic fracture mechanics problems with far less computational effort than existing finite element codes. An interactive computer graphical interface to BOAST was written. The graphical interface had several requirements: it would be menu-driven, with mouse input; all aspects of input would be entered graphically; the results of a BOAST analysis would be displayed pictorially, and the user would also be able to probe interactively to get numerical values of displacement and stress at desired locations within the analysis domain; the entire procedure would be integrated into a single, easy-to-use package; and it would be written using calls to the graphics package called HOOPS. The program is nearing completion. All of the preprocessing features are working satisfactorily and have been debugged. The postprocessing features are under development, and rudimentary postprocessing should be available by the end of the summer. The program was developed and run on a VAX workstation, and must be ported to the SUN workstation. This activity is currently underway.
ERIC Educational Resources Information Center
Duda, Richard O.; Shortliffe, Edward H.
1983-01-01
Discusses a class of artificial intelligence computer programs (often called "expert systems" because they address problems normally thought to require human specialists for their solution) intended to serve as consultants for decision making. Also discusses accomplishments (including information systematization in medical diagnosis and…
Artificial intelligence: Learning to play Go from scratch
NASA Astrophysics Data System (ADS)
Singh, Satinder; Okun, Andy; Jackson, Andrew
2017-10-01
An artificial-intelligence program called AlphaGo Zero has mastered the game of Go without any human data or guidance. A computer scientist and two members of the American Go Association discuss the implications. See Article p.354
The X-ray system of crystallographic programs for any computer having a PIDGIN FORTRAN compiler
NASA Technical Reports Server (NTRS)
Stewart, J. M.; Kruger, G. J.; Ammon, H. L.; Dickinson, C.; Hall, S. R.
1972-01-01
A manual is presented for the use of a library of crystallographic programs. This library, called the X-ray system, is designed to carry out the calculations required to solve the structure of crystals by diffraction techniques. It has been implemented at the University of Maryland on the Univac 1108. It has, however, been developed and run on a variety of machines under various operating systems. It is considered to be an essentially machine independent library of applications programs. The report includes definition of crystallographic computing terms, program descriptions, with some text to show their application to specific crystal problems, detailed card input descriptions, mass storage file structure and some example run streams.
Application of the ASP3D Computer Program to Unsteady Aerodynamic and Aeroelastic Analyses
NASA Technical Reports Server (NTRS)
Batina, John T.
2006-01-01
A new computer program has been developed called ASP3D (Advanced Small Perturbation - 3D), which solves the small perturbation potential flow equation in an advanced form including mass-consistent surface and trailing wake boundary conditions, and entropy, vorticity, and viscous effects. The purpose of the program is for unsteady aerodynamic and aeroelastic analyses, especially in the nonlinear transonic flight regime. The program exploits the simplicity of stationary Cartesian meshes with the movement or deformation of the configuration under consideration incorporated into the solution algorithm through a planar surface boundary condition. The paper presents unsteady aerodynamic and aeroelastic applications of ASP3D to assess the time dependent capability and demonstrate various features of the code.
NASA Technical Reports Server (NTRS)
Bingle, Bradford D.; Shea, Anne L.; Hofler, Alicia S.
1993-01-01
Transferable Output ASCII Data (TOAD) computer program (LAR-13755) implements format designed to facilitate transfer of data across communication networks and dissimilar host computer systems. Any data file conforming to TOAD format standard called TOAD file. TOAD Editor is interactive software tool for manipulating contents of TOAD files. Commonly used to extract filtered subsets of data for visualization of results of computation. Also offers such user-oriented features as on-line help, clear English error messages, startup file, macroinstructions defined by user, command history, user variables, UNDO features, and full complement of mathematical, statistical, and conversion functions. Companion program, TOAD Gateway (LAR-14484), converts data files from variety of other file formats to that of TOAD. TOAD Editor written in FORTRAN 77.
XSUMMER - Transcendental functions and symbolic summation in FORM
NASA Astrophysics Data System (ADS)
Moch, S.; Uwer, P.
2006-05-01
Harmonic sums and their generalizations are extremely useful in the evaluation of higher-order perturbative corrections in quantum field theory. Of particular interest have been the so-called nested sums, where the harmonic sums and their generalizations appear as building blocks, originating, for example, from the expansion of generalized hypergeometric functions around integer values of the parameters. In this paper we discuss the implementation of several algorithms to solve these sums by algebraic means, using the computer algebra system FORM.
Program summary
Title of program: XSUMMER
Catalogue identifier: ADXQ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXQ_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
License: GNU Public License and FORM License
Computers: all
Operating system: all
Program language: FORM
Memory required to execute: depending on the complexity of the problem, at least 64 MB RAM recommended
No. of lines in distributed program, including test data, etc.: 9854
No. of bytes in distributed program, including test data, etc.: 126 551
Distribution format: tar.gz
Other programs called: none
External files needed: none
Nature of the physical problem: systematic expansion of higher transcendental functions in a small parameter. The expansions arise in the calculation of loop integrals in perturbative quantum field theory.
Method of solution: algebraic manipulations of nested sums.
Restrictions on complexity of the problem: usually limited only by the available disk space.
Typical running time: dependent on the complexity of the problem.
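For readers unfamiliar with the objects involved, a nested harmonic sum S_{m1,...,mk}(n) can be evaluated numerically by the recursion sketched below in Python with exact rationals. XSUMMER itself manipulates these sums symbolically in FORM; the fragment only illustrates the definition for positive indices.

# Hedged numerical sketch of nested harmonic sums:
# S_{m1,...,mk}(n) = sum_{i=1..n} i^(-m1) * S_{m2,...,mk}(i), with S() = 1.
from fractions import Fraction

def nested_sum(indices, n):
    if not indices:
        return Fraction(1)
    m, rest = indices[0], indices[1:]
    return sum(Fraction(1, i**m) * nested_sum(rest, i) for i in range(1, n + 1))

print(nested_sum((1,), 4))      # ordinary harmonic number H_4 = 25/12
print(nested_sum((2, 1), 3))    # nested sum S_{2,1}(3)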
Simplifying Facility and Event Scheduling: Saving Time and Money.
ERIC Educational Resources Information Center
Raasch, Kevin
2003-01-01
Describes a product called the Event Management System (EMS), a computer software program to manage facility and event scheduling. Provides example of the school district and university uses of EMS. Describes steps in selecting a scheduling-management system. (PKP)
Chandelier: Picturing Potential
ERIC Educational Resources Information Center
Tebbs, Trevor J.
2014-01-01
The author--artist, scientist, educator, and visual-spatial thinker--describes the genesis of, and provides insight into, an innovative, strength-based, visually dynamic computer-aided communication system called Chandelier©. This system is the powerful combination of a sophisticated, user-friendly software program and an organizational…
[Application of virtual instrumentation technique in toxicological studies].
Moczko, Jerzy A
2005-01-01
Research investigations frequently require direct connection of measuring equipment to the computer. Virtual instrumentation techniques considerably facilitate programming of sophisticated acquisition-and-analysis procedures. In the standard approach these two steps are performed sequentially with separate software tools: the acquired data are transferred, using the export/import procedures of one program, to another program that executes the next step of the analysis. This procedure is cumbersome, time consuming, and a potential source of errors. In 1987 National Instruments Corporation introduced the LabVIEW language, based on the concept of graphical programming. In contrast to conventional textual languages, it allows the researcher to concentrate on the problem being solved and omit syntactical rules. Programs developed in LabVIEW are called virtual instruments (VIs) and are portable among different computer platforms such as PCs, Macintoshes, Sun SPARCstations, Concurrent PowerMAX stations, and HP PA-RISC workstations. This flexibility ensures that programs prepared for one platform will also be appropriate for another. In this paper the basic principles of connecting research equipment to computer systems are described.
NASA Technical Reports Server (NTRS)
1990-01-01
A mathematician, David R. Hedgley, Jr., developed a computer program that considers whether a line in a graphic model of a three-dimensional object should or should not be visible. Known as the Hidden Line Computer Code, the program automatically removes superfluous lines and displays an object from a specific viewpoint, just as the human eye would see it. An example of how one company uses the program is the experience of Birdair, which specializes in production of fabric skylights and stadium covers. The fabric, called SHEERFILL, is a Teflon-coated fiberglass material developed in cooperation with DuPont Company. SHEERFILL glazed structures are either tension structures or air-supported tension structures. Both are formed by patterned fabric sheets supported by a steel or aluminum frame or cable network. Birdair uses the Hidden Line Computer Code to illustrate a prospective structure to an architect or owner. The program generates a three-dimensional perspective with the hidden lines removed. This program is still used by Birdair and continues to be commercially available to the public.
The development of an interim generalized gate logic software simulator
NASA Technical Reports Server (NTRS)
Mcgough, J. G.; Nemeroff, S.
1985-01-01
A proof-of-concept computer program called IGGLOSS (Interim Generalized Gate Logic Software Simulator) was developed and is discussed. The simulator engine was designed to perform stochastic estimation of self-test coverage (fault-detection latency times) of digital computers or systems. A major attribute of IGGLOSS is its high-speed simulation: 9.5 million gates per CPU second for nonfaulted circuits and 4.4 million gates per CPU second for faulted circuits on a VAX 11/780 host computer.
uPy: a ubiquitous computer graphics Python API with Biological Modeling Applications
Autin, L.; Johnson, G.; Hake, J.; Olson, A.; Sanner, M.
2015-01-01
In this paper we describe uPy, an extension module for the Python programming language that provides a uniform abstraction of the APIs of several 3D computer graphics programs called hosts, including: Blender, Maya, Cinema4D, and DejaVu. A plugin written with uPy is a unique piece of code that will run in all uPy-supported hosts. We demonstrate the creation of complex plug-ins for molecular/cellular modeling and visualization and discuss how uPy can more generally simplify programming for many types of projects (not solely science applications) intended for multi-host distribution. uPy is available at http://upy.scripps.edu PMID:24806987
NASA Technical Reports Server (NTRS)
1972-01-01
The QL module of the Performance Analysis and Design Synthesis (PADS) computer program is described. Execution of this module is initiated when and if subroutine PADSI calls subroutine GROPE. Subroutine GROPE controls the high level logical flow of the QL module. The purpose of the module is to determine a trajectory that satisfies the necessary variational conditions for optimal performance. The module achieves this by solving a nonlinear multi-point boundary value problem. The numerical method employed is described. It is an iterative technique that converges quadratically when it does converge. The three basic steps of the module are: (1) initialization, (2) iteration, and (3) culmination. For Volume 1 see N73-13199.
Dynamics of flexible bodies in tree topology - A computer oriented approach
NASA Technical Reports Server (NTRS)
Singh, R. P.; Vandervoort, R. J.; Likins, P. W.
1984-01-01
An approach suited for automatic generation of the equations of motion for large mechanical systems (i.e., large space structures, mechanisms, robots, etc.) is presented. The system topology is restricted to a tree configuration. The tree is defined as an arbitrary set of rigid and flexible bodies connected by hinges characterizing relative translations and rotations of two adjoining bodies. The equations of motion are derived via Kane's method. The resulting equation set is of minimum dimension. Dynamical equations are imbedded in a computer program called TREETOPS. Extensive control simulation capability is built in the TREETOPS program. The simulation is driven by an interactive set-up program resulting in an easy to use analysis tool.
NASA Technical Reports Server (NTRS)
Wu, R. W.; Witmer, E. A.
1972-01-01
A user-oriented FORTRAN 4 computer program, called JET 3, is presented. The JET 3 program, which employs the spatial finite-element and timewise finite-difference method, can be used to predict the large two-dimensional elastic-plastic transient Kirchhoff-type deformations of a complete or partial structural ring, with various support conditions and restraints, subjected to a variety of initial velocity distributions and externally-applied transient forcing functions. The geometric shapes of the structural ring can be circular or arbitrarily curved and with variable thickness. Strain-hardening and strain-rate effects of the material are taken into account.
NASA Technical Reports Server (NTRS)
Benyo, Theresa L.
2002-01-01
Integration of a supersonic inlet simulation with a computer aided design (CAD) system is demonstrated. The integration is performed using the Project Integration Architecture (PIA). PIA provides a common environment for wrapping many types of applications. Accessing geometry data from CAD files is accomplished by incorporating appropriate function calls from the Computational Analysis Programming Interface (CAPRI). CAPRI is a CAD-vendor-neutral programming interface that aids in acquiring geometry data directly from CAD files. The benefits of wrapping a supersonic inlet simulation into PIA using CAPRI are: direct access to geometry data, accurate capture of geometry data, automatic conversion of data units, CAD-vendor-neutral operation, and on-line interactive history capture. This paper describes the PIA and the CAPRI wrapper and details the supersonic inlet simulation demonstration.
EngineSim: Turbojet Engine Simulator Adapted for High School Classroom Use
NASA Technical Reports Server (NTRS)
Petersen, Ruth A.
2001-01-01
EngineSim is an interactive educational computer program that allows users to explore the effect of engine operation on total aircraft performance. The software is supported by a basic propulsion web site called the Beginner's Guide to Propulsion, which includes educator-created, web-based activities for the classroom use of EngineSim. In addition, educators can schedule videoconferencing workshops in which EngineSim's creator demonstrates the software and discusses its use in the educational setting. This software is a product of NASA Glenn Research Center's Learning Technologies Project, an educational outreach initiative within the High Performance Computing and Communications Program.
NASA Technical Reports Server (NTRS)
1990-01-01
Magnetic Resonance Imaging (MRI) and Computer-aided Tomography (CT) images are often complementary. In most cases, MRI is good for viewing soft tissue but not bone, while CT images are good for bone but not always good for soft tissue discrimination. Physicians and engineers in the Department of Radiology at the University of Michigan Hospitals are developing a technique for combining the best features of MRI and CT scans to increase the accuracy of discriminating one type of body tissue from another. One of their research tools is a computer program called HICAP. The program can be used to distinguish between healthy and diseased tissue in body images.
A distributed program composition system
NASA Technical Reports Server (NTRS)
Brown, Robert L.
1989-01-01
A graphical technique for creating distributed computer programs is investigated and a prototype implementation is described which serves as a testbed for the concepts. The type of programs under examination is restricted to those comprising relatively heavyweight parts that intercommunicate by passing messages of typed objects. Such programs are often presented visually as a directed graph with computer program parts as the nodes and communication channels as the edges. This class of programs, called parts-based programs, is not well supported by existing computer systems; much manual work is required to describe the program to the system, establish the communication paths, accommodate the heterogeneity of data types, and locate the parts of the program on the various systems involved. The work described solves most of these problems by providing an interface for describing parts-based programs in this class in a way that closely models the way programmers think about them: using sketches of digraphs. Program parts, the computational nodes of the larger program system, are categorized in libraries and are accessed with browsers. The process of programming has the programmer draw the program graph interactively. Heterogeneity is automatically accommodated by the insertion of type translators where necessary between the parts. Many decisions are necessary in the creation of a comprehensive tool for interactive creation of programs in this class. Possibilities are explored and the issues behind such decisions are presented. An approach to program composition is described, not a carefully implemented programming environment. However, a prototype implementation is described that can demonstrate the ideas presented.
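The automatic insertion of type translators between mismatched parts, as described above, can be sketched in a few lines. The Python below is an invented illustration of that idea, not the prototype's code: a channel wraps the consumer with a converter whenever the producer's output type differs from the consumer's input type.

# Hedged sketch: parts are nodes, typed channels are edges, and a translator
# is inserted on the edge when the types do not match. All names are illustrative.
def channel(consumer, out_type, in_type):
    if out_type is in_type:
        return consumer
    return lambda msg: consumer(in_type(msg))   # auto-inserted type translator

def source(emit):
    for text in ["3", "5", "8"]:                # this part emits strings
        emit(text)

def doubler(value):                             # this part expects integers
    print("doubled:", value * 2)

source(channel(doubler, out_type=str, in_type=int))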
A Research Program in Computer Technology
1976-07-01
PROGRAM VERIFICATION. [Shaw76b] Shaw, M., W. A. Wulf, and R. L. London, Abstraction and Verification in Alphard: Iteration and Generators. ... millisecond frame of speech: pitch, gain, and 10 k-parameters (often called reflection coefficients). The 12 parameters from each frame are encoded into... Defense Advanced Research Projects Agency, July 1976.
Program Aids Creation Of X-Y Plots
NASA Technical Reports Server (NTRS)
Jeletic, James F.
1993-01-01
VEGAS computer program enables application programmers to create X-Y plots in various modes through high-level subroutine calls. Modes consist of passive, autoupdate, and interactive modes. In passive mode, VEGAS takes input data, produces plot, and returns control to application program. In autoupdate mode, forms plots and automatically updates them as more information is received. In interactive mode, displays plot and provides popup menus for user to alter appearance of plot or to modify data. Written in FORTRAN 77.
ExpoCast: Exposure Science for Prioritization and Toxicity Testing (S)
The US EPA is completing the Phase I pilot for a chemical prioritization research program, called ToxCast. Here EPA is developing methods for using computational chemistry, high-throughput screening, and toxicogenomic technologies to predict potential toxicity and prioritize limi...
ExpoCast: Exposure Science for Prioritization and Toxicity Testing
The US EPA is completing the Phase I pilot for a chemical prioritization research program, called ToxCast™. Here EPA is developing methods for using computational chemistry, high-throughput screening, and toxicogenomic technologies to predict potential toxicity and prioritize l...
NASA Astrophysics Data System (ADS)
1983-01-01
FMC Corporation conducts extensive proof lift tests and computerized analysis to ensure that the cranes can lift rated capacity loads up to one million pounds in a wide range of applications. In their analysis work, engineers make use of a computer program supplied by COSMIC. Called Analysis of Beam Columns, the program is used as part of the required analysis for determining bending moments, deflections and critical load for latticed crane booms.
1974-11-01
Operation of the HP2250 with the HP9000 series 200 using PASCAL 3.0
NASA Technical Reports Server (NTRS)
Perry, John; Stroud, C. W.
1986-01-01
A computer program has been written to provide an interface between the HP Series 200 desktop computers, operating under HP Standard Pascal 3.0, and the HP2250 Data Acquisition and Control System. Pascal 3.0 for the HP9000 desktop computer gives a number of procedures for handling bus communication at various levels. It is necessary, however, to reach the lowest possible level in Pascal to handle the bus protocols required by the HP2250. This makes programming extremely complex since these protocols are not documented. The program described solves those problems and allows the user to immediately program, simply and efficiently, any measurement and control language (MCL/50) application with a few procedure calls. The complete set of procedures is available on a 5 1/4 inch diskette from Cosmic. Included in this group of procedures is an Exerciser which allows the user to exercise his HP2250 interactively. The exerciser operates in a fashion similar to the Series 200 operating system programs, but is adapted to the requirements of the HP2250. The programs on the diskette and the user's manual assume the user is acquainted with both the MCL/50 programming language and HP Standard Pascal 3.0 for the HP series 200 desktop computers.
DOT National Transportation Integrated Search
2009-11-01
The Oregon Department of Transportation and Portland State University evaluated the seismic : vulnerability of state highway bridges in western Oregon. The study used a computer program : called REDARS2 that simulated the damage to bridges within a t...
Distance Learning in a Multimedia Networks Project: Main Results.
ERIC Educational Resources Information Center
Ruokamo, Heli; Pohjolainen, Seppo
2000-01-01
Discusses a goal-oriented project, focused on open learning environments using computer networks, called Distance Learning in Multimedia Networks that was part of the Finnish Multimedia Program. Describes the combined efforts of Finnish telecommunications companies, content providers, publishing houses, hardware companies, and educational…
Virtual Reality in the Classroom.
ERIC Educational Resources Information Center
Pantelidis, Veronica S.
1993-01-01
Considers the concept of virtual reality; reviews its history; describes general uses of virtual reality, including entertainment, medicine, and design applications; discusses classroom uses of virtual reality, including a software program called Virtus WalkThrough for use with a computer monitor; and suggests future possibilities. (34 references)…
The Wireless Student & the Library.
ERIC Educational Resources Information Center
Drew, Bill
2002-01-01
Describes a program at the State University of New York College of Agriculture and Technology at Morrisville (SUNY-Morrisville) developed with IBM called ThinkPad University that integrates computers into the teaching and learning environment. Explains a partnership with Raytheon that provides wireless connectivity; and discusses changes in…
ERIC Educational Resources Information Center
Kean, Sam
2007-01-01
In this article, the author discusses a computer program called Psiphon which bypasses government filters undetected. The University of Toronto's Citizen Lab, a research center for digital media and politics, designed Psiphon for technology-savvy activists. Some technology-savvy activists use other open-source software, like Tor (which relies on…
The University of Wisconsin OAO operating system
NASA Technical Reports Server (NTRS)
Heacox, H. C.; Mcnall, J. F.
1972-01-01
The Wisconsin OAO operating system is presented which consists of two parts: a computer program called HARUSPEX, which makes possible reasonably efficient and convenient operation of the package and ground operations equipment which provides real-time status monitoring, commanding and a quick-look at the data.
Computation of steady nozzle flow by a time-dependent method
NASA Technical Reports Server (NTRS)
Cline, M. C.
1974-01-01
The equations of motion governing steady, inviscid flow are of a mixed type, that is, hyperbolic in the supersonic region and elliptic in the subsonic region. These mathematical difficulties may be removed by using the so-called time-dependent method, where the governing equations become hyperbolic everywhere. The steady-state solution may be obtained as the asymptotic solution for large time. The object of this research was to develop a production type computer program capable of solving converging, converging-diverging, and plug two-dimensional nozzle flows in computational times of 1 min or less on a CDC 6600 computer.
Documenting AUTOGEN and APGEN Model Files
NASA Technical Reports Server (NTRS)
Gladden, Roy E.; Khanampompan, Teerapat; Fisher, Forest W.; DelGuericio, Chris C.
2008-01-01
A computer program called "autogen hypertext map generator" satisfies a need for documenting and assisting in visualization of, and navigation through, model files used in the AUTOGEN and APGEN software mentioned in the two immediately preceding articles. This program parses autogen script files, autogen model files, PERL scripts, and apgen activity-definition files and produces a hypertext map of the files to aid in the navigation of the model. This program also provides a facility for adding notes and descriptions, beyond what is in the source model represented by the hypertext map. Further, this program provides access to a summary of the model through variable, function, sub routine, activity and resource declarations as well as providing full access to the source model and source code. The use of the tool enables easy access to the declarations and the ability to traverse routines and calls while analyzing the model.
Computational tools for fitting the Hill equation to dose-response curves.
Gadagkar, Sudhindra R; Call, Gerald B
2015-01-01
Many biological response curves commonly assume a sigmoidal shape that can be approximated well by means of the 4-parameter nonlinear logistic equation, also called the Hill equation. However, estimation of the Hill equation parameters requires access to commercial software or the ability to write computer code. Here we present two user-friendly and freely available computer programs to fit the Hill equation - a Solver-based Microsoft Excel template and a stand-alone GUI-based "point and click" program, called HEPB. Both computer programs use the iterative method to estimate two of the Hill equation parameters (EC50 and the Hill slope), while constraining the values of the other two parameters (the minimum and maximum asymptotes of the response variable) to fit the Hill equation to the data. In addition, HEPB draws the prediction band at a user-defined confidence level, and determines the EC50 value for each of the limits of this band to give boundary values that help objectively delineate sensitive, normal and resistant responses to the drug being tested. Both programs were tested by analyzing twelve datasets that varied widely in data values, sample size and slope, and were found to yield estimates of the Hill equation parameters that were essentially identical to those provided by commercial software such as GraphPad Prism and nls, the statistical package in the programming language R. The Excel template provides a means to estimate the parameters of the Hill equation and plot the regression line in a familiar Microsoft Office environment. Furthermore, HEPB has the option to simulate 500 response values based on the range of values of the dose variable in the original data and the fit of the Hill equation to that data. Copyright © 2014. Published by Elsevier Inc.
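As a rough illustration of the constrained fit described above (not the authors' Excel template or HEPB), the following Python sketch estimates only EC50 and the Hill slope while holding the minimum and maximum asymptotes fixed; the dose and response values are hypothetical.

    import numpy as np
    from scipy.optimize import curve_fit

    def hill(dose, ec50, slope, bottom=0.0, top=100.0):
        # 4-parameter logistic (Hill) equation with the asymptotes held fixed
        return bottom + (top - bottom) / (1.0 + (ec50 / dose) ** slope)

    dose = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])    # hypothetical doses
    resp = np.array([2.0, 5.0, 18.0, 45.0, 75.0, 92.0, 98.0])  # hypothetical responses

    # Iterative nonlinear least squares for EC50 and slope only; bottom and top
    # stay constrained at 0 and 100 through the default arguments.
    (ec50, slope), _ = curve_fit(hill, dose, resp, p0=[0.3, 1.0])
    print(f"EC50 = {ec50:.3g}, Hill slope = {slope:.3g}")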
Context Switching with Multiple Register Windows: A RISC Performance Study
NASA Technical Reports Server (NTRS)
Konsek, Marion B.; Reed, Daniel A.; Watcharawittayakul, Wittaya
1987-01-01
Although previous studies have shown that a large file of overlapping register windows can greatly reduce procedure call/return overhead, the effects of register windows in a multiprogramming environment are poorly understood. This paper investigates the performance of multiprogrammed, reduced instruction set computers (RISCs) as a function of window management strategy. Using an analytic model that reflects context switch and procedure call overheads, we analyze the performance of simple, linearly self-recursive programs. For more complex programs, we present the results of a simulation study. These studies show that a simple strategy that saves all windows prior to a context switch, but restores only a single window following a context switch, performs near optimally.
NASA Technical Reports Server (NTRS)
DeBaca, Richard C.; Sarkissian, Edwin; Madatyan, Mariyetta; Shepard, Douglas; Gluck, Scott; Apolinski, Mark; McDuffie, James; Tremblay, Dennis
2006-01-01
TES L1B Subsystem is a computer program that performs several functions for the Tropospheric Emission Spectrometer (TES). The term "L1B" (an abbreviation of "level 1B"), refers to data, specific to the TES, on radiometric calibrated spectral radiances and their corresponding noise equivalent spectral radiances (NESRs), plus ancillary geolocation, quality, and engineering data. The functions performed by TES L1B Subsystem include shear analysis, monitoring of signal levels, detection of ice build-up, and phase correction and radiometric and spectral calibration of TES target data. Also, the program computes NESRs for target spectra, writes scientific TES level-1B data to hierarchical- data-format (HDF) files for public distribution, computes brightness temperatures, and quantifies interpixel signal variability for the purpose of first-order cloud and heterogeneous land screening by the level-2 software summarized in the immediately following article. This program uses an in-house-developed algorithm, called "NUSRT," to correct instrument line-shape factors.
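The brightness-temperature step mentioned above can be pictured as an inversion of the Planck function. The sketch below is only illustrative (it is not the TES L1B code); it uses the per-frequency form of the Planck law with radiance in W m^-2 sr^-1 Hz^-1, and the example channel and radiance value are made up.

    import math

    H = 6.62607015e-34   # Planck constant, J s
    K = 1.380649e-23     # Boltzmann constant, J/K
    C = 2.99792458e8     # speed of light, m/s

    def brightness_temperature(radiance, freq_hz):
        # invert B_nu(T) = 2 h nu^3 / c^2 / (exp(h nu / (k T)) - 1) for T
        return (H * freq_hz / K) / math.log(1.0 + 2.0 * H * freq_hz ** 3 / (C ** 2 * radiance))

    nu = 1000.0 * 100.0 * C                       # a channel near 1000 cm^-1, converted to Hz
    print(brightness_temperature(2.0e-12, nu))    # hypothetical radiance -> roughly 272 K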
C-MOS bulk metal design handbook. [LSI standard cell (circuits)
NASA Technical Reports Server (NTRS)
Edge, T. M.
1977-01-01
The LSI standard cell array technique was used in the fabrication of more than 20 CMOS custom arrays. This technique consists of a series of computer programs and design automation techniques referred to as the Computer Aided Design And Test (CADAT) system that automatically translate a partitioned logic diagram into a set of instructions for driving an automatic plotter which generates precision mask artwork for complex LSI arrays of CMOS standard cells. The standard cell concept for producing LSI arrays begins with the design, layout, and validation of a group of custom circuits called standard cells. Once validated, these cells are given identification or pattern numbers and are permanently stored. To use one of these cells in a logic design, the user calls for the desired cell by pattern number. The Place, Route in Two Dimension (PR2D) computer program is then used to automatically generate the metalization and/or tunnels to interconnect the standard cells into the required function. Data sheets that describe the function, artwork, and performance of each of the standard cells, the general procedure for implementation of logic in CMOS standard cells, and additional detailed design information are presented.
On Undecidability Aspects of Resilient Computations and Implications to Exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S
2014-01-01
Future Exascale computing systems with a large number of processors, memory elements and interconnection links, are expected to experience multiple, complex faults, which affect both applications and operating-runtime systems. A variety of algorithms, frameworks and tools are being proposed to realize and/or verify the resilience properties of computations that guarantee correct results on failure-prone computing systems. We analytically show that certain resilient computation problems in presence of general classes of faults are undecidable, that is, no algorithms exist for solving them. We first show that the membership verification in a generic set of resilient computations is undecidable. We describe classes of faults that can create infinite loops or non-halting computations, whose detection in general is undecidable. We then show certain resilient computation problems to be undecidable by using reductions from the loop detection and halting problems under two formulations, namely, an abstract programming language and Turing machines, respectively. These two reductions highlight different failure effects: the former represents program and data corruption, and the latter illustrates incorrect program execution. These results call for broad-based, well-characterized resilience approaches that complement purely computational solutions using methods such as hardware monitors, co-designs, and system- and application-specific diagnosis codes.
WebChem Viewer: a tool for the easy dissemination of chemical and structural data sets
2014-01-01
Background Sharing sets of chemical data (e.g., chemical properties, docking scores, etc.) among collaborators with diverse skill sets is a common task in computer-aided drug design and medicinal chemistry. The ability to associate this data with images of the relevant molecular structures greatly facilitates scientific communication. There is a need for a simple, free, open-source program that can automatically export aggregated reports of entire chemical data sets to files viewable on any computer, regardless of the operating system and without requiring the installation of additional software. Results We here present a program called WebChem Viewer that automatically generates these types of highly portable reports. Furthermore, in designing WebChem Viewer we have also created a useful online web application for remotely generating molecular structures from SMILES strings. We encourage the direct use of this online application as well as its incorporation into other software packages. Conclusions With these features, WebChem Viewer enables interdisciplinary collaborations that require the sharing and visualization of small molecule structures and associated sets of heterogeneous chemical data. The program is released under the FreeBSD license and can be downloaded from http://nbcr.ucsd.edu/WebChemViewer. The associated web application (called “Smiley2png 1.0”) can be accessed through freely available web services provided by the National Biomedical Computation Resource at http://nbcr.ucsd.edu. PMID:24886360
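For readers who want the same idea locally rather than through the paper's Smiley2png web service, a structure image can be generated from a SMILES string with the open-source RDKit toolkit. The sketch below assumes RDKit is installed and is not part of WebChem Viewer itself.

    from rdkit import Chem
    from rdkit.Chem import Draw

    smiles = "CC(=O)Oc1ccccc1C(=O)O"        # aspirin, as an example input
    mol = Chem.MolFromSmiles(smiles)        # parse the SMILES string
    if mol is None:
        raise ValueError("could not parse SMILES string")
    Draw.MolToFile(mol, "structure.png", size=(300, 300))   # write a PNG depiction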
Ybarra, Michele; Biringi, Ruth; Prescott, Tonya; Bull, Sheana S.
2012-01-01
Use of the Internet is growing in Sub-Saharan Africa. Evidence of computer and Internet effectiveness for reduction in risk behaviors associated with HIV shown in U.S. settings has yet to be replicated in Africa. We describe the development, usability and navigability testing of an Internet-based HIV prevention program for secondary school students in Uganda, called CyberSenga. For this work, we used four data collection activities, including observation of (a) computer skills and (b) navigation, (c) focus group discussions, and (d) field assessments to document comprehension and usability of program content. We document limited skills among students, but youth with basic computer skills were able to navigate the program after instruction. Youth were most interested in activities with more interaction. Field-testing illustrated the importance of using a stand-alone electrical source during program delivery. This work suggests delivery of Internet-based health promotion content in Africa requires attention to user preparedness and literacy, bandwidth, Internet connection, and electricity. PMID:22918136
A flexible tool for diagnosing water, energy, and entropy budgets in climate models
NASA Astrophysics Data System (ADS)
Lembo, Valerio; Lucarini, Valerio
2017-04-01
We have developed a new flexible software package for studying the global energy budget, the hydrological cycle, and the material entropy production of global climate models. The program receives as input radiative, latent, and sensible energy fluxes, with the requirement that the variable names are in agreement with the Climate and Forecast (CF) conventions for the production of NetCDF datasets. Annual mean maps, meridional sections, and time series are computed by means of the Climate Data Operators (CDO) collection of command-line operators developed at the Max Planck Institute for Meteorology (MPI-M). If a land-sea mask is provided, the program also computes the required quantities separately over the continents and the oceans. Depending on the user's choice, the program also calls MATLAB to compute meridional heat transports and the locations and intensities of their peaks in the two hemispheres. We are currently planning to adapt the program so that it can be included in the Earth System Model eValuation Tool (ESMValTool) community diagnostics.
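The CDO-based reductions described above can be driven from a script; the sketch below shows the general pattern with standard CDO operators (timmean, zonmean, fldmean) called through Python. The NetCDF file names are hypothetical and this is not the authors' tool.

    import subprocess

    infile = "net_energy_flux.nc"   # hypothetical CF-compliant input file

    # time-mean map, zonal-mean meridional section, and global-mean time series
    subprocess.run(["cdo", "timmean", infile, "flux_timmean.nc"], check=True)
    subprocess.run(["cdo", "zonmean", infile, "flux_zonmean.nc"], check=True)
    subprocess.run(["cdo", "fldmean", infile, "flux_fldmean.nc"], check=True)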
Reinforced Concrete Wall Form Design Program
1992-08-01
criteria is an absolute limit. You have the choice of 1/8 or 1/16 of an inch total deflection in a span. Once these limits are set here, then they are... Calls GET-INFO-TEXT - Calls ZERO-PLY - If the response to GET-INFO-TEXT is "Values retrieved by computer", then the following procedures are executed... like to enter their own values. ZERO-PLY - Re-initializes all PLY-VEC values to "?". GET-PLY-CLASS - Retrieves from the user the grade of plyform to be
Digitizing for Computer-Aided Finite Element Model Generation.
1979-10-10
this approach is a collection of programs developed over the last eight years at the University of Arizona, and called the GIFTS system. This paper... briefly describes the latest version of the system, GIFTS-5, and demonstrates its suitability in a design environment by simple examples. The programs... constituting the GIFTS system were used as a tool for research in many areas, including mesh generation, finite element data base design, interactive
Learning with a Missing Sense: What Can We Learn from the Interaction of a Deaf Child with a Turtle?
ERIC Educational Resources Information Center
Miller, Paul
2009-01-01
This case study reports on the progress of Navon, a 13-year-old boy with prelingual deafness, over a 3-month period following exposure to Logo, a computer programming language that visualizes specific programming commands by means of a virtual drawing tool called the Turtle. Despite an almost complete lack of skills in spoken and sign language,…
ERIC Educational Resources Information Center
Hegelheimer, Volker; Reppert, Ketty; Broberg, Megan; Daisy, Brenda; Grgurovic, Maja; Middlebrooks, Katy; Liu, Sammi
2004-01-01
As more and more teacher preparation programs realize the need to include courses that deal with computer-assisted language learning, a crucial decision as to what is taught needs to be made, taking into consideration the various post-graduation goals ranging from teacher or teacher-trainer to researcher. Thus, the question of whether to go beyond…
Graphics and Flow Visualization of Computer Generated Flow Fields
NASA Technical Reports Server (NTRS)
Kathong, M.; Tiwari, S. N.
1987-01-01
Flow field variables are visualized using color representations described on surfaces that are interpolated from computational grids and transformed to digital images. Techniques for displaying two and three dimensional flow field solutions are addressed. The transformations and the use of an interactive graphics program for CFD flow field solutions, called PLOT3D, which runs on the color graphics IRIS workstation are described. An overview of the IRIS workstation is also described.
A high-fidelity N-body ephemeris generator for satellites in Earth orbit
NASA Astrophysics Data System (ADS)
Simmons, David R.
1991-10-01
A program called the Analytic Satellite Ephemeris Program (ASEP) is currently used for mission planning; it produces projected data for orbits that remain fairly close to Earth. Lunar and solar perturbations are taken into account in another program called GRAVE. This project is a revision of GRAVE which incorporates more flexible means of input for initial data, provides additional kinds of output information, and makes use of structured programming techniques to make the program more understandable and reliable. The computer program ORBIT was tested against tracking data for the first 313 days of operation of the CRRES satellite. A sample graph is given comparing the semi-major axis calculated by the program with the values supplied by NORAD. When calculated for points at which CRRES passes through the ascending node, the argument of perigee, the right ascension of the ascending node, and the mean anomaly all stay within about a degree of the corresponding values from NORAD; the inclination of the orbital plane is much closer. The program value of the eccentricity is in error by no more than 0.0002.
Speeding up parallel processing
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1988-01-01
In 1967 Amdahl expressed doubts about the ultimate utility of multiprocessors. The formulation, now called Amdahl's law, became part of the computing folklore and has inspired much skepticism about the ability of the current generation of massively parallel processors to efficiently deliver all their computing power to programs. The widely publicized recent results of a group at Sandia National Laboratory, which showed speedup on a 1024 node hypercube of over 500 for three fixed size problems and over 1000 for three scalable problems, have convincingly challenged this bit of folklore and have given new impetus to parallel scientific computing.
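The contrast behind this episode is captured by two simple formulas: Amdahl's fixed-size speedup S = 1/((1-p) + p/N) and Gustafson's scaled speedup S = N - (1-p)(N-1), where p is the parallel fraction. The short sketch below evaluates both for a 1024-node machine with illustrative values of p; the value p = 0.999 happens to reproduce the order of magnitude of the reported results.

    def amdahl(p, n):
        # fixed problem size: S = 1 / ((1 - p) + p / N)
        return 1.0 / ((1.0 - p) + p / n)

    def gustafson(p, n):
        # scaled problem size: S = N - (1 - p) * (N - 1)
        return n - (1.0 - p) * (n - 1)

    n = 1024
    for p in (0.99, 0.999):
        print(f"p = {p}: Amdahl {amdahl(p, n):6.1f}, Gustafson {gustafson(p, n):7.1f}")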
NASA Technical Reports Server (NTRS)
Yanosy, James L.
1988-01-01
Emulation/Simulation Computer Model (ESCM) computes the transient performance of a Space Station air revitalization subsystem with carbon dioxide removal provided by a solid amine water desorbed subsystem called SAWD. This manual describes the mathematical modeling and equations used in the ESCM. For the system as a whole and for each individual component, the fundamental physical and chemical laws which govern their operations are presented. Assumptions are stated, and when necessary, data is presented to support empirically developed relationships.
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. House Committee on Science, Space and Technology.
This hearing focused on H. R. 656, companion bill of S. 272, which calls for high performance computing legislation. This is one of several initiatives to provide for a coordinated federal research program to ensure continued U.S. leadership in high performance computing. The bill authorizes the development of a National Research and Education…
NASA Astrophysics Data System (ADS)
Jamie, Majid
2016-11-01
Singh and Mogi (2003) presented a forward modeling (FWD) program, coded in FORTRAN 77 and called "EMLCLLER", which computes the frequency-domain electromagnetic (EM) response of a large circular loop, in terms of the vertical magnetic component (Hz), over 1D layered earth models; computations in this program can be performed for variable transmitter-receiver configurations and incorporate both conduction and displacement currents. Integral equations in this program are evaluated through digital linear filters based on the Hankel transforms, together with analytic solutions based on hypergeometric functions. Despite the capabilities of EMLCLLER, there are mistakes in this program that make its FWD results unreliable. The mistakes in EMLCLLER arise from using an incorrect algorithm for computing the reflection coefficient of the EM wave in TE mode (rTE), and from flawed algorithms for computing the phase and normalized phase values of Hz; in this paper the corrected forms of these algorithms are presented. Moreover, to illustrate how these mistakes affect FWD results, EMLCLLER and the corrected version of the program presented in this paper, titled "EMLCLLER_Corr", are run on different two- and three-layered earth models; their FWD results in terms of the real and imaginary parts of Hz, its normalized amplitude, and the corresponding normalized phase curves are then plotted versus frequency and compared with each other. In addition, Singh and Mogi (2003) presented extra derivations for computing the radial component of the magnetic field (Hr) and the angular component of the electric field (Eϕ), where the numerical solution presented for Hr is incorrect; the correct numerical solution for this derivation is also presented in this paper.
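For orientation only, the quantity at issue (rTE for a stack of layers) is commonly computed with a recursive surface-admittance scheme of the kind given in standard EM texts such as Ward and Hohmann. The Python sketch below follows that generic formulation with displacement currents included; it is not the authors' corrected EMLCLLER code, and the example layer parameters are made up.

    import numpy as np

    MU0, EPS0 = 4e-7 * np.pi, 8.854187817e-12

    def rte(lam, freq, sigmas, epsrs, thicknesses):
        # lam: Hankel-transform variable; sigmas/epsrs: one entry per layer, last = half-space;
        # thicknesses: finite layers only (one fewer entry than sigmas)
        w = 2.0 * np.pi * freq
        def u(sig, epr):   # vertical wavenumber in a layer
            return np.sqrt(lam ** 2 + 1j * w * MU0 * sig - w ** 2 * MU0 * EPS0 * epr)
        def Y(sig, epr):   # intrinsic TE admittance of a layer
            return u(sig, epr) / (1j * w * MU0)

        Yhat = Y(sigmas[-1], epsrs[-1])                      # basement half-space
        for sig, epr, h in zip(sigmas[-2::-1], epsrs[-2::-1], thicknesses[::-1]):
            Yn, t = Y(sig, epr), np.tanh(u(sig, epr) * h)
            Yhat = Yn * (Yhat + Yn * t) / (Yn + Yhat * t)    # recurse upward through the stack
        Y0 = Y(0.0, 1.0)                                     # free-space (air) admittance
        return (Y0 - Yhat) / (Y0 + Yhat)

    # two-layer example: 100 ohm-m layer, 50 m thick, over a 10 ohm-m half-space
    print(rte(lam=1e-3, freq=1e3, sigmas=[0.01, 0.1], epsrs=[1.0, 1.0], thicknesses=[50.0]))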
Basinsoft, a computer program to quantify drainage basin characteristics
Harvey, Craig A.; Eash, David A.
2001-01-01
In 1988, the USGS began developing a program called Basinsoft. The initial program quantified 16 selected drainage-basin characteristics from three source-data layers that were manually digitized from topographic maps using the versions of ARC/INFO, Fortran programs, and Prime system Command Programming Language (CPL) programs available in 1988 (Majure and Soenksen, 1991). By 1991, Basinsoft was enhanced to quantify 27 selected drainage-basin characteristics from three source-data layers automatically generated from digital elevation model (DEM) data using a set of Fortran programs (Majure and Eash, 1991; Jenson and Dominique, 1988). Due to edge-matching problems encountered in 1991 with the preprocessing
A microcomputer model for simulating pressurized flow in a storm sewer system : final report.
DOT National Transportation Integrated Search
1989-01-01
A review was made of several computer programs capable of simulating sewer flows under surcharge or pressurized flow conditions. A modified version of the EXTRAN module of the SWMM model, called PFSM, was developed and attached to the FHWA Pooled Fun...
COMPUTER PROGRAM DOCUMENTATION FOR THE ENHANCED STREAM WATER QUALITY MODEL QUAL2E
Presented in the manual are recent modifications and improvements to the widely used stream water quality model QUAL-II. Called QUAL2E, the enhanced model incorporates improvements in eight areas: (1) algal, nitrogen, phosphorus, and dissolved oxygen interactions; (2) algal growt...
More than Spinning Their Wheels
ERIC Educational Resources Information Center
Cassola, Joel
2007-01-01
Last fall, when Mastercam, the leading manufacturer of computer-aided manufacturing (CAM) software, announced the winners of its Innovators of the Future (IOF) contest, first, second and third prizes went to students in the advanced manufacturing program of Vincennes University's (VU's) Machine Trades Technology Department. The contest called for…
NASA Technical Reports Server (NTRS)
1979-01-01
The machinery pictured is a set of Turbodyne steam turbines which power a sugar mill at Bell Glade, Florida. A NASA-developed computer program called NASTRAN aided development of these and other turbines manufactured by Turbodyne Corporation's Steam Turbine Division, Wellsville, New York. An acronym for NASA Structural Analysis Program, NASTRAN is a predictive tool which advises development teams how a structural design will perform under service use conditions. Turbodyne uses NASTRAN to analyze the dynamic behavior of steam turbine components, achieving substantial savings in development costs. One of the most widely used spinoffs, NASTRAN is made available to private industry through NASA's Computer Software Management Information Center (COSMIC) at the University of Georgia.
Recursive partitioned inversion of large (1500 x 1500) symmetric matrices
NASA Technical Reports Server (NTRS)
Putney, B. H.; Brownd, J. E.; Gomez, R. A.
1976-01-01
A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
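The partitioning idea can be sketched with the standard Schur-complement identity for a symmetric positive definite matrix: only one block needs to be factored at a time, and the complement can itself be partitioned recursively. The toy Python code below is purely illustrative; it is not the SOLVE program and keeps everything in memory for clarity.

    import numpy as np

    def partitioned_inverse(m, k):
        # invert symmetric positive definite m using a k x k leading-block partition
        a, b, c = m[:k, :k], m[:k, k:], m[k:, k:]
        a_inv = np.linalg.inv(a)                 # small leading block
        s = c - b.T @ a_inv @ b                  # Schur complement of A in M
        s_inv = np.linalg.inv(s)                 # could itself be partitioned recursively
        top_left = a_inv + a_inv @ b @ s_inv @ b.T @ a_inv
        top_right = -a_inv @ b @ s_inv
        return np.block([[top_left, top_right], [top_right.T, s_inv]])

    # check against the direct inverse on a small random SPD test matrix
    rng = np.random.default_rng(0)
    x = rng.standard_normal((6, 6))
    m = x @ x.T + 6 * np.eye(6)
    print(np.allclose(partitioned_inverse(m, 3), np.linalg.inv(m)))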
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram
Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.
NASA Technical Reports Server (NTRS)
Kolb, Mark A.
1990-01-01
Originally, computer programs for engineering design focused on detailed geometric design. Later, computer programs for algorithmically performing the preliminary design of specific well-defined classes of objects became commonplace. However, due to the need for extreme flexibility, it appears unlikely that conventional programming techniques will prove fruitful in developing computer aids for engineering conceptual design. The use of symbolic processing techniques, such as object-oriented programming and constraint propagation, facilitates such flexibility. Object-oriented programming allows programs to be organized around the objects and behavior to be simulated, rather than around fixed sequences of function and subroutine calls. Constraint propagation allows declarative statements to be understood as designating multi-directional mathematical relationships among all the variables of an equation, rather than as unidirectional assignments to the variable on the left-hand side of the equation, as in conventional computer programs. The research has concentrated on applying these two techniques to the development of a general-purpose computer aid for engineering conceptual design. Object-oriented programming techniques are utilized to implement a user-extensible database of design components. The mathematical relationships which model both the geometry and physics of these components are managed via constraint propagation. In addition to this component-based hierarchy, special-purpose data structures are provided for describing component interactions and supporting state-dependent parameters. In order to investigate the utility of this approach, a number of sample design problems from the field of aerospace engineering were implemented using the prototype design tool, Rubber Airplane. The additional level of organizational structure obtained by representing design knowledge in terms of components is observed to provide greater convenience to the program user, and to result in a database of engineering information which is easier both to maintain and to extend.
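A minimal sketch of the multi-directional constraint idea described above (not Rubber Airplane itself): a single declarative relation, area = span * chord, that fills in whichever variable is currently unknown instead of assigning in one fixed direction.

    class ProductConstraint:
        # maintains values[product] = values[a] * values[b], solving for whichever is None
        def __init__(self, values, product, a, b):
            self.values, self.product, self.a, self.b = values, product, a, b

        def propagate(self):
            v, p, a, b = self.values, self.product, self.a, self.b
            if v[a] is not None and v[b] is not None:
                v[p] = v[a] * v[b]
            elif v[p] is not None and v[a] is not None:
                v[b] = v[p] / v[a]
            elif v[p] is not None and v[b] is not None:
                v[a] = v[p] / v[b]

    # wing-sizing example: given area and span, the chord falls out of the same relation
    values = {"area": 20.0, "span": 10.0, "chord": None}
    ProductConstraint(values, "area", "span", "chord").propagate()
    print(values["chord"])   # -> 2.0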
Evaluation of Infrared Target Discrimination Algorithms.
1983-04-01
application of this work is embodied in a computer program called PALANTIR, which Ref. 2 also describes in some detail. From a given set of narrow-band spectral channels PALANTIR chooses a prescribed number of channels, picking those that will provide the least error when used in connection with a minimum
Why Save Your Course as a Relational Database?
ERIC Educational Resources Information Center
Hamilton, Gregory C.; Katz, David L.; Davis, James E.
2000-01-01
Describes a system that stores course materials for computer-based training programs in a relational database called Of Course! Outlines the basic structure of the databases; explains distinctions between Of Course! and other authoring languages; and describes how data is retrieved from the database and presented to the student. (Author/LRW)
Designing, Developing, and Implementing a Course on LEGO Robotics for Technology Teacher Education
ERIC Educational Resources Information Center
Chambers, Joan M.; Carbonaro, Mike
2003-01-01
Within a constructivist philosophy of learning, teachers, as students, are introduced to different perspectives of teaching with robotic technology while immersed in what Papert called a "constructionist" environment. Robotics allows students to creatively explore computer programming, mechanical design and construction, problem solving,…
Object-Oriented Algorithm For Evaluation Of Fault Trees
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Koen, B. V.
1992-01-01
Algorithm for direct evaluation of fault trees incorporates techniques of object-oriented programming. Reduces number of calls needed to solve trees with repeated events. Provides significantly improved software environment for such computations as quantitative analyses of safety and reliability of complicated systems of equipment (e.g., spacecraft or factories).
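A hedged, minimal illustration of the object-oriented flavor of such an evaluator (not the referenced algorithm): gate and event objects evaluate themselves, and a shared basic-event object caches its value so a repeated event is not re-evaluated. The simple product formulas below assume independent inputs and do not resolve the statistical dependence that repeated events introduce, which the actual algorithm is designed to handle.

    from functools import reduce

    class BasicEvent:
        def __init__(self, probability):
            self.probability, self._cache = probability, None
        def value(self):
            if self._cache is None:          # evaluated once, even if shared by several gates
                self._cache = self.probability
            return self._cache

    class AndGate:
        def __init__(self, *inputs): self.inputs = inputs
        def value(self):                     # independence assumed
            return reduce(lambda acc, i: acc * i.value(), self.inputs, 1.0)

    class OrGate:
        def __init__(self, *inputs): self.inputs = inputs
        def value(self):                     # independence assumed
            return 1.0 - reduce(lambda acc, i: acc * (1.0 - i.value()), self.inputs, 1.0)

    # the repeated event e2 feeds both branches but its object is evaluated only once
    e1, e2, e3 = BasicEvent(0.01), BasicEvent(0.02), BasicEvent(0.05)
    print(OrGate(AndGate(e1, e2), AndGate(e2, e3)).value())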
Software Engineering Basics: A Primer for the Project Manager.
1982-06-01
computer software [45, 46]. It is named after Ada Augusta, who is generally credited as having been the first programmer as an assistant to Charles Babbage, and is called, appropriately enough, ADA. The development of one common programming language for tactical software clearly has the potential for
Challenge '89: Interfacing of Chemical Instruments to Computers.
ERIC Educational Resources Information Center
Lyons, Jim; Lamarre, Colin
This project involved interfacing of microcomputers with three chemical instruments--Nuclear Magnetic Resonance (NMR), Infrared Spectroscopy (IR), and the spectrophotometer. A Pascal program called "Spectrum" allows data from the NMR to be read and graphed, a specific area of the graph zoomed, ratios of specified areas of the graph…
de Castro Lacaze, Denise Helena; Sacco, Isabel de C. N.; Rocha, Lys Esther; de Bragança Pereira, Carlos Alberto; Casarotto, Raquel Aparecida
2010-01-01
AIM: We sought to evaluate musculoskeletal discomfort and mental and physical fatigue in the call-center workers of an airline company before and after a supervised exercise program compared with rest breaks during the work shift. INTRODUCTION: This was a longitudinal pilot study conducted in a flight-booking call-center for an airline in São Paulo, Brazil. Occupational health activities are recommended to decrease the negative effects of the call-center working conditions. In practice, exercise programs are commonly recommended for computer workers, but their effects have not been studied in call-center operators. METHODS: Sixty-four call-center operators participated in this study. Thirty-two subjects were placed into the experimental group and attended a 10-min daily exercise session for 2 months. Conversely, 32 participants were placed into the control group and took a 10-min daily rest break during the same period. Each subject was evaluated once a week by means of the Corlett-Bishop body map with a visual analog discomfort scale and the Chalder fatigue questionnaire. RESULTS: Musculoskeletal discomfort decreased in both groups, but the reduction was only statistically significant for the spine and buttocks (p=0.04) and the sum of the segments (p=0.01) in the experimental group. In addition, the experimental group showed significant differences in the level of mental fatigue, especially in questions related to memory and tiredness (p=0.001). CONCLUSIONS: Our preliminary results demonstrate that appropriately designed and supervised exercise programs may be more efficient than rest breaks in decreasing discomfort and fatigue levels in call-center operators. PMID:20668622
Computer code for charge-exchange plasma propagation
NASA Technical Reports Server (NTRS)
Robinson, R. S.; Kaufman, H. R.
1981-01-01
The propagation of the charge-exchange plasma from an electrostatic ion thruster is crucial in determining the interaction of that plasma with the associated spacecraft. A model that describes this plasma and its propagation is described, together with a computer code based on this model. The structure and calling sequence of the code, named PLASIM, is described. An explanation of the program's input and output is included, together with samples of both. The code is written in ANSI Standard FORTRAN.
Labacher, Lukas; Mitchell, Claudia
2013-01-01
Young adults often lack access to confidential, long-lasting, and nonjudgmental interactions with sexual health professionals at brick-and-mortar clinics. To ensure that patients return for their STI test results, post-result counseling, and STI-related information, computer-mediated health intervention programming allows them to receive sexual health information through onsite computers, the Internet, and mobile phone calls and text messages. To determine whether young adults (age: M = 21 years) prefer to communicate with health professionals about the status of their sexual health through computer-mediated communication devices, 303 second-year university students (183 from an urban North American university and 120 from a periurban university in South Africa) completed a paper-based survey indicating how they prefer to communicate with doctors and nurses: talking face to face, mobile phone call, text message, Internet chat programs, Facebook, Twitter, or e-mail. Nearly all students, and female students in South Africa in particular, prefer to receive their STI test results, post-results counseling, and STI-related information by talking face to face with doctors and nurses rather than communicating through computers or mobile phones. Results are clarified in relation to gender, availability of various technologies, and prevalence of HIV in Canada and in South Africa.
An automated device for provoking and capturing wildlife calls
Ausband, David E.; Skrivseth, Jesse; Mitchell, Michael S.
2011-01-01
Some animals exhibit call-and-response behaviors that can be exploited to facilitate detection. Traditionally, acoustic surveys that use call-and-respond techniques have required an observer's presence to perform the broadcast, record the response, or both events. This can be labor-intensive and may influence animal behavior and, thus, survey results. We developed an automated acoustic survey device using commercially available hardware (e.g., laptop computer, speaker, microphone) and an author-created (JS) software program ("HOOT") that can be used to survey for any animal that calls. We tested this device to determine 1) deployment longevity, 2) effective sampling area, and 3) ability to detect known packs of gray wolves (Canis lupus) in Idaho, USA. Our device was able to broadcast and record twice daily for 6–7 days using the internal computer battery and surveyed an area of 3.3–17.5 km2 in relatively open habitat depending on the hardware components used. We surveyed for wolves at 2 active rendezvous sites used by closely monitored, radiocollared wolf packs and obtained 4 responses across both packs over 3 days of sampling. We confirmed reproduction in these 2 packs by detecting pup howls aurally from the resulting device recordings. Our device can broadcast and record animal calls and the computer software is freely downloadable. This automated survey device can be used to collect reliable data while reducing the labor costs traditionally associated with acoustic surveys.
1989-01-01
access. An example of a Trojan horse was one that affected many Macintosh users in 1987. The program called "Sexy Ladies" deleted files as the... be malicious, just the disruption and freezing of the system would be enough to send a panic throughout the financial world. Gold prices would soar... "Protection Products," Computers and Security, Apr 88, p. 159. Neil Rubenking, "Antivirus Programs Fight Data Loss," PC Magazine (First Look), 28 Jun
Improving Demonstration Using Better Interaction Techniques
1997-01-14
Programming by demonstration (PBD) can be used to create tools and methods that eliminate the need to learn difficult computer languages. Gamut is a...do this, Gamut uses advanced interaction techniques that make it easier for a software author to express all needed aspects of one’s program. These...techniques include a simplified way to demonstrate new examples, called nudges, and a way to highlight objects to show they are important. Also, Gamut
Helicopter In-Flight Monitoring System Second Generation (HIMS II).
1983-08-01
acquisition cycle. B. Computer Chassis: CPU (DEC LSI-11/2) -- executes instructions contained in the memory. 32K memory (DEC MSV11-DD) -- contains program... when the operator executes command #2, 3, or 5 (display data). New cartridges can be inserted as required for truly unlimited, continuous data... is called bootstrapping. The software, which is stored on a tape cartridge, is loaded into memory by execution of a small program stored in read-only
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunn, B. D.; Diamond, S. C.; Bennett, G. A.
1977-10-01
A set of computer programs, called Cal-ERDA, is described that is capable of rapid and detailed analysis of energy consumption in buildings. A new user-oriented input language, named the Building Design Language (BDL), has been written to allow simplified manipulation of the many variables used to describe a building and its operation. This manual provides the user with information necessary to understand in detail the Cal-ERDA set of computer programs. The new computer programs described include: an EXECUTIVE Processor to create computer system control commands; a BDL Processor to analyze input instructions, execute computer system control commands, perform assignments and data retrieval, and control the operation of the LOADS, SYSTEMS, PLANT, ECONOMICS, and REPORT programs; a LOADS analysis program that calculates peak (design) zone and hourly loads and the effect of the ambient weather conditions, the internal occupancy, lighting, and equipment within the building, as well as variations in the size, location, orientation, construction, walls, roofs, floors, fenestrations, attachments (awnings, balconies), and shape of a building; and a Heating, Ventilating, and Air-Conditioning (HVAC) SYSTEMS analysis program capable of modeling the operation of HVAC components, including fans, coils, economizers, and humidifiers, in 16 standard configurations operated according to various temperature and humidity control schedules. A PLANT equipment program models the operation of boilers, chillers, electrical generation equipment (diesel or turbines), heat storage apparatus (chilled or heated water), and solar heating and/or cooling systems. An ECONOMICS analysis program calculates life-cycle costs. A REPORT program produces tables of user-selected variables and arranges them according to user-specified formats. A set of WEATHER ANALYSIS programs manipulates, summarizes, and plots weather data. Libraries of weather data, schedule data, and building data were prepared.
Software for Automated Reading of STEP Files by I-DEAS(trademark)
NASA Technical Reports Server (NTRS)
Pinedo, John
2003-01-01
A program called "readstep" enables the I-DEAS(tm) computer-aided-design (CAD) software to automatically read Standard for the Exchange of Product Model Data (STEP) files. (The STEP format is one of several used to transfer data between dissimilar CAD programs.) Prior to the development of "readstep," it was necessary to read STEP files into I-DEAS(tm) one at a time in a slow process that required repeated intervention by the user. In operation, "readstep" prompts the user for the location of the desired STEP files and the names of the I-DEAS(tm) project and model file, then generates an I-DEAS(tm) program file called "readstep.prg" and two Unix shell programs called "runner" and "controller." The program "runner" runs I-DEAS(tm) sessions that execute readstep.prg, while "controller" controls the execution of "runner" and edits readstep.prg if necessary. The user sets "runner" and "controller" into execution simultaneously, and then no further intervention by the user is required. When "runner" has finished, the user should see only parts from successfully read STEP files present in the model file. STEP files that could not be read successfully (e.g., because of format errors) should be regenerated before attempting to read them again.
Expertise transfer for expert system design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boose, J.H.
This book is about the Expertise Transfer System - a computer program which interviews experts and helps them build expert systems, i.e., computer programs that use knowledge from experts to make decisions and judgements under conditions of uncertainty. The techniques are useful to anyone who uses decision-making information based on the expertise of others. The methods can also be applied to personal decision-making. The interviewing methodology is borrowed from a branch of psychology called Personal Construct Theory. It is not necessary to use a computer to take advantage of the techniques from Personal Construct Theory; the fundamental procedures used by the Expertise Transfer System can be performed using paper and pencil. It is not necessary that the reader understand very much about computers to understand the ideas in this book. The few relevant concepts from computer science and expert systems that are needed are explained in a straightforward manner. Ideas from Personal Construct Psychology are also introduced as needed.
NASA Technical Reports Server (NTRS)
Blackburn, C. L.; Dovi, A. R.; Kurtze, W. L.; Storaasli, O. O.
1981-01-01
A computer software system for the processing and integration of engineering data and programs, called IPAD (Integrated Programs for Aerospace-Vehicle Design), is described. The ability of the system to relieve the engineer of the mundane task of input data preparation is demonstrated by the application of a prototype system to the design, analysis, and/or machining of three simple structures. Future work to further enhance the system's automated data handling and ability to handle larger and more varied design problems are also presented.
Towards optimizing server performance in an educational MMORPG for teaching computer programming
NASA Astrophysics Data System (ADS)
Malliarakis, Christos; Satratzemi, Maya; Xinogalos, Stelios
2013-10-01
Web-based games have become significantly popular during the last few years. This is due to the gradual increase in internet speed, which has led to the ongoing development of multiplayer games and, more importantly, the emergence of the Massive Multiplayer Online Role Playing Games (MMORPG) field. In parallel, similar technologies called educational games have started to be developed for use in various educational contexts, resulting in the field of Game Based Learning. However, these technologies require significant amounts of resources, such as bandwidth, RAM, and CPU capacity. These amounts may be even larger in an educational MMORPG game that supports computer programming education, due to the usual inclusion of a compiler and the constant client/server data transmissions that occur during program coding, possibly leading to technical issues that could cause malfunctions during learning. Thus, determining the elements that affect the overall resource load of the game is essential so that server administrators can configure them and ensure the educational game's proper operation during computer programming education. In this paper, we propose a new methodology with which we can achieve monitoring and optimization of the load balancing, so that the essential resources for the creation and proper execution of an educational MMORPG for computer programming can be foreseen and bestowed without overloading the system.
A comparison of acoustic montoring methods for common anurans of the northeastern United States
Brauer, Corinne; Donovan, Therese; Mickey, Ruth M.; Katz, Jonathan; Mitchell, Brian R.
2016-01-01
Many anuran monitoring programs now include autonomous recording units (ARUs). These devices collect audio data for extended periods of time with little maintenance and at sites where traditional call surveys might be difficult. Additionally, computer software programs have grown increasingly accurate at automatically identifying the calls of species. However, increased automation may cause increased error. We collected 435 min of audio data with 2 types of ARUs at 10 wetland sites in Vermont and New York, USA, from 1 May to 1 July 2010. For each minute, we determined presence or absence of 4 anuran species (Hyla versicolor, Pseudacris crucifer, Anaxyrus americanus, and Lithobates clamitans) using 1) traditional human identification versus 2) computer-mediated identification with software package, Song Scope® (Wildlife Acoustics, Concord, MA). Detections were compared with a data set consisting of verified calls in order to quantify false positive, false negative, true positive, and true negative rates. Multinomial logistic regression analysis revealed a strong (P < 0.001) 3-way interaction between the ARU recorder type, identification method, and focal species, as well as a trend in the main effect of rain (P = 0.059). Overall, human surveyors had the lowest total error rate (<2%) compared with 18–31% total errors with automated methods. Total error rates varied by species, ranging from 4% for A. americanus to 26% for L. clamitans. The presence of rain may reduce false negative rates. For survey minutes where anurans were known to be calling, the odds of a false negative were increased when fewer individuals of the same species were calling.
NASA Technical Reports Server (NTRS)
Raju, I. S.
1992-01-01
A computer program that generates three-dimensional (3D) finite element models for cracked 3D solids was written. This computer program, gensurf, uses minimal input data to generate 3D finite element models for isotropic solids with elliptic or part-elliptic cracks. These models can be used with a 3D finite element program called surf3d. This report documents this mesh generator. In this manual the capabilities, limitations, and organization of gensurf are described. The procedures used to develop 3D finite element models and the input for and the output of gensurf are explained. Several examples are included to illustrate the use of this program. Several input data files are included with this manual so that the users can edit these files to conform to their crack configuration and use them with gensurf.
User-Defined Data Distributions in High-Level Programming Languages
NASA Technical Reports Server (NTRS)
Diaconescu, Roxana E.; Zima, Hans P.
2006-01-01
One of the characteristic features of today's high performance computing systems is a physically distributed memory. Efficient management of locality is essential for meeting key performance requirements for these architectures. The standard technique for dealing with this issue has involved the extension of traditional sequential programming languages with explicit message passing, in the context of a processor-centric view of parallel computation. This has resulted in complex and error-prone assembly-style codes in which algorithms and communication are inextricably interwoven. This paper presents a high-level approach to the design and implementation of data distributions. Our work is motivated by the need to improve the current parallel programming methodology by introducing a paradigm supporting the development of efficient and reusable parallel code. This approach is currently being implemented in the context of a new programming language called Chapel, which is designed in the HPCS project Cascade.
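A toy picture of what a user-defined distribution specifies is a mapping from a global index space to locales (processors with their own memory). The Python sketch below mimics a simple one-dimensional block distribution; it is not Chapel code or the Chapel runtime.

    class BlockDistribution:
        def __init__(self, n_indices, n_locales):
            self.n, self.block = n_indices, -(-n_indices // n_locales)  # ceiling division

        def locale_of(self, i):
            # which locale owns global index i
            return i // self.block

        def local_indices(self, locale):
            # the contiguous slice of the global index space owned by a locale
            lo = locale * self.block
            return range(lo, min(lo + self.block, self.n))

    dist = BlockDistribution(n_indices=10, n_locales=4)
    print([dist.locale_of(i) for i in range(10)])   # [0, 0, 0, 1, 1, 1, 2, 2, 2, 3]
    print(list(dist.local_indices(2)))              # [6, 7, 8]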
Computation of high Reynolds number internal/external flows
NASA Technical Reports Server (NTRS)
Cline, M. C.; Wilmoth, R. G.
1981-01-01
A general, user oriented computer program, called VNAP2, has been developed to calculate high Reynolds number, internal/external flows. VNAP2 solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, and internal/external flow calculations are presented.
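The explicit MacCormack scheme mentioned above is a two-step predictor-corrector. As a compact illustration (not the VNAP2 solver), the sketch below applies it to the 1-D linear advection equation u_t + a u_x = 0 on a periodic domain with made-up parameters.

    import numpy as np

    nx, a, cfl = 200, 1.0, 0.8
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    dx = x[1] - x[0]
    dt = cfl * dx / a
    u = np.exp(-200.0 * (x - 0.5) ** 2)     # initial Gaussian pulse

    for _ in range(100):
        # predictor: forward difference for the spatial derivative
        up = u - a * dt / dx * (np.roll(u, -1) - u)
        # corrector: backward difference on the predicted values, then average
        u = 0.5 * (u + up - a * dt / dx * (up - np.roll(up, 1)))
    print(float(u.max()))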
NASA Technical Reports Server (NTRS)
Bailey, F. R.; Kutler, Paul
1988-01-01
Discussed are the capabilities of NASA's Numerical Aerodynamic Simulation (NAS) Program and its application as an advanced supercomputing system for computational fluid dynamics (CFD) research. First, the paper describes the NAS computational system, called the NAS Processing System Network, and the advanced computational capabilities it offers as a consequence of carrying out the NAS pathfinder objective. Second, it presents examples of pioneering CFD research accomplished during NAS's first operational year. Examples are included which illustrate CFD applications for predicting fluid phenomena, complementing and supplementing experimentation, and aiding in design. Finally, pacing elements and future directions for CFD and NAS are discussed.
GASP-PL/I Simulation of Integrated Avionic System Processor Architectures. M.S. Thesis
NASA Technical Reports Server (NTRS)
Brent, G. A.
1978-01-01
A development study sponsored by NASA was completed in July 1977 which proposed a complete integration of all aircraft instrumentation into a single modular system. Instead of using the current single-function aircraft instruments, computers compiled and displayed inflight information for the pilot. A processor architecture called the Team Architecture was proposed. This is a hardware/software approach to high-reliability computer systems. A follow-up study of the proposed Team Architecture is reported. GASP-PL/1 simulation models are used to evaluate the operating characteristics of the Team Architecture. The problem, model development, simulation programs, and results are presented at length. Also included are program input formats, outputs and listings.
Johnston, Matthew D
2017-12-01
Recent work of Johnston et al. has produced sufficient conditions on the structure of a chemical reaction network which guarantee that the corresponding discrete state space system exhibits an extinction event. The conditions consist of a series of systems of equalities and inequalities on the edges of a modified reaction network called a domination-expanded reaction network. In this paper, we present a computational implementation of these conditions written in Python and apply the program on examples drawn from the biochemical literature. We also run the program on 458 models from the European Bioinformatics Institute's BioModels Database and report our results. Copyright © 2017 Elsevier Inc. All rights reserved.
EMGAN: A computer program for time and frequency domain reduction of electromyographic data
NASA Technical Reports Server (NTRS)
Hursta, W. N.
1975-01-01
An experiment in electromyography utilizing surface electrode techniques was developed for the Apollo-Soyuz test project. This report describes the computer program, EMGAN, which was written to provide first order data reduction for the experiment. EMG signals are produced by the membrane depolarization of muscle fibers during a muscle contraction. Surface electrodes detect a spatially summated signal from a large number of muscle fibers commonly called an interference pattern. An interference pattern is usually so complex that analysis through signal morphology is extremely difficult if not impossible. It has become common to process EMG interference patterns in the frequency domain. Muscle fatigue and certain myopathic conditions are recognized through changes in muscle frequency spectra.
SMP93-PC: Standard Ship Motion Program for Personal Computer with Small Boat Capability
1994-06-01
[The retrieved abstract text for this entry is a garbled fragment of the program's Fortran source listing: loops and calls to a routine SPINT2, with comments describing the intersection of a spline and a plane defined by a point and a direction vector. It is not reproduced here.]
POSE Algorithms for Automated Docking
NASA Technical Reports Server (NTRS)
Heaton, Andrew F.; Howard, Richard T.
2011-01-01
POSE (relative position and attitude) can be computed in many different ways. Given a sensor that measures bearing to a finite number of spots corresponding to known features (such as a target) of a spacecraft, a number of different algorithms can be used to compute the POSE. NASA has sponsored the development of a flash LIDAR proximity sensor called the Vision Navigation Sensor (VNS) for use by the Orion capsule in future docking missions. This sensor generates data that can be used by a variety of algorithms to compute POSE solutions inside of 15 meters, including at the critical docking range of approximately 1-2 meters. Previously NASA participated in a DARPA program called Orbital Express that achieved the first automated docking for the American space program. During this mission a large set of high quality mated sensor data was obtained at what is essentially the docking distance. This data set is perhaps the most accurate truth data in existence for docking proximity sensors in orbit. In this paper, the flight data from Orbital Express is used to test POSE algorithms at 1.22 meters range. Two different POSE algorithms are tested for two different Fields-of-View (FOVs) and two different pixel noise levels. The results of the analysis are used to predict future performance of the POSE algorithms with VNS data.
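As an illustration of computing a relative pose from bearing measurements to known target features, here is a hedged Python sketch that fits position and attitude by nonlinear least squares. It is not one of the algorithms evaluated above; the feature layout, the true pose used to synthesize measurements, and the use of scipy are assumptions made for the example.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Known feature locations in the target body frame (metres); illustrative values only.
features = np.array([[0.3, 0.3, 0.0],
                     [-0.3, 0.3, 0.0],
                     [-0.3, -0.3, 0.0],
                     [0.3, -0.3, 0.0]])

def predict_bearings(rotvec, t):
    # Unit bearing vectors to each feature for a given relative pose.
    R = Rotation.from_rotvec(rotvec).as_matrix()
    rays = features @ R.T + t                      # feature positions in the sensor frame
    return rays / np.linalg.norm(rays, axis=1, keepdims=True)

def residual(params, measured):
    return (predict_bearings(params[:3], params[3:]) - measured).ravel()

# Synthesize "measurements" from an assumed true pose near 1.22 m range, then recover it.
true_rotvec, true_t = np.array([0.02, -0.03, 0.01]), np.array([0.05, -0.02, 1.22])
measured = predict_bearings(true_rotvec, true_t)

x0 = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.0])      # crude initial guess
sol = least_squares(residual, x0, args=(measured,))
print("estimated position:", sol.x[3:], "rotation vector:", sol.x[:3])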
PLASIM: A computer code for simulating charge exchange plasma propagation
NASA Technical Reports Server (NTRS)
Robinson, R. S.; Deininger, W. D.; Winder, D. R.; Kaufman, H. R.
1982-01-01
The propagation of the charge exchange plasma for an electrostatic ion thruster is crucial in determining the interaction of that plasma with the associated spacecraft. A model that describes this plasma and its propagation is described, together with a computer code based on this model. The structure and calling sequence of the code, named PLASIM, are described. An explanation of the program's input and output is included, together with samples of both. The code is written in ANSI Standard FORTRAN.
Generic command interpreter for robot controllers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Werner, J.
1991-04-09
Generic command interpreter programs have been written for robot controllers at Sandia National Laboratories (SNL). Each interpreter program resides on a robot controller and interfaces the controller with a supervisory program on another (host) computer. We call these interpreter programs monitors because they wait, monitoring a communication line, for commands from the supervisory program. These monitors are designed to interface with the object-oriented software structure of the supervisory programs. The functions of the monitor programs are written in each robot controller's native language but reflect the object-oriented functions of the supervisory programs. These functions and other specifics of the monitor programs written for three different robots at SNL will be discussed. 4 refs., 4 figs.
HeNCE: A Heterogeneous Network Computing Environment
Beguelin, Adam; Dongarra, Jack J.; Geist, George Al; ...
1994-01-01
Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.
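The graph-based execution model described above can be illustrated with a small Python sketch that runs a directed acyclic graph of tasks on a thread pool, starting each node once all of its predecessors have completed. This is only a toy illustration of the idea, not HeNCE or PVM; the node names and dependency table are invented.

from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

# Toy dependency graph: node -> list of predecessor nodes (invented for the example).
deps = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
work = {n: (lambda n=n: print("running", n)) for n in deps}

def run_dag(deps, work, max_workers=4):
    done, running = set(), {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        while len(done) < len(deps):
            # Launch every node whose predecessors have all completed.
            for node, preds in deps.items():
                if node not in done and node not in running and all(p in done for p in preds):
                    running[node] = pool.submit(work[node])
            # Block until at least one running task finishes, then record it.
            finished, _ = wait(running.values(), return_when=FIRST_COMPLETED)
            for node, fut in list(running.items()):
                if fut in finished:
                    done.add(node)
                    del running[node]

run_dag(deps, work)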
ASTROP2 users manual: A program for aeroelastic stability analysis of propfans
NASA Technical Reports Server (NTRS)
Narayanan, G. V.; Kaza, K. R. V.
1991-01-01
A user's manual is presented for the aeroelastic stability and response of propulsion systems computer program called ASTROP2. The ASTROP2 code performs aeroelastic stability analysis of rotating propfan blades. This analysis uses a two-dimensional, unsteady cascade aerodynamics model and a three-dimensional, normal-mode structural model. Analytical stability results from this code are compared with published experimental results of a rotating composite advanced turboprop model and of a nonrotating metallic wing model.
Computer Aided Method for System Safety and Reliability Assessments
2008-09-01
...program between 1998 and 2003. This tool was not marketed in the public domain after the CRV program ended. The other tool is called eXpress, and it... supports Government-reviewed and approved analysis methodologies which can then be shared with other government agencies and industry partners... [The remainder of the retrieved text is a garbled development-history fragment mentioning the GO program (IBM version enhanced at UCC, Dallas, with a facility to alter array sizes; documented for B&R, UP&L, and EPRI between December 1980 and September 1982) and is not reproduced in full here.]
The PLATO System: A Study in the Diffusion of an Innovation.
ERIC Educational Resources Information Center
Driscoll, Francis D.; Wolf, W. C., Jr.
This study was designed to ascertain the relationships between the steps of a tool designed to link knowledge production and the needs of knowledge users (the Wolf-Welsh Linkage Methodology or WWLM) with milestones in the evolution of an innovative computer-assisted instructional system called PLATO (Programming Logic for Advanced Teaching…
Whenever You Use a Computer You Are Using a Program Called an Operating System.
ERIC Educational Resources Information Center
Cook, Rick
1984-01-01
Examines design, features, and shortcomings of eight disk-based operating systems designed for general use that are popular or most likely to affect the future of microcomputing. Included are the CP/M family, MS-DOS, Apple DOS/ProDOS, Unix, Pick, the p-System, TRSDOS, and Macintosh/Lisa. (MBR)
An Authoring System for Creating Computer-Based Role-Performance Trainers.
ERIC Educational Resources Information Center
Guralnick, David; Kass, Alex
This paper describes a multimedia authoring system called MOPed-II. Like other authoring systems, MOPed-II reduces the time and expense of producing end-user applications by eliminating much of the programming effort they require. However, MOPed-II reflects an approach to authoring tools for educational multimedia which is different from most…
Confidence bounds for normal and lognormal distribution coefficients of variation
Steve Verrill
2003-01-01
This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...
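The "exact" approach mentioned above can be sketched by inverting the noncentral t distribution of sqrt(n)*mean/sd, whose noncentrality parameter equals sqrt(n) divided by the coefficient of variation. The Python sketch below only illustrates that inversion, not the web-based programs described in the paper; it assumes a positive mean, a moderate CV, and the bracketing interval chosen for the root finder.

import numpy as np
from scipy.stats import nct
from scipy.optimize import brentq

def cv_exact_ci(x, alpha=0.05):
    # Exact CI for a normal coefficient of variation (sigma/mu) by inverting the
    # noncentral t distribution of sqrt(n)*mean/sd; assumes mu > 0 and a moderate CV.
    x = np.asarray(x, dtype=float)
    n = len(x)
    t_obs = np.sqrt(n) * x.mean() / x.std(ddof=1)   # noncentrality is sqrt(n)/CV

    def root(target):
        # The CDF decreases as the noncentrality grows, so a sign change is bracketed.
        return brentq(lambda nc: nct.cdf(t_obs, df=n - 1, nc=nc) - target, 1e-3, 1e3)

    nc_hi = root(alpha / 2)          # large noncentrality corresponds to a small CV
    nc_lo = root(1 - alpha / 2)      # small noncentrality corresponds to a large CV
    return np.sqrt(n) / nc_hi, np.sqrt(n) / nc_lo

rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.0, size=30)    # true CV = 0.2
print(cv_exact_ci(sample))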
The Development of a Web-Based Virtual Environment for Teaching Qualitative Analysis of Structures
ERIC Educational Resources Information Center
O'Dwyer, D. W.; Logan-Phelan, T. M.; O'Neill, E. A.
2007-01-01
The current paper describes the design and development of a qualitative analysis course and an interactive web-based teaching and assessment tool called VSE (virtual structural environment). The widespread reliance on structural analysis programs requires engineers to be able to verify computer output by carrying out qualitative analyses.…
A Failing Grade for WEEE Take-Back Programs for Information Technology Equipment
ERIC Educational Resources Information Center
Nakajima, Nina; Vanderburg, Willem H.
2005-01-01
Product take-back (also called extended producer responsibility) has become a trend for dealing with the garbage resulting from categories of problematic products. Waste electrical and electronic equipment (WEEE) is one such category with computer equipment being of particular significance. This article provides a description of the European…
Learning to Analyze and Code Accounting Transactions in Interactive Mode.
ERIC Educational Resources Information Center
Bentz, William F.; Ambler, Eric E.
An interactive computer-assisted instructional (CAI) system, called CODE, is used to teach transactional analysis, or coding, in elementary accounting. The first major component of CODE is TEACH, a program which controls student input and output. Following the statement of a financial position on a cathode ray tube, TEACH describes an event to…
The Revenue vs. Service Balance
ERIC Educational Resources Information Center
Savarese, John
2006-01-01
Ten years ago, students at the University of Vermont (UVM) had to carry separate ID cards, meal cards, and athletic cards. Today, the single CATcard combines all of these functions, plus library privileges, an optional declining balance program called CAT$cratch, access to computer labs, use of vending machines without quarters, and even a ride on…
21st-Century Citizen Scholars: Testing What Is Possible and Desirable.
ERIC Educational Resources Information Center
Schwartz, Helen J.
A pilot program at Indiana University-Purdue University at Indianapolis (IUPUI), called the Twenty-First Century Citizen Scholars, explores and evaluates the pedagogy of computer conferencing in writing-across-the-curriculum and ensures equal access for students. The purpose of the project is to build intellectual coherence, reduce conflict…
Enumerating Substituted Benzene Isomers of Tree-Like Chemical Graphs.
Li, Jinghui; Nagamochi, Hiroshi; Akutsu, Tatsuya
2018-01-01
Enumeration of chemical structures is useful for drug design, which is one of the main targets of computational biology and bioinformatics. A chemical graph with no cycles other than benzene rings is called tree-like, and becomes a tree (possibly with multiple edges) if each benzene ring is contracted into a single virtual atom of valence 6. All tree-like chemical graphs with a given tree representation are called the substituted benzene isomers of that tree representation. When each virtual atom in the tree representation is replaced with a benzene ring to obtain a substituted benzene isomer, distinct isomers arise from the different arrangements of atom groups around a benzene ring. In this paper, we propose an efficient algorithm that enumerates all substituted benzene isomers of a given tree representation. Our algorithm first counts the number of all the isomers of the tree representation by a dynamic programming method. To enumerate all the isomers, the algorithm then generates each isomer in turn by backtracking the counting phase of the dynamic programming. We also implemented our algorithm for computational experiments.
Flight test validation of a design procedure for digital autopilots
NASA Technical Reports Server (NTRS)
Bryant, W. H.
1983-01-01
Commercially available general aviation autopilots are currently in transition from an analogue circuit system to a computer implemented digital flight control system. Well known advantages of the digital autopilot include enhanced modes, self-test capacity, fault detection, and greater computational capacity. A digital autopilot's computational capacity can be used to full advantage by increasing the sophistication of the digital autopilot's chief function, stability and control. NASA's Langley Research Center has been pursuing the development of direct digital design tools for aircraft stabilization systems for several years. This effort has most recently been directed towards the development and realization of multi-mode digital autopilots for GA aircraft, conducted under a SPIFR-related program called the General Aviation Terminal Operations Research (GATOR) Program. This presentation focuses on the implementation and testing of a candidate multi-mode autopilot designed using these newly developed tools.
Quasi-Optimal Elimination Trees for 2D Grids with Singularities
Paszyńska, A.; Paszyński, M.; Jopek, K.; ...
2015-01-01
We construct quasi-optimal elimination trees for 2D finite element meshes with singularities. These trees minimize the complexity of the solution of the discrete system. The computational cost estimates of the elimination process model the execution of the multifrontal algorithms in serial and in parallel shared-memory executions. Since the meshes considered are a subspace of all possible mesh partitions, we call these minimizers quasi-optimal. We minimize the cost functionals using dynamic programming. Finding these minimizers is more computationally expensive than solving the original algebraic system. Nevertheless, from the insights provided by the analysis of the dynamic programming minima, we propose a heuristic construction of the elimination trees that has cost O(Ne log Ne), where Ne is the number of elements in the mesh. We show that this heuristic ordering has similar computational cost to the quasi-optimal elimination trees found with dynamic programming and outperforms state-of-the-art alternatives in our numerical experiments.
Adaptation of a Control Center Development Environment for Industrial Process Control
NASA Technical Reports Server (NTRS)
Killough, Ronnie L.; Malik, James M.
1994-01-01
In the control center, raw telemetry data is received for storage, display, and analysis. This raw data must be combined and manipulated in various ways by mathematical computations to facilitate analysis, provide diversified fault detection mechanisms, and enhance display readability. A development tool called the Graphical Computation Builder (GCB) has been implemented which provides flight controllers with the capability to implement computations for use in the control center. The GCB provides a language that contains both general programming constructs and language elements specifically tailored for the control center environment. The GCB concept allows staff who are not skilled in computer programming to author and maintain computer programs. The GCB user is isolated from the details of external subsystem interfaces and has access to high-level functions such as matrix operators, trigonometric functions, and unit conversion macros. The GCB provides a high level of feedback during computation development that improves upon the often cryptic errors produced by computer language compilers. An equivalent need can be identified in the industrial data acquisition and process control domain: that of an integrated graphical development tool tailored to the application to hide the operating system, computer language, and data acquisition interface details. The GCB features a modular design which makes it suitable for technology transfer without significant rework. Control center-specific language elements can be replaced by elements specific to industrial process control.
Constructing a logical, regular axis topology from an irregular topology
Faraj, Daniel A.
2014-07-22
Constructing a logical regular topology from an irregular topology including, for each axial dimension and recursively, for each compute node in a subcommunicator until returning to a first node: adding to a logical line of the axial dimension a neighbor specified in a nearest neighbor list; calling the added compute node; determining, by the called node, whether any neighbor in the node's nearest neighbor list is available to add to the logical line; if a neighbor in the called compute node's nearest neighbor list is available to add to the logical line, adding, by the called compute node to the logical line, any neighbor in the called compute node's nearest neighbor list for the axial dimension not already added to the logical line; and, if no neighbor in the called compute node's nearest neighbor list is available to add to the logical line, returning to the calling compute node.
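The recursive hand-off described in the abstract above can be illustrated with a small Python sketch for a single axial dimension: the node just added to the logical line is "called" and extends the line with any nearest neighbours not yet in it, returning to its caller when none remain. The topology, node names, and neighbour lists are invented for the example; this is a loose illustration of the idea, not the patented implementation.

# Irregular topology for one axial dimension: node -> nearest-neighbour list (invented).
neighbours = {0: [3, 1], 1: [0, 2], 2: [1, 4], 3: [0], 4: [2]}

def extend_line(node, line):
    # Called on the node just added; extends the line if it can, else returns.
    for nb in neighbours[node]:
        if nb not in line:
            line.append(nb)          # add the neighbour to the logical line
            extend_line(nb, line)    # "call" the added node so it continues the line
    # no unused neighbour left: return to the calling node

line = [0]                            # start the logical line at a first node
extend_line(0, line)
print(line)                           # one logical line covering the irregular topology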
User's Guide for a Modular Flutter Analysis Software System (Fast Version 1.0)
NASA Technical Reports Server (NTRS)
Desmarais, R. N.; Bennett, R. M.
1978-01-01
The use and operation of a group of computer programs to perform a flutter analysis of a single planar wing are described. This system of programs is called FAST for Flutter Analysis System, and consists of five programs. Each program performs certain portions of a flutter analysis and can be run sequentially as a job step or individually. FAST uses natural vibration modes as input data and performs a conventional V-g type of solution. The unsteady aerodynamics programs in FAST are based on the subsonic kernel function lifting-surface theory although other aerodynamic programs can be used. Application of the programs is illustrated by a sample case of a complete flutter calculation that exercises each program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joe, Jeffrey Clark; Boring, Ronald Laurids; Herberger, Sarah Elizabeth Marie
The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective to help sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS “pathways,” or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin and Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included human reliability analysis (HRA), but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion into the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with 1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and 2) the RISMC framework PRA models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.
NASA Technical Reports Server (NTRS)
1971-01-01
Computational techniques were developed and assimilated for the design optimization. The resulting computer program was then used to perform initial optimization and sensitivity studies on a typical thermal protection system (TPS) to demonstrate its application to the space shuttle TPS design. The program was developed in Fortran IV for the CDC 6400 but was subsequently converted to the Fortran V language to be used on the Univac 1108. The program allows for improvement and update of the performance prediction techniques. The program logic involves subroutines which handle the following basic functions: (1) a driver which calls for input, output, and communication between program and user and between the subroutines themselves; (2) thermodynamic analysis; (3) thermal stress analysis; (4) acoustic fatigue analysis; and (5) weights/cost analysis. In addition, a system total cost is predicted based on system weight and historical cost data of similar systems. Two basic types of input are provided, both of which are based on trajectory data. These are vehicle attitude (altitude, velocity, and angles of attack and sideslip), for external heat and pressure loads calculation, and heating rates and pressure loads as a function of time.
CaseLog: semantic network interface to a student computer-based patient record system.
Cimino, C.; Goldman, E. K.; Curtis, J. A.; Reichgott, M. J.
1993-01-01
We have developed a computer program called CaseLog, which serves as an exemplary, computer-based patient record (CPR) system. The program allows for the introduction of the students to issues unique to patient record systems. These include record security, unique patient identifiers, and the use of controlled vocabularies. A particularly challenging aspect of the development of this program was allowing for student entry of controlled vocabulary terms. There were four goals we wished to achieve: students should be able to find the terms they are looking for; once a term has been found, it should be easy to find contextually related terms; it should be easy to determine that a sought-for term is not in the vocabulary; and the structure of the vocabulary should be dynamically altered by contextual information to allow its use for a variety of purposes. We chose a semantic network for our vocabulary structure. Within the processing power of the equipment we were working with, we achieved our goals. This paper will describe the development of the vocabulary, the design of the CaseLog program, and the feedback from student users of the program. PMID:8130581
Applications of Parallel Computation in Micro-Mechanics and Finite Element Method
NASA Technical Reports Server (NTRS)
Tan, Hui-Qian
1996-01-01
This project discusses the application of parallel computation to material analyses. Briefly speaking, we analyze a material by element computations. We call an element a cell here. A cell is divided into a number of subelements called subcells, and all subcells in a cell have an identical structure. The detailed structure will be given later in this paper. The problem is clearly "well-structured", so a SIMD machine is a natural choice. In this paper we look into the potential of SIMD machines for finite element computation by developing appropriate algorithms on MasPar, a SIMD parallel machine. In section 2, the architecture of MasPar is discussed, and a brief review of the parallel programming language MPL is also given. In section 3, some general parallel algorithms which might be useful to the project are proposed, and, in combination with these algorithms, some features of MPL are discussed in more detail. In section 4, the computational structure of the cell/subcell model is given, and the idea of designing the parallel algorithm for the model is demonstrated. Finally, in section 5, a summary is given.
Sa, Eduardo Costa; Ferreira Junior, Mario; Rocha, Lys Esther
2012-01-01
The aims of this study were to investigate work conditions, estimate the prevalence, and describe risk factors associated with Computer Vision Syndrome among operators at two call centers in São Paulo (n = 476). The methods included a quantitative cross-sectional observational study and an ergonomic work analysis, using work observation, interviews, and questionnaires. The case definition was the presence of one or more specific ocular symptoms reported as always, often, or sometimes. The multiple logistic regression models were created using the stepwise forward likelihood method, retaining variables with significance levels below 5% (p < 0.05). The operators were mainly female and young (15 to 24 years old). The call center operated 24 hours a day, and the operators' weekly workload was 36 hours, with break times of 21 to 35 minutes per day. The symptoms reported were eye fatigue (73.9%), "weight" in the eyes (68.2%), "burning" eyes (54.6%), tearing (43.9%), and weakening of vision (43.5%). The prevalence of Computer Vision Syndrome was 54.6%. Associations verified were: being female (OR 2.6, 95% CI 1.6 to 4.1), lack of recognition at work (OR 1.4, 95% CI 1.1 to 1.8), organization of work in the call center (OR 1.4, 95% CI 1.1 to 1.7), and high demand at work (OR 1.1, 95% CI 1.0 to 1.3). Organizational and psychosocial factors at work should be included in prevention programs for visual syndrome among call center operators.
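Odds ratios with 95% confidence intervals of the kind reported above are typically obtained from a fitted logistic model. The Python sketch below shows one way to compute them with statsmodels; the data frame, column names, and sample size are invented placeholders, not the study's data.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Sketch of computing odds ratios with 95% CIs from a logistic model, as in the
# associations reported above.  The columns and values are invented placeholders.
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "cvs": rng.integers(0, 2, n),              # 1 = case definition met
    "female": rng.integers(0, 2, n),
    "high_demand": rng.normal(0.0, 1.0, n),
})

X = sm.add_constant(df[["female", "high_demand"]])
fit = sm.Logit(df["cvs"], X).fit(disp=False)

odds_ratios = np.exp(fit.params)
conf_int = np.exp(fit.conf_int())              # 95% CIs on the odds-ratio scale
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))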
X-38 Experimental Controls Laws
NASA Technical Reports Server (NTRS)
Munday, Steve; Estes, Jay; Bordano, Aldo J.
2000-01-01
X-38 is a NASA JSC/DFRC experimental flight test program developing a series of prototypes for an International Space Station (ISS) Crew Return Vehicle, often called an ISS "lifeboat." X-38 Vehicle 132 Free Flight 3, currently scheduled for the end of this month, will be the first flight test of a modern FCS architecture called Multi-Application Control-Honeywell (MACH), originally developed by the Honeywell Technology Center. MACH wraps classical P&I outer attitude loops around a modern dynamic inversion attitude rate loop. The dynamic inversion process requires that the flight computer have an onboard aircraft model of expected vehicle dynamics based upon the aerodynamic database. Dynamic inversion is computationally intensive, so some timing modifications were made to implement MACH on the slower flight computers of the subsonic test vehicles. In addition to linear stability margin analyses and high-fidelity 6-DOF simulation, hardware-in-the-loop testing is used to verify the implementation of MACH and its robustness to aerodynamic and environmental uncertainties and disturbances.
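The dynamic-inversion rate loop mentioned above can be sketched for a single rigid body: invert Euler's equations I*w_dot + w x (I*w) = u so the body follows a commanded angular-rate response. The Python sketch below illustrates the technique only, not the MACH flight software; the inertia matrix, gains, time step, and initial rates are assumed values.

import numpy as np

# Dynamic-inversion rate loop for a rigid body: invert I*w_dot + w x (I*w) = u
# so the rates follow a commanded first-order response (illustrative values only).
I = np.diag([900.0, 1100.0, 700.0])      # body inertia, kg*m^2
K = np.diag([2.0, 2.0, 2.0])             # rate-loop gains, 1/s

def control_torque(w, w_cmd):
    w_dot_des = K @ (w_cmd - w)           # desired angular acceleration
    return I @ w_dot_des + np.cross(w, I @ w)

w = np.array([0.05, -0.02, 0.01])         # initial body rates, rad/s
w_cmd = np.zeros(3)
dt = 0.02
for _ in range(500):
    u = control_torque(w, w_cmd)
    w_dot = np.linalg.solve(I, u - np.cross(w, I @ w))   # plant dynamics
    w = w + dt * w_dot
print(w)                                   # rates driven toward the command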
Ground temperature measurement by PRT-5 for maps experiment
NASA Technical Reports Server (NTRS)
Gupta, S. K.; Tiwari, S. N.
1978-01-01
A simple algorithm and computer program were developed for determining the actual surface temperature from the effective brightness temperature as measured remotely by a radiation thermometer called PRT-5. This procedure allows the computation of atmospheric correction to the effective brightness temperature without performing detailed radiative transfer calculations. Model radiative transfer calculations were performed to compute atmospheric corrections for several values of the surface and atmospheric parameters individually and in combination. Polynomial regressions were performed between the magnitudes or deviations of these parameters and the corresponding computed corrections to establish simple analytical relations between them. Analytical relations were also developed to represent combined correction for simultaneous variation of parameters in terms of their individual corrections.
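The regression step described above, relating parameter deviations to precomputed brightness-temperature corrections, can be sketched in a few lines of Python with a polynomial fit. The deviation values, correction values, and the quadratic degree below are invented placeholders, not the study's data or coefficients.

import numpy as np

# Sketch of the regression step: fit a polynomial mapping a deviation in an
# atmospheric parameter to the brightness-temperature correction computed from
# model radiative-transfer runs.  Numbers are invented placeholders.
param_dev = np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 1.5])     # parameter deviation
correction = np.array([-0.9, -0.45, 0.0, 0.5, 1.1, 1.8])   # modelled correction, K

coeffs = np.polyfit(param_dev, correction, deg=2)           # quadratic regression
correction_at = np.poly1d(coeffs)

# The fitted relation then replaces a full radiative-transfer calculation:
print(correction_at(0.7))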
Academic computer science and gender: A naturalistic study investigating the causes of attrition
NASA Astrophysics Data System (ADS)
Declue, Timothy Hall
Far fewer women than men take computer science classes in high school, enroll in computer science programs in college, or complete advanced degrees in computer science. The computer science pipeline begins to shrink for women even before entering college, but it is at the college level that the "brain drain" is the most evident numerically, especially in the first class taken by most computer science majors called "Computer Science 1" or CS-I. The result, for both academia and industry, is a pronounced technological gender disparity in academic and industrial computer science. The study revealed the existence of several factors influencing success in CS-I. First, and most clearly, the effect of attribution processes seemed to be quite strong. These processes tend to work against success for females and in favor of success for males. Likewise, evidence was discovered which strengthens theories related to prior experience and the perception that computer science has a culture which is hostile to females. Two unanticipated themes related to the motivation and persistence of successful computer science majors. The findings did not support the belief that females have greater logistical problems in computer science than males, or that females tend to have a different programming style than males which adversely affects the females' ability to succeed in CS-I.
United States Air Force High School Apprenticeship Program. 1990 Program Management Report. Volume 3
1991-04-18
[The retrieved abstract for this entry is fragmentary, mixing a contents listing of student research areas (a user guide by Shelly Knupp, Computer-Aided Design (CAD) by Christopher O'Dell, Electron Beam Lithography by Suzette Yu, and work at the Flight Dynamics Laboratory) with acknowledgments and report text.] ...fabrication. Mr. Ed Davis, for the background knowledge of device processes and information on electron beam lithography. Captain Mike Cheney, for...researcher may write gates onto the wafer by a process called lithography. This is the most crucial and complex part of the process. Two types of proven...
Program For A Pushbutton Display
NASA Technical Reports Server (NTRS)
Busquets, Anthony M.; Luck, William S., Jr.
1989-01-01
The Programmable Display Pushbutton (PDP) is a pushbutton device available from Micro Switch with a programmable 16x35 matrix of light-emitting diodes on the pushbutton surface. Any desired legend can be displayed on the PDPs, producing user-friendly applications and reducing the need for dedicated manual controls. The PDP interacts with the operator, calling for the correct response before transmitting the next message. It serves as both a simple manual control and a sophisticated programmable link between the operator and the host system. The Programmable Display Pushbutton Legend Editor (PDPE) computer program is used to create the light-emitting-diode (LED) displays for the pushbuttons. It is written in FORTRAN.
Software Engineering Tools for Scientific Models
NASA Technical Reports Server (NTRS)
Abrams, Marc; Saboo, Pallabi; Sonsini, Mike
2013-01-01
Software tools were constructed to address issues the NASA Fortran development community faces, and they were tested on real models currently in use at NASA. These proof-of-concept tools address the High-End Computing Program and the Modeling, Analysis, and Prediction Program. Two examples are the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) atmospheric model in Cell Fortran on the Cell Broadband Engine, and the Goddard Institute for Space Studies (GISS) coupled atmosphere- ocean model called ModelE, written in fixed format Fortran.
Cloud-based large-scale air traffic flow optimization
NASA Astrophysics Data System (ADS)
Cao, Yi
The ever-increasing traffic demand makes the efficient use of airspace an imperative mission, and this paper presents an effort in response to this call. Firstly, a new aggregate model, called Link Transmission Model (LTM), is proposed, which models the nationwide traffic as a network of flight routes identified by origin-destination pairs. The traversal time of a flight route is assumed to be the mode of the distribution of historical flight records, and the mode is estimated by using Kernel Density Estimation. As this simplification abstracts away physical trajectory details, the complexity of modeling is drastically decreased, resulting in efficient traffic forecasting. The predictive capability of LTM is validated against recorded traffic data. Secondly, a nationwide traffic flow optimization problem with airport and en route capacity constraints is formulated based on LTM. The optimization problem aims at alleviating traffic congestion with minimal global delays. This problem is intractable due to millions of variables. A dual decomposition method is applied to decompose the large-scale problem such that the subproblems are solvable. However, the whole problem is still computationally expensive to solve since each subproblem is a smaller integer programming problem that pursues integer solutions. Solving an integer programming problem is known to be far more time-consuming than solving its linear relaxation. In addition, sequential execution on a standalone computer leads to linear runtime increase when the problem size increases. To address the computational efficiency problem, a parallel computing framework is designed which accommodates concurrent executions via multithreading programming. The multithreaded version is compared with its monolithic version to show decreased runtime. Finally, an open-source cloud computing framework, Hadoop MapReduce, is employed for better scalability and reliability. This framework is an "off-the-shelf" parallel computing model that can be used for both offline historical traffic data analysis and online traffic flow optimization. It provides an efficient and robust platform for easy deployment and implementation. A small cloud consisting of five workstations was configured and used to demonstrate the advantages of cloud computing in dealing with large-scale parallelizable traffic problems.
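The dual-decomposition idea described above can be illustrated with a toy Python sketch: each flight independently chooses a delay given a price on a shared, capacity-limited slot, the subproblems are solved in parallel, and the price (Lagrange multiplier) is updated by a subgradient step. The costs, capacity, delay set, and step size are invented; this is not the dissertation's formulation or its Hadoop implementation.

import numpy as np
from multiprocessing import Pool

# Toy dual decomposition: N flights share one capacity-limited zero-delay slot.
# Each subproblem picks a delay minimizing (delay cost + price if the slot is used);
# the price is updated by a projected subgradient step.  Entirely illustrative.
DELAYS = np.arange(0, 6)          # admissible delays for every flight
CAPACITY = 3                      # how many flights may take zero delay

def solve_subproblem(args):
    cost_per_unit, price = args
    costs = cost_per_unit * DELAYS + price * (DELAYS == 0)
    return int(np.argmin(costs))

def dual_decomposition(unit_costs, steps=50, step_size=1.0):
    price = 0.0
    with Pool() as pool:
        for _ in range(steps):
            delays = pool.map(solve_subproblem, [(c, price) for c in unit_costs])
            usage = sum(d == 0 for d in delays)
            price = max(0.0, price + step_size * (usage - CAPACITY))  # subgradient step
    return delays, price

if __name__ == "__main__":
    unit_costs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # per-flight delay costs (invented)
    print(dual_decomposition(unit_costs))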
Crystallographic and general use programs for the XDS Sigma 5 computer
NASA Technical Reports Server (NTRS)
Snyder, R. L.
1973-01-01
Programs in basic FORTRAN 4 are described, which fall into three categories: (1) interactive programs to be executed under time sharing (BTM); (2) noninteractive programs which are executed in batch processing mode (BPM); and (3) large noninteractive programs which require more memory than is available in the normal BPM/BTM operating system and must be run overnight on a special system called XRAY which releases about 45,000 words of memory to the user. Programs in categories (1) and (2) are stored as FORTRAN source files in the account FSNYDER. Programs in category (3) are stored in the XRAY system as load modules. The type of file in account FSNYDER is identified by the first two letters in the name.
TimeSet: A computer program that accesses five atomic time services on two continents
NASA Technical Reports Server (NTRS)
Petrakis, P. L.
1993-01-01
TimeSet is a shareware program for accessing digital time services by telephone. At its initial release, it was capable of capturing time signals only from the U.S. Naval Observatory to set a computer's clock. Later the ability to synchronize with the National Institute of Standards and Technology was added. Now, in Version 7.10, TimeSet is able to access three additional telephone time services in Europe - in Sweden, Austria, and Italy - making a total of five official services addressable by the program. A companion program, TimeGen, allows yet another source of telephone time data strings for callers equipped with TimeSet version 7.10. TimeGen synthesizes UTC time data strings in the Naval Observatory's format from an accurately set and maintained DOS computer clock, and transmits them to callers. This allows an unlimited number of 'freelance' time generating stations to be created. Timesetting from TimeGen is made feasible by the advent of Becker's RighTime, a shareware program that learns the drift characteristics of a computer's clock and continuously applies a correction to keep it accurate, and also brings .01 second resolution to the DOS clock. With clock regulation by RighTime and periodic update calls by the TimeGen station to an official time source via TimeSet, TimeGen offers the same degree of accuracy within the resolution of the computer clock as any official atomic time source.
NASA Astrophysics Data System (ADS)
DiSalvo, Elizabeth Betsy
The implementation of a learning environment for young African American males, called the Glitch Game Testers, was launched in 2009. The development of this program was based on formative work that looked at the contrasting use of digital games between young African American males and individuals who chose to become computer science majors. Through analysis of cultural values and digital game play practices, the program was designed to intertwine authentic game development practices and computer science learning. The resulting program employed 25 African American male high school students to test pre-release digital games full-time in the summer and part-time in the school year, with an hour of each day dedicated to learning introductory computer science. Outcomes for persisting in computer science education are remarkable; of the 16 participants who had graduated from high school as of 2012, 12 have gone on to school in computing-related majors. These outcomes, and the participants' enthusiasm for engaging in computing, are in sharp contrast to the crisis in African American male education and learning motivation. The research presented in this dissertation discusses the formative research that shaped the design of Glitch, the evaluation of the implementation of Glitch, and a theoretical investigation of the way in which participants navigated conflicting motivations in learning environments.
xPerm: fast index canonicalization for tensor computer algebra
NASA Astrophysics Data System (ADS)
Martín-García, José M.
2008-10-01
We present a very fast implementation of the Butler-Portugal algorithm for index canonicalization with respect to permutation symmetries. It is called xPerm, and has been written as a combination of a Mathematica package and a C subroutine. The latter performs the most demanding parts of the computations and can be linked from any other program or computer algebra system. We demonstrate with tests and timings the effectively polynomial performance of the Butler-Portugal algorithm with respect to the number of indices, though we also show a case in which it is exponential. Our implementation handles generic tensorial expressions with several dozen indices in hundredths of a second, or one hundred indices in a few seconds, clearly outperforming all other current canonicalizers. The code has already been under intensive testing for several years and has been essential in recent investigations in large-scale tensor computer algebra. Program summary:
Program title: xPerm
Catalogue identifier: AEBH_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBH_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 93 582
No. of bytes in distributed program, including test data, etc.: 1 537 832
Distribution format: tar.gz
Programming language: C and Mathematica (version 5.0 or higher)
Computer: Any computer running C and Mathematica (version 5.0 or higher)
Operating system: Linux, Unix, Windows XP, MacOS
RAM: 20 Mbyte
Word size: 64 or 32 bits
Classification: 1.5, 5
Nature of problem: Canonicalization of indexed expressions with respect to permutation symmetries.
Solution method: The Butler-Portugal algorithm.
Restrictions: Multiterm symmetries are not considered.
Running time: A few seconds with generic expressions of up to 100 indices. The xPermDoc.nb notebook supplied with the distribution takes approximately one and a half hours to execute in full.
LUNSORT list of lunar orbiter data by LAC area
NASA Technical Reports Server (NTRS)
Hixon, S.
1976-01-01
Lunar orbiter (missions 1-5) photographic data are listed sequentially according to the numbered (1 to 147) LAC (Lunar Aeronautical Chart) areas by use of a computer program called LUNSORT. This listing, as well as a similar one from Apollo, would simplify the task of identifying images of a given lunar area. Instructions and a sample case are included.
LamLum : a tool for evaluating the financial feasibility of laminated lumber plants
E.M. (Ted) Bilek; John F. Hunt
2006-01-01
A spreadsheet-based computer program called LamLum was created to analyze the economics of value-added laminated lumber manufacturing facilities. Such facilities manufacture laminations, typically from lower grades of structural lumber, then glue these laminations together to make various types of higher-value laminated lumber products. This report provides the...
High Powered Rocketry: Design, Construction, and Launching Experience and Analysis
ERIC Educational Resources Information Center
Paulson, Pryce; Curtis, Jarret; Bartel, Evan; Cyr, Waycen Owens; Lamsal, Chiranjivi
2018-01-01
In this study, the nuts and bolts of designing and building a high powered rocket have been presented. A computer simulation program called RockSim was used to design the rocket. Simulation results are consistent with time variations of altitude, velocity, and acceleration obtained in the actual flight. The actual drag coefficient was determined…
Heat simulation via Scilab programming
NASA Astrophysics Data System (ADS)
Hasan, Mohammad Khatim; Sulaiman, Jumat; Karim, Samsul Arifin Abdul
2014-07-01
This paper discusses the use of an open-source software package called Scilab to develop a heat simulator. In this paper, the heat equation was used to simulate heat behavior in an object. The simulator was developed using the finite difference method. Numerical experiment output shows that Scilab can produce a good heat behavior simulation with marvellous visual output while requiring only simple computer code.
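An explicit finite-difference heat simulation of the kind described above fits in a few lines. The sketch below uses Python rather than Scilab and the FTCS scheme for the 1D heat equation; the grid size, diffusivity, boundary temperatures, and end time are assumed values, not the paper's.

import numpy as np

# Explicit finite-difference (FTCS) sketch for the 1D heat equation u_t = alpha*u_xx,
# written in Python rather than Scilab.  Grid, diffusivity, boundaries and end time
# are invented values.
nx, alpha, L, t_end = 51, 1.0e-4, 1.0, 2000.0
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha              # satisfies the stability limit dt <= dx^2/(2*alpha)

u = np.zeros(nx)
u[0], u[-1] = 100.0, 0.0              # fixed boundary temperatures

t = 0.0
while t < t_end:
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    t += dt

print(u[::10])                         # temperature profile at t_end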
Peregrine System User Basics | High-Performance Computing | NREL
[The retrieved text for this entry is a fragment of an NREL HPC user-guide web page. It covers logging in to peregrine.hpc.nrel.gov or one of the login nodes from a Linux or Mac OS X system (example command "$ ssh -Y ..."), notes that user-specific information is shown in angle brackets < >, and begins a Fortran "hello" example by creating a file called hello.F90.]
A Conceptual View of the Officer Procurement Model (TOPOPS). Technical Report No. 73-73.
ERIC Educational Resources Information Center
Akman, Allan; Nordhauser, Fred
This report presents the conceptual design of a computer-based linear programming model of the Air Force officer procurement system called TOPOPS. The TOPOPS model is an aggregate model which simulates officer accession and training and is directed at optimizing officer procurement in terms of either minimizing cost or maximizing accession quality…
Talking high-tech turkey: USDA uses new software to analyze habitat management scenarios
H. Michael Rauscher; John E. Spearman; C. Preston Fout; Robert H. Giles; Mark J. Twery
2001-01-01
Researchers at the USDA Forest Service, Northeastern and Southern Research Stations, with many collaborators, have been developing a computer software product called the NED Decision Support System. This program is designed to help forestry consultants and their private landowner clients develop goals, assess current and potential conditions, provide ways to study and...
The P.E.A.C.E. Pack: A Computerized Online Assessment of School Bullying
ERIC Educational Resources Information Center
Slee, Phillip T.; Mohyla, Jury
2014-01-01
School bullying is an international problem with harmful outcomes for those involved. This study describes the design and field testing of an innovative computer-based social learning tool for assessing student perceptions of bullying developed for an Australian intervention program called the P.E.A.C.E. Pack. Students rate their peer group…
ERIC Educational Resources Information Center
Stevenson, R. D.
These materials were designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. This report describes concepts presented in another module called "The First Law of…
Computational Fluid Dynamics Program at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Holst, Terry L.
1989-01-01
The Computational Fluid Dynamics (CFD) Program at NASA Ames Research Center is reviewed and discussed. The technical elements of the CFD Program are listed and briefly discussed. These elements include algorithm research, research and pilot code development, scientific visualization, advanced surface representation, volume grid generation, and numerical optimization. Next, the discipline of CFD is briefly discussed and related to other areas of research at NASA Ames including experimental fluid dynamics, computer science research, computational chemistry, and numerical aerodynamic simulation. These areas combine with CFD to form a larger area of research, which might collectively be called computational technology. The ultimate goal of computational technology research at NASA Ames is to increase the physical understanding of the world in which we live, solve problems of national importance, and increase the technical capabilities of the aerospace community. Next, the major programs at NASA Ames that either use CFD technology or perform research in CFD are listed and discussed. Briefly, this list includes turbulent/transition physics and modeling, high-speed real gas flows, interdisciplinary research, turbomachinery demonstration computations, complete aircraft aerodynamics, rotorcraft applications, powered lift flows, high alpha flows, multiple body aerodynamics, and incompressible flow applications. Some of the individual problems actively being worked on in each of these areas are listed to help define the breadth or extent of CFD involvement in each of these major programs. State-of-the-art examples of various CFD applications are presented to highlight most of these areas. The main emphasis of this portion of the presentation is on examples which will not otherwise be treated at this conference by the individual presentations. Finally, a list of principal current limitations and expected future directions is given.
Application of a neural network to simulate analysis in an optimization process
NASA Technical Reports Server (NTRS)
Rogers, James L.; Lamarsh, William J., II
1992-01-01
A new experimental software package called NETS/PROSSS aimed at reducing the computing time required to solve a complex design problem is described. The software combines a neural network for simulating the analysis program with an optimization program. The neural network is applied to approximate results of a finite element analysis program to quickly obtain a near-optimal solution. Results of the NETS/PROSSS optimization process can also be used as an initial design in a normal optimization process and make it possible to converge to an optimum solution with significantly fewer iterations.
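The surrogate idea described above, training a cheap approximation of the analysis and letting the optimizer query the approximation, can be sketched briefly in Python. The quadratic stand-in for the finite element analysis, the network size, the bounds, and the use of scikit-learn and scipy are assumptions for the illustration; this is not NETS/PROSSS.

import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

# Fit a small neural network to samples of an "expensive" analysis, then run the
# optimizer on the cheap surrogate to locate a near-optimal design.  The analysis
# function, bounds and network size are invented for the illustration.
def expensive_analysis(x):
    # Stand-in for a finite element analysis (illustrative only).
    return (x[0] - 0.3) ** 2 + (x[1] + 0.5) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))            # sampled designs
y = np.array([expensive_analysis(x) for x in X])     # analysis results

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
surrogate.fit(X, y)

# Optimize over the surrogate; the result can seed a final run of the true analysis.
res = minimize(lambda x: surrogate.predict(x.reshape(1, -1))[0],
               x0=np.zeros(2), bounds=[(-1, 1), (-1, 1)])
print("surrogate optimum:", res.x, "true value there:", expensive_analysis(res.x))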
National meeting to review IPAD status and goals. [Integrated Programs for Aerospace-vehicle Design
NASA Technical Reports Server (NTRS)
Fulton, R. E.
1980-01-01
A joint NASA/industry project called Integrated Programs for Aerospace-vehicle Design (IPAD) is described, which has the goal of raising aerospace-industry productivity through the application of computers to integrate company-wide management of engineering data. Basically a general-purpose interactive computing system developed to support engineering design processes, the IPAD design is composed of three major software components: the executive, data management, and geometry and graphics software. Results of IPAD activities include a comprehensive description of a future representative aerospace vehicle design process and its interface to manufacturing, and requirements and preliminary design of a future IPAD software system to integrate engineering activities of an aerospace company having several products under simultaneous development.
NASA Technical Reports Server (NTRS)
Sreekanta Murthy, T.
1992-01-01
Results of the investigation of formal nonlinear programming-based numerical optimization techniques for helicopter airframe vibration reduction are summarized. The objective and constraint functions and the sensitivity expressions used in the formulation of airframe vibration optimization problems are presented and discussed. Implementation of a new computational procedure based on MSC/NASTRAN and CONMIN in a computer program system called DYNOPT for optimizing airframes subject to strength, frequency, dynamic response, and dynamic stress constraints is described. An optimization methodology is proposed which is thought to provide a new way of applying formal optimization techniques during the various phases of the airframe design process. Numerical results obtained from the application of the DYNOPT optimization code to a helicopter airframe are discussed.
Amplified crossflow disturbances in the laminar boundary layer on swept wings with suction
NASA Technical Reports Server (NTRS)
Dagenhart, J. R.
1981-01-01
Solution charts of the Orr-Sommerfeld equation for stationary crossflow disturbances are presented for 10 typical velocity profiles on a swept laminar flow control wing. The critical crossflow Reynolds number is shown to be a function of a boundary layer shape factor. Amplification rates for crossflow disturbances are shown to be proportional to the maximum crossflow velocity. A computer stability program called MARIA, employing the amplification rate data for the 10 crossflow velocity profiles, is constructed. This code is shown to adequately approximate more involved computer stability codes using less than two percent as much computer time while retaining the essential physical disturbance growth model.
Advanced information society (8)
NASA Astrophysics Data System (ADS)
Shimoda, Hirotsugu
As technologies such as computers, office automation equipment, and industrial robots have come into wide use, mental and physical fatigue called technostress, as well as health injury, has become a social issue. Some people attribute this technostress to psychological unrest created by mass communication or to computer work. Others have been conducting investigations of the stress caused by programming work and gathering information on the related symptoms. The expression and causes of technostress are diverse depending on the kind of computer-related labor; therefore, it is necessary to have delicate and detailed countermeasures against it. Ultimately, however, technostress is closely bound up with individuals' life styles and industrial climate.
NASA Technical Reports Server (NTRS)
Roth, J. P.
1972-01-01
The following problems are considered: (1) methods for developing a logic design, together with algorithms, so that it is possible to compute a test for any failure in the logic design, if such a test exists, and algorithms and heuristics for minimizing the computation of tests; and (2) a method of logic design for ultra LSI (large-scale integration). It was discovered that the so-called quantum calculus can be extended to make it possible (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures in the mechanism using the diagnosis algorithm. The development of an algorithm for the multioutput two-level minimization problem is presented, and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.
Parallel Evolutionary Optimization for Neuromorphic Network Training
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuman, Catherine D; Disney, Adam; Singh, Susheela
One of the key impediments to the success of current neuromorphic computing architectures is the issue of how best to program them. Evolutionary optimization (EO) is one promising programming technique; in particular, its wide applicability makes it especially attractive for neuromorphic architectures, which can have many different characteristics. In this paper, we explore different facets of EO on a spiking neuromorphic computing model called DANNA. We focus on the performance of EO in the design of our DANNA simulator, and on how to structure EO on both multicore and massively parallel computing systems. We evaluate how our parallel methods impact the performance of EO on Titan, the U.S.'s largest open science supercomputer, and BOB, a Beowulf-style cluster of Raspberry Pi's. We also focus on how to improve the EO by evaluating commonality in higher performing neural networks, and present the result of a study that evaluates the EO performed by Titan.
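A basic evolutionary-optimization loop with parallel fitness evaluation, of the general kind discussed above, can be sketched in Python as follows. The fitness function, population size, mutation scale, and use of a process pool are invented for the illustration; this is not the DANNA/EO code and says nothing about its actual structure on Titan or BOB.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def fitness(genome):
    # Stand-in fitness: higher is better (illustrative only).
    return -float(np.sum((genome - 0.5) ** 2))

def evolve(pop_size=64, genome_len=16, generations=50, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.0, 1.0, size=(pop_size, genome_len))
    with ProcessPoolExecutor() as pool:
        for _ in range(generations):
            # Evaluate the whole population in parallel across worker processes.
            scores = np.array(list(pool.map(fitness, pop)))
            # Keep the best half, refill by mutating randomly chosen survivors.
            survivors = pop[np.argsort(scores)[-pop_size // 2:]]
            children = survivors[rng.integers(0, len(survivors), pop_size // 2)]
            children = children + rng.normal(0.0, sigma, children.shape)
            pop = np.vstack([survivors, children])
    best = pop[np.argmax([fitness(g) for g in pop])]
    return best

if __name__ == "__main__":
    print(evolve()[:4])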
Computer-assisted instruction: a library service for the community teaching hospital.
McCorkel, J; Cook, V
1986-04-01
This paper reports on five years of experience with computer-assisted instruction (CAI) at Winthrop-University Hospital, a major affiliate of the SUNY at Stony Brook School of Medicine. It compares CAI programs available from Ohio State University and Massachusetts General Hospital (accessed by telephone and modem), and software packages purchased from the Health Sciences Consortium (MED-CAPS) and Scientific American (DISCOTEST). The comparison documents one library's experience of the cost of these programs and the use made of them by medical students, house staff, and attending physicians. It describes the space allocated for necessary equipment, as well as the marketing of CAI. Finally, in view of the decision of the National Board of Medical Examiners to administer the Part III examination on computer (the so-called CBX) starting in 1988, the paper speculates on the future importance of CAI in the community teaching hospital.
Inheritance on processes, exemplified on distributed termination detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomsen, K.S.
1987-02-01
A multiple inheritance mechanism on processes is designed and presented within the framework of a small object oriented language. Processes are described in classes, and the different action parts of a process inherited from different classes are executed in a coroutine-like style called alternation. The inheritance mechanism is a useful tool for factorizing the description of common aspects of processes. This is demonstrated within the domain of distributed programming by using the inheritance mechanism to factorize the description of distributed termination detection algorithms from the description of the distributed main computations for which termination is to be detected. A clear separation of concerns is obtained, and arbitrary combinations of termination detection algorithms and main computations can be formed. The same termination detection classes can also be used for more general purposes within distributed programming, such as detecting termination of each phase in a multi-phase main computation.
Sign: large-scale gene network estimation environment for high performance computing.
Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru
2011-01-01
Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer" which is planned to achieve 10 petaflops in 2012, and other high performance computing environments including Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and therefore are designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/ .
Reflections of Computing Experiences in a Steel Factory in the Early 1960s
NASA Astrophysics Data System (ADS)
Järvinen, Pertti
We can best see many things from a historical perspective. What were the first pioneers doing in the information technology departments of Finnish manufacturing companies? In the early 1960s, I had a special chance to work in a steel company with a long tradition of using rather advanced tools and methods to improve its productivity. The first computer in our company had such novel properties as removable disk packs, which made direct access to stored data possible. In this paper, we describe the following issues and innovations in some depth: (a) transitioning from punched-card machines to the new computer era, (b) using an advanced programming language to speed the production of new computer software, (c) drawing pictures with a line printer, (d) supporting steel making with mathematical software, (e) storing executable programs on disk memory and calling and moving them from there into core memory for execution, and (f) building a simple report generator. I also pay attention to the breakthroughs behind these innovations and in this way demonstrate how some computing solutions were taking shape at that time.
NASA Astrophysics Data System (ADS)
Esparza, Javier
In many areas of computer science entities can “reproduce”, “replicate”, or “create new instances”. Paramount examples are threads in multithreaded programs, processes in operating systems, and computer viruses, but many others exist: procedure calls create new incarnations of the callees, web crawlers discover new pages to be explored (and so “create” new tasks), divide-and-conquer procedures split a problem into subproblems, and leaves of tree-based data structures become internal nodes with children. For lack of a better name, I use the generic term systems with process creation to refer to all these entities.
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. Senate Committee on Commerce, Science, and Transportation.
This collection of statements focuses on Title 2 of S. 1067, which calls for the National Science Foundation to establish a National Research and Education Network (NREN) by 1996. This is one of several titles in a bill to provide for a coordinated federal research program to ensure continued U.S. leadership in high performance computing. The…
NASA Technical Reports Server (NTRS)
Goglia, G. L.; Spiegler, E.
1977-01-01
The research activity focused on two main tasks: (1) the further development of the SCRAM program and, in particular, the addition of a procedure for modeling the mechanism of the internal adjustment process of the flow, in response to the imposed thermal load across the combustor and (2) the development of a numerical code for the computation of the variation of concentrations throughout a turbulent field, where finite-rate reactions occur. The code also includes an estimation of the effect of the phenomenon called 'unmixedness'.
Baseline mathematics and geodetics for tracking operations
NASA Technical Reports Server (NTRS)
James, R.
1981-01-01
Various geodetic and mapping algorithms are analyzed as they apply to radar tracking systems and tested in extended BASIC computer language for real time computer applications. Closed-form approaches to the solution of converting Earth centered coordinates to latitude, longitude, and altitude are compared with classical approximations. A simplified approach to atmospheric refractivity called gradient refraction is compared with conventional ray tracing processes. An extremely detailed set of documentation which provides the theory, derivations, and application of algorithms used in the programs is included. Validation methods are also presented for testing the accuracy of the algorithms.
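As an illustration of the kind of coordinate conversion the report analyzes, the sketch below applies Bowring's well-known closed-form approximation for the WGS-84 ellipsoid to convert Earth-centered Cartesian coordinates to latitude, longitude, and altitude. It is a generic example under those assumptions, not the report's own algorithm, constants, or accuracy comparison.

```python
import math

# WGS-84 ellipsoid constants (assumed here; the report may use other values)
A = 6378137.0                     # semi-major axis, m
F = 1.0 / 298.257223563           # flattening
E2 = F * (2.0 - F)                # first eccentricity squared
B = A * (1.0 - F)                 # semi-minor axis
EP2 = (A * A - B * B) / (B * B)   # second eccentricity squared

def ecef_to_geodetic(x, y, z):
    """Bowring's approximation; returns (lat deg, lon deg, alt m). Not valid exactly at the poles."""
    p = math.hypot(x, y)
    theta = math.atan2(z * A, p * B)
    lat = math.atan2(z + EP2 * B * math.sin(theta) ** 3,
                     p - E2 * A * math.cos(theta) ** 3)
    lon = math.atan2(y, x)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)   # prime vertical radius of curvature
    alt = p / math.cos(lat) - n
    return math.degrees(lat), math.degrees(lon), alt

print(ecef_to_geodetic(6378137.0, 0.0, 0.0))   # a point on the equator -> (0.0, 0.0, 0.0)
```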
VPython: Writing Real-time 3D Physics Programs
NASA Astrophysics Data System (ADS)
Chabay, Ruth
2001-06-01
VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code, and produces an interactive realtime 3D graphical display. In a program 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL, and runs on Windows, Linux, and Macintosh.
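A minimal sketch of the programming style described above, written against the current "vpython" package (whose API descends from, but is not identical to, the 2001 Visual module): the computational loop only updates object positions, and the graphics module renders the scene.

```python
from vpython import sphere, vector, color, rate

# A ball launched with a small horizontal velocity; the graphics module renders
# the sphere and its trail while this loop does only the physics.
ball = sphere(pos=vector(0, 10, 0), radius=0.5, color=color.red, make_trail=True)
g = vector(0, -9.8, 0)      # gravitational acceleration, m/s^2
v = vector(3, 0, 0)         # initial velocity, m/s
dt = 0.01

while ball.pos.y > 0:
    rate(100)               # limit the loop to about 100 iterations per second
    v = v + g * dt          # update velocity, then position (semi-implicit Euler)
    ball.pos = ball.pos + v * dt
```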
NASA Astrophysics Data System (ADS)
Jaipal-Jamani, Kamini; Angeli, Charoula
2017-04-01
The current impetus for increasing STEM in K-12 education calls for an examination of how preservice teachers are being prepared to teach STEM. This paper reports on a study that examined elementary preservice teachers' ( n = 21) self-efficacy, understanding of science concepts, and computational thinking as they engaged with robotics in a science methods course. Data collection methods included pretests and posttests on science content, prequestionnaires and postquestionnaires for interest and self-efficacy, and four programming assignments. Statistical results showed that preservice teachers' interest and self-efficacy with robotics increased. There was a statistically significant difference between preknowledge and postknowledge scores, and preservice teachers did show gains in learning how to write algorithms and debug programs over repeated programming tasks. The findings suggest that the robotics activity was an effective instructional strategy to enhance interest in robotics, increase self-efficacy to teach with robotics, develop understandings of science concepts, and promote the development of computational thinking skills. Study findings contribute quantitative evidence to the STEM literature on how robotics develops preservice teachers' self-efficacy, science knowledge, and computational thinking skills in higher education science classroom contexts.
NASA Astrophysics Data System (ADS)
Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.
2003-12-01
Modern laptop and personal computers can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this doesn't mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or for an operating environment, that is unsupported on the local computer, (b) software libraries or utilities required to execute a program are not available on the user's computer, (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer, (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program, and (e) the computational program may require current information that is not available to all client computers. Until recently, distributed, server-based computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform- and language-independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC Community, we have implemented several computational and utility programs using a Web Service architecture. We have hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
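For concreteness, the sketch below shows the general shape of a client-side call to a remote computational service over HTTP. The endpoint URL and parameter names are hypothetical illustrations, not the actual SCEC/CME service interfaces, which were built on SOAP/WSDL.

```python
import requests

def convert_utm_to_latlon(easting, northing, zone):
    """Call a hypothetical coordinate-conversion service and return its JSON result."""
    response = requests.post(
        "https://example.org/services/coordinate-conversion",  # hypothetical endpoint
        json={"easting": easting, "northing": northing, "zone": zone},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()   # e.g. {"lat": ..., "lon": ...}

# result = convert_utm_to_latlon(500000.0, 3750000.0, 11)
```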
Orzol, Leonard L.; McGrath, Timothy S.
1992-01-01
This report documents modifications to the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model, commonly called MODFLOW, so that it can read and write files used by a geographic information system (GIS). The modified model program is called MODFLOWARC. Simulation programs such as MODFLOW generally require large amounts of input data and produce large amounts of output data. Viewing data graphically, generating head contours, and creating or editing model data arrays such as hydraulic conductivity are examples of tasks that currently are performed either with independent software packages or by tedious manual editing, manipulating, and transferring of data. GIS programs are commonly used to facilitate preparation of the model input data and to analyze model output data; however, auxiliary programs are frequently required to translate data between programs. Data translations are required when different programs use different data formats. Thus, the user might use GIS techniques to create model input data, run a translation program to convert the input data into a format compatible with the ground-water flow model, run the model, run a translation program to convert the model output into the correct format for GIS, and use GIS to display and analyze this output. MODFLOWARC avoids the two translation steps and transfers data directly to and from the ground-water flow model. This report documents the design and use of MODFLOWARC and includes instructions for data input/output of the Basic, Block-centered flow, River, Recharge, Well, Drain, Evapotranspiration, General-head boundary, and Streamflow-routing packages. Modifications to MODFLOW and the Streamflow-Routing package were kept to a minimum. Flow charts and computer-program code describe the modifications to the original computer codes for each of these packages. Appendix A contains a discussion on the operation of MODFLOWARC using a sample problem.
Expanded serial communication capability for the transport systems research vehicle laptop computers
NASA Technical Reports Server (NTRS)
Easley, Wesley C.
1991-01-01
A recent upgrade of the Transport Systems Research Vehicle (TSRV) operated by the Advanced Transport Operating Systems Program Office at the NASA Langley Research Center included installation of a number of Grid 1500 series laptop computers. Each unit is an 80386-based IBM PC clone. RS-232 data busses are needed for TSRV flight research programs, and it has been advantageous to extend the application of the Grids in this area. The expansion features of the Grid internal bus were used to add a user-programmable serial communication channel. Software to allow use of the Grid bus expansion has been written and placed in a Turbo C library for incorporation into applications programs in a transparent manner via function calls. Port setup; interrupt-driven, two-way data transfer; and software flow control are built into the library functions.
NASA Technical Reports Server (NTRS)
Poole, L. R.
1974-01-01
A study was conducted of an alternate method for storage and use of bathymetry data in the Langley Research Center and Virginia Institute of Marine Science mid-Atlantic continental-shelf wave-refraction computer program. The regional bathymetry array was divided into 105 indexed modules which can be read individually into memory in a nonsequential manner from a peripheral file using special random-access subroutines. In running a sample refraction case, a 75-percent decrease in program field length was achieved by using the random-access storage method in comparison with the conventional method of total regional array storage. This field-length decrease was accompanied by a comparative 5-percent increase in central processing time and a 477-percent increase in the number of operating-system calls. A comparative Langley Research Center computer system cost savings of 68 percent was achieved by using the random-access storage method.
BLISS: a computer program for the protection of blood donors. Technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Catsimpoolas, N.; Cooke, C.; Valeri, C.R.
1982-06-28
A BASIC program has been developed for the Hewlett-Packard Model 9845 desk-top computer which allows the creation of blood donor files for subsequent retrieval, update, and correction. A similar modified version was developed for the HP 9835 Model. This software system has been called BLISS, which stands for Blood Information and Security System. In addition to its function as a file management system, BLISS provides warnings before a donation is performed to protect the donor from excessive exposure to radioactivity and DMSO levels, from donating blood too frequently, and from adverse reactions. The program can also be used to select donors who have participated in specific studies and to list the experimental details which have been stored in the file. The BLISS system has been actively utilized at the Naval Blood Research Laboratory in Boston and contains the files of over 750 donors.
Algorithms and software for nonlinear structural dynamics
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.
1989-01-01
The objective of this research is to develop efficient methods for explicit time integration in nonlinear structural dynamics for computers which utilize both concurrency and vectorization. As a framework for these studies, the program WHAMS, which is described in Explicit Algorithms for the Nonlinear Dynamics of Shells (T. Belytschko, J. I. Lin, and C.-S. Tsay, Computer Methods in Applied Mechanics and Engineering, Vol. 42, 1984, pp 225 to 251), is used. There are two factors which make the development of efficient concurrent explicit time integration programs a challenge in a structural dynamics program: (1) the need for a variety of element types, which complicates the scheduling-allocation problem; and (2) the need for different time steps in different parts of the mesh, which is here called mixed delta t integration, so that a few stiff elements do not reduce the time steps throughout the mesh.
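A minimal sketch of the "mixed delta t" idea, under the simplifying assumption of independent single-degree-of-freedom oscillators: stiff entries are sub-cycled with smaller substeps inside each global step, so a few stiff elements do not force the whole mesh onto their small stable time step. This is illustrative only, not the WHAMS algorithm, which must also handle coupling between subdomains.

```python
import numpy as np

def advance(u, v, accel, dt_global, substeps):
    """Advance each degree of freedom explicitly; stiff ones take more, smaller substeps."""
    for i in range(len(u)):
        n = substeps[i]                   # 1 for ordinary DOFs, >1 for stiff ones
        dt = dt_global / n
        for _ in range(n):
            v[i] += accel(u, i) * dt      # semi-implicit Euler: velocity first,
            u[i] += v[i] * dt             # then position
    return u, v

k = np.array([1.0, 1.0, 64.0])            # the third "element" is much stiffer
accel = lambda u, i: -k[i] * u[i]
u, v = np.array([1.0, 1.0, 1.0]), np.zeros(3)
for _ in range(100):                      # 100 global steps of size 0.1
    u, v = advance(u, v, accel, dt_global=0.1, substeps=[1, 1, 8])
```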
Distributed and parallel Ada and the Ada 9X recommendations
NASA Technical Reports Server (NTRS)
Volz, Richard A.; Goldsack, Stephen J.; Theriault, R.; Waldrop, Raymond S.; Holzbacher-Valero, A. A.
1992-01-01
Recently, the DoD has sponsored work towards a new version of Ada, intended to support the construction of distributed systems. The revised version, often called Ada 9X, will become the new standard sometime in the 1990s. It is intended that Ada 9X should provide language features giving limited support for distributed system construction. The requirements for such features are given. Many of the most advanced computer applications involve embedded systems that are composed of parallel processors or networks of distributed computers. If Ada is to become the widely adopted language envisioned by many, it is essential that suitable compilers and tools be available to facilitate the creation of distributed and parallel Ada programs for these applications. The major language issues impacting distributed and parallel programming are reviewed, and some principles upon which distributed/parallel language systems should be built are suggested. Based upon these, alternative language concepts for distributed/parallel programming are analyzed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devine, K.D.; Hennigan, G.L.; Hutchinson, S.A.
1999-01-01
The theoretical background for the finite element computer program, MPSalsa Version 1.5, is presented in detail. MPSalsa is designed to solve laminar or turbulent low Mach number, two- or three-dimensional incompressible and variable density reacting fluid flows on massively parallel computers, using a Petrov-Galerkin finite element formulation. The code has the capability to solve coupled fluid flow (with auxiliary turbulence equations), heat transport, multicomponent species transport, and finite-rate chemical reactions, and to solve coupled multiple Poisson or advection-diffusion-reaction equations. The program employs the CHEMKIN library to provide a rigorous treatment of multicomponent ideal gas kinetics and transport. Chemical reactions occurring in the gas phase and on surfaces are treated by calls to CHEMKIN and SURFACE CHEMKIN, respectively. The code employs unstructured meshes, using the EXODUS II finite element database suite of programs for its input and output files. MPSalsa solves both transient and steady flows by using fully implicit time integration, an inexact Newton method, and iterative solvers based on preconditioned Krylov methods as implemented in the Aztec solver library.
An innovative approach to compensator design
NASA Technical Reports Server (NTRS)
Mitchell, J. R.
1972-01-01
The primary goal is to present a computer-aided compensator design technique for a control system from a frequency-domain point of view. The thesis for developing this technique is to describe the open-loop frequency response by n discrete frequency points, which result in n functions of the compensator coefficients. Several of these functions are chosen so that the system specifications are properly portrayed; then mathematical programming is used to improve all of the functions that have values below minimum standards. To do this, several definitions related to measuring the performance of a system in the frequency domain are given. Next, theorems which govern the number of compensator coefficients necessary to make improvements in a certain number of functions are proved. After this, a mathematical programming tool for aiding in the solution of the problem is developed. Then, for applying the constraint improvement algorithm, generalized gradients for the constraints are derived. Finally, the necessary theory is incorporated in a computer program called CIP (compensator improvement program).
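To make the thesis concrete, the sketch below evaluates an open-loop frequency response G(jω)C(jω) at n discrete frequency points, so that the gain and phase at each point become functions of the compensator coefficients that a mathematical-programming step could then improve. The plant and compensator are illustrative assumptions, not those in the report.

```python
import numpy as np

def freq_response(num, den, w):
    """Evaluate the rational transfer function num(s)/den(s) at s = j*w."""
    s = 1j * np.asarray(w)
    return np.polyval(num, s) / np.polyval(den, s)

w_points = np.logspace(-1, 2, 20)        # n discrete frequency points
plant = ([1.0], [1.0, 2.0, 1.0])         # G(s) = 1 / (s^2 + 2s + 1), illustrative
comp = ([2.0, 1.0], [0.1, 1.0])          # C(s) = (2s + 1) / (0.1s + 1); coefficients to be tuned

L = freq_response(*plant, w_points) * freq_response(*comp, w_points)
gain_db = 20 * np.log10(np.abs(L))       # one constraint function per frequency point
phase_deg = np.degrees(np.angle(L))
```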
MaMR: High-performance MapReduce programming model for material cloud applications
NASA Astrophysics Data System (ADS)
Jing, Weipeng; Tong, Danyu; Wang, Yangang; Wang, Jingyuan; Liu, Yaqiu; Zhao, Peng
2017-02-01
With the increasing data size in materials science, existing programming models no longer satisfy the application requirements. MapReduce is a programming model that enables the easy development of scalable parallel applications to process big data on cloud computing systems. However, this model does not directly support the processing of multiple related data sets, and its processing performance does not reflect the advantages of cloud computing. To enhance the capability of workflow applications in material data processing, we defined a programming model for material cloud applications, called MaMR, that supports multiple different Map and Reduce functions running concurrently on top of a hybrid shared-memory BSP model. An optimized data sharing strategy to supply the shared data to the different Map and Reduce stages was also designed. We added a new merge phase to MapReduce that can efficiently merge data from the map and reduce modules. Experiments showed that the model and framework deliver effective performance improvements compared to previous work.
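As background for readers unfamiliar with the model, the sketch below shows the bare MapReduce pattern (map, group by key, reduce) in plain Python. It illustrates the programming model only; it does not reproduce MaMR's concurrent Map/Reduce functions, shared-memory BSP execution, or its added merge phase.

```python
from collections import defaultdict

def map_fn(record):
    # Emit (key, value) pairs; here, word counts from one line of text.
    for word in record.split():
        yield word, 1

def reduce_fn(key, values):
    return key, sum(values)

def mapreduce(records):
    groups = defaultdict(list)
    for record in records:                 # map + shuffle (group by key)
        for key, value in map_fn(record):
            groups[key].append(value)
    return dict(reduce_fn(k, vs) for k, vs in groups.items())   # reduce

print(mapreduce(["big data on the cloud", "big cloud"]))
```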
Dispatching function calls across accelerator devices
Jacob, Arpith C.; Sallenave, Olivier H.
2017-01-10
In one embodiment, a computer-implemented method for dispatching a function call includes receiving, at a supervisor processing element (PE) and from an origin PE, an identifier of a target device, a stack frame of the origin PE, and an address of a function called from the origin PE. The supervisor PE allocates a target PE of the target device. The supervisor PE copies the stack frame of the origin PE to a new stack frame on a call stack of the target PE. The supervisor PE instructs the target PE to execute the function. The supervisor PE receives a notification that execution of the function is complete. The supervisor PE copies the stack frame of the target PE to the stack frame of the origin PE. The supervisor PE releases the target PE of the target device. The supervisor PE instructs the origin PE to resume execution of the program.
CTserver: A Computational Thermodynamics Server for the Geoscience Community
NASA Astrophysics Data System (ADS)
Kress, V. C.; Ghiorso, M. S.
2006-12-01
The CTserver platform is an Internet-based computational resource that provides on-demand services in Computational Thermodynamics (CT) to a diverse geoscience user base. This NSF-supported resource can be accessed at ctserver.ofm-research.org. The CTserver infrastructure leverages a high-quality and rigorously tested software library of routines for computing equilibrium phase assemblages and for evaluating internally consistent thermodynamic properties of materials, e.g. mineral solid solutions and a variety of geological fluids, including magmas. Thermodynamic models are currently available for 167 phases. Recent additions include Duan, Møller and Weare's model for supercritical C-O-H-S, extended to include SO2 and S2 species, and an entirely new associated solution model for O-S-Fe-Ni sulfide liquids. This software library is accessed via the CORBA Internet protocol for client-server communication. CORBA provides a standardized, object-oriented, language and platform independent, fast, low-bandwidth interface to phase property modules running on the server cluster. Network transport, language translation and resource allocation are handled by the CORBA interface. Users access server functionality in two principal ways. Clients written as browser-based Java applets may be downloaded which provide specific functionality such as retrieval of thermodynamic properties of phases, computation of phase equilibria for systems of specified composition, or modeling the evolution of these systems along some particular reaction path. This level of user interaction requires minimal programming effort and is ideal for classroom use. A more universal and flexible mode of CTserver access involves making remote procedure calls from user programs directly to the server public interface. The CTserver infrastructure relieves the user of the burden of implementing and testing the often complex thermodynamic models of real liquids and solids. A pilot application of this distributed architecture involves CFD computation of magma convection at Volcan Villarrica with magma properties and phase proportions calculated at each spatial node and at each time step via distributed function calls to MELTS-objects executing on the CTserver. Documentation and programming examples are provided at http://ctserver.ofm-research.org.
ERIC Educational Resources Information Center
Dickinson, J. Barry; Dickinson, Carleen D.
2012-01-01
This study examines the impact that experienced mentoring has on business decisions in a higher education business school. Students, arranged in teams, were given the opportunity to operate virtual companies in a well-known, business simulation program called Capsim. They were required to make decisions concerning marketing, production, finance,…
Using Activity Theory to Understand Intergenerational Play: The Case of Family Quest
ERIC Educational Resources Information Center
Siyahhan, Sinem; Barab, Sasha A.; Downton, Michael P.
2010-01-01
We implemented a five-week family program called "Family Quest" where parents and children ages 9 to 13 played Quest Atlantis, a multiuser 3D educational computer game, at a local after-school club for 90-minute sessions. We used activity theory as a conceptual and an analytical framework to study the nature of intergenerational play, the…
Statistical Tools for Determining Fitness to Fly
1981-09-01
OCR fragments of the report's data-card layout and flow charts: the card file holds 13 cards; Card 1 contains the real variable EFAIL (field length 8), described as the average number of failures for the size of control; the flow charts show input of EFAIL, CYEAR, NVAR, NAV, and XINC and computation of survival probability and frequency tables.
ERIC Educational Resources Information Center
Sharp, Steven Kary
2017-01-01
Research indicates a need for teacher education programs which include embedded computer assisted language learning (CALL) to support teachers' technological pedagogical and content knowledge (TPACK) of how to employ technology in classroom settings. Researchers also indicate a need to better understand the knowledge-base of language teacher…
ERIC Educational Resources Information Center
Arnold, Nike
2013-01-01
The ability to make effective use of technology is becoming increasingly important for prospective language teachers. As a result, many teacher preparation programs include some form of training in computer assisted language learning (CALL). This study focuses on one component of such training, the textbooks used in methods courses, and employs…
Interactive water monitoring system accessible by cordless telephone
NASA Astrophysics Data System (ADS)
Volpicelli, Richard; Andeweg, Pierre; Hagar, William G.
1985-12-01
A battery-operated, microcomputer-controlled monitoring device linked with a cordless telephone has been developed for remote measurements. This environmental sensor is self-contained and collects and processes data according to the information sent to its on-board computer system. An RCA model 1805 microprocessor forms the basic controller with a program encoded in memory for data acquisition and analysis. Signals from analog sensing devices used to monitor the environment are converted into digital signals and stored in random access memory of the microcomputer. This remote sensing system is linked to the laboratory by means of a cordless telephone whose base unit is connected to regular telephone lines. This offshore sensing system is simply accessed by a phone call originating from a computer terminal in the laboratory. Data acquisition is initiated upon request: Information continues to be processed and stored until the computer is reprogrammed by another phone call request. Information obtained may be recalled by a phone call after the desired environmental measurements are finished or while they are in progress. Data sampling parameters may be reset at any time, including in the middle of a measurement cycle. The range of the system is limited only by existing telephone grid systems and by the transmission characteristics of the cordless phone used as a communications link. This use of a cordless telephone, coupled with the on-board computer system, may be applied to other field studies requiring data transfer between an on-site analytical system and the laboratory.
Computation of transonic separated wing flows using an Euler/Navier-Stokes zonal approach
NASA Technical Reports Server (NTRS)
Kaynak, Uenver; Holst, Terry L.; Cantwell, Brian J.
1986-01-01
A computer program called Transonic Navier Stokes (TNS) has been developed which solves the Euler/Navier-Stokes equations around wings using a zonal grid approach. In the present zonal scheme, the physical domain of interest is divided into several subdomains called zones and the governing equations are solved interactively. The advantages of the Zonal Grid approach are as follows: (1) the grid for any subdomain can be generated easily; (2) grids can be, in a sense, adapted to the solution; (3) different equation sets can be used in different zones; and, (4) this approach allows for a convenient data base organization scheme. Using this code, separated flows on a NACA 0012 section wing and on the NASA Ames WING C have been computed. First, the effects of turbulence and artificial dissipation models incorporated into the code are assessed by comparing the TNS results with other CFD codes and experiments. Then a series of flow cases is described where data are available. The computed results, including cases with shock-induced separation, are in good agreement with experimental data. Finally, some futuristic cases are presented to demonstrate the abilities of the code for massively separated cases which do not have experimental data.
Orbiter Flying Qualities (OFQ) Workstation user's guide
NASA Technical Reports Server (NTRS)
Myers, Thomas T.; Parseghian, Zareh; Hogue, Jeffrey R.
1988-01-01
This project was devoted to the development of a software package, called the Orbiter Flying Qualities (OFQ) Workstation, for working with the OFQ Archives which are specially selected sets of space shuttle entry flight data relevant to flight control and flying qualities. The basic approach to creation of the workstation software was to federate and extend commercial software products to create a low cost package that operates on personal computers. Provision was made to link the workstation to large computers, but the OFQ Archive files were also converted to personal computer diskettes and can be stored on workstation hard disk drives. The primary element of the workstation developed in the project is the Interactive Data Handler (IDH) which allows the user to select data subsets from the archives and pass them to specialized analysis programs. The IDH was developed as an application in a relational database management system product. The specialized analysis programs linked to the workstation include a spreadsheet program, FREDA for spectral analysis, MFP for frequency domain system identification, and NIPIP for pilot-vehicle system parameter identification. The workstation also includes capability for ensemble analysis over groups of missions.
Biyikli, Emre; To, Albert C.
2015-01-01
A new topology optimization method called the Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and is also efficient and accurate at the same time. It is implemented into two MATLAB programs to solve the stress constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems. The method shows comparable efficiency and accuracy with an existing optimality criteria method which computes sensitivities. Also, the PTO stress constrained algorithm and minimum compliance algorithm are compared by feeding output from one algorithm to the other in an alternative manner, where the former yields lower maximum stress and volume fraction but higher compliance compared to the latter. Advantages and disadvantages of the proposed method and future works are discussed. The computer programs are self-contained and publicly shared in the website www.ptomethod.org. PMID:26678849
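A minimal sketch of the proportional update at the core of PTO: in each outer iteration the available material is distributed among elements in proportion to a demand measure (for example element stress), with no sensitivity computation, and an inner loop handles the density bounds. This shows only the update rule, assuming positive demand values; the finite-element solve and filtering of the full method are omitted.

```python
import numpy as np

def proportional_update(demand, target_volume, x_min=1e-3, x_max=1.0, inner_iters=50):
    """Distribute target_volume over elements proportionally to demand, within [x_min, x_max]."""
    x = np.full_like(demand, x_min)
    remaining = target_volume - x.sum()
    for _ in range(inner_iters):
        if remaining <= 1e-9:
            break
        x = np.clip(x + remaining * demand / demand.sum(), x_min, x_max)
        remaining = target_volume - x.sum()   # material lost to clipping is redistributed
    return x

stress = np.array([5.0, 1.0, 3.0, 0.5])       # per-element demand measure from an FE solve
densities = proportional_update(stress, target_volume=2.0)
print(densities, densities.sum())
```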
Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array
NASA Astrophysics Data System (ADS)
Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul
2008-04-01
This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
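A minimal sketch of the "work farm" pattern named above: a parallel set of identical workers with one input stream and one output stream. Ordinary Python processes stand in here for the MPPA's processor objects and self-synchronizing channels.

```python
from multiprocessing import Pool

def worker(frame):
    # Placeholder for the per-item work, e.g. processing one video frame or packet.
    return sum(frame) % 256

if __name__ == "__main__":
    input_stream = [[i, i + 1, i + 2] for i in range(1000)]
    with Pool(processes=4) as farm:                      # the set of worker objects
        output_stream = farm.map(worker, input_stream)   # one input stream, one output stream
    print(output_stream[:5])
```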
Control Data ICEM: A vendors IPAD-like system
NASA Technical Reports Server (NTRS)
Feldman, H. D.
1984-01-01
The IPAD program's goal, which was to integrate the aerospace applications used in support of the engineering design process, is discussed. This remains the key goal, and it has evolved into a design centered on the use of data base management, networking, and global user executive technology. An integrated CAD/CAM system, modeled in part after the IPAD program and containing elements of the program's goals, was developed. The Integrated Computer Aided Engineering and Manufacturing (ICEM) program started with the acquisition of AD-2000 and Synthavision. AD-2000 has evolved into a production geometry creation and drafting system called CD/2000. Synthavision has grown into a full-scale three-dimensional modeling system, the ICEM Modeler.
The benchmark aeroelastic models program: Description and highlights of initial results
NASA Technical Reports Server (NTRS)
Bennett, Robert M.; Eckstrom, Clinton V.; Rivera, Jose A., Jr.; Dansberry, Bryan E.; Farmer, Moses G.; Durham, Michael H.
1991-01-01
An experimental effort in aeroelasticity, called the Benchmark Models Program, was implemented. The primary purpose of this program is to provide the data needed to evaluate computational fluid dynamics codes for aeroelastic analysis. It also focuses on increasing the understanding of the physics of unsteady flows and providing data for empirical design. An overview of the program is given and some results obtained in the initial tests are highlighted. The completed tests include measurement of unsteady pressures during flutter of a rigid wing with a NACA 0012 airfoil section and dynamic response measurements of a flexible rectangular wing with a thick circular-arc airfoil undergoing shock boundary-layer oscillations.
NASA Technical Reports Server (NTRS)
Johnson, S. C.
1982-01-01
An interface system for passing data between a relational information management (RIM) data base complex and the Engineering Analysis Language (EAL), a finite element structural analysis program, is documented. The interface system, implemented on a CDC Cyber computer, is composed of two FORTRAN programs called RIM2EAL and EAL2RIM. RIM2EAL reads model definition data from RIM and creates a file of EAL commands to define the model. EAL2RIM reads model definition and EAL-generated analysis data from EAL's data library and stores these data directly in a RIM data base. These two interface programs and the format of the RIM data complex are described.
Programming with process groups: Group and multicast semantics
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry
1991-01-01
Process groups are a natural tool for distributed programming and are increasingly important in distributed computing environments. Discussed here is a new architecture that arose from an effort to simplify Isis process group semantics. The findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. A system based on this architecture is now being implemented in collaboration with the Chorus and Mach projects.
User's operating procedures. Volume 3: Projects directorate information programs
NASA Technical Reports Server (NTRS)
Haris, C. G.; Harris, D. K.
1985-01-01
A review of the user's operating procedures for the Scout project automatic data system, called SPADS, is presented. SPADS is the result of the past seven years of software development on a Prime minicomputer. SPADS was developed as a single-entry, multiple cross-reference data management and information retrieval system for the automation of Project office tasks, including engineering, financial, managerial, and clerical support. This volume, three of three, provides the instructions to operate the projects directorate information programs in data retrieval and file maintenance via user-friendly menu drivers.
"Do-it-yourself" software program calculates boiler efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1984-03-01
An easy-to-use software package is described which runs on the IBM Personal Computer. The package calculates boiler efficiency, an important parameter of operating costs and equipment wellbeing. The program stores inputs and calculated results for 20 sets of boiler operating data, called cases. Cases can be displayed and modified on the CRT screen through multiple display pages or copied to a printer. All intermediate calculations are performed by this package. They include: steam enthalpy; water enthalpy; air humidity; gas, oil, coal, and wood heat capacity; and radiation losses.
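A minimal sketch of the kind of calculation such a package automates, using the simple input-output definition of boiler efficiency (heat absorbed by the steam divided by heat supplied by the fuel). The enthalpy and flow figures below are illustrative inputs; the described program computes enthalpies, heat capacities, and radiation losses itself.

```python
def boiler_efficiency(steam_flow_kg_h, h_steam_kj_kg, h_feedwater_kj_kg,
                      fuel_flow_kg_h, fuel_hhv_kj_kg):
    """Input-output efficiency in percent: useful heat to steam / heat in fuel."""
    heat_out = steam_flow_kg_h * (h_steam_kj_kg - h_feedwater_kj_kg)
    heat_in = fuel_flow_kg_h * fuel_hhv_kj_kg
    return 100.0 * heat_out / heat_in

# Example case: 10 t/h of steam at ~2780 kJ/kg raised from 420 kJ/kg feedwater.
print(boiler_efficiency(10000, 2780, 420, 650, 42000))   # about 86 %
```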
An improved multiple linear regression and data analysis computer program package
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, CREDUC, and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.
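A minimal sketch of the core computation such a package performs: an ordinary least-squares multiple regression in double precision, returning coefficients, residuals, and R². NEWRAP's cross plots, t-statistics, variable rejection, and canonical reduction are not reproduced here.

```python
import numpy as np

def fit_linear(X, y):
    """Multiple linear regression y ~ X with an intercept, in float64."""
    A = np.column_stack([np.ones(len(X)), X])        # design matrix with intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    residuals = y - A @ coef
    r2 = 1.0 - residuals.var() / y.var()
    return coef, residuals, r2

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=50)
coef, residuals, r2 = fit_linear(X, y)
print(coef, r2)
```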
A Modular Three-Dimensional Finite-Difference Ground-Water Flow Model
McDonald, Michael G.; Harbaugh, Arlen W.; Guo, Weixing; Lu, Guoping
1988-01-01
This report presents a finite-difference model and its associated modular computer program. The model simulates flow in three dimensions. The report includes detailed explanations of physical and mathematical concepts on which the model is based and an explanation of how those concepts are incorporated in the modular structure of the computer program. The modular structure consists of a Main Program and a series of highly independent subroutines called 'modules.' The modules are grouped into 'packages.' Each package deals with a specific feature of the hydrologic system which is to be simulated, such as flow from rivers or flow into drains, or with a specific method of solving linear equations which describe the flow system, such as the Strongly Implicit Procedure or Slice-Successive Overrelaxation. The division of the program into modules permits the user to examine specific hydrologic features of the model independently. This also facilitates development of additional capabilities because new packages can be added to the program without modifying the existing packages. The input and output systems of the computer program are also designed to permit maximum flexibility. Ground-water flow within the aquifer is simulated using a block-centered finite-difference approach. Layers can be simulated as confined, unconfined, or a combination of confined and unconfined. Flow associated with external stresses, such as wells, areal recharge, evapotranspiration, drains, and streams, can also be simulated. The finite-difference equations can be solved using either the Strongly Implicit Procedure or Slice-Successive Overrelaxation. The program is written in FORTRAN 77 and will run without modification on most computers that have a FORTRAN 77 compiler. For each program module, this report includes a narrative description, a flow chart, a list of variables, and a module listing.
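As a toy illustration of the block-centered finite-difference idea, the sketch below solves steady confined flow in one homogeneous layer with a single pumping well, using plain successive over-relaxation (a much-simplified cousin of the Slice-Successive Overrelaxation option). The grid size, transmissivity, and pumping rate are made-up values; MODFLOW's packages and solvers are far more general.

```python
import numpy as np

nrow, ncol = 21, 21
T = 500.0                                # transmissivity of the confined layer, m^2/d
Q = -1000.0                              # well in the center cell, m^3/d (negative = withdrawal)

h = np.zeros((nrow, ncol))               # heads; outer boundary held at 0 m
src = np.zeros((nrow, ncol))
src[nrow // 2, ncol // 2] = Q / T        # source term of the finite-difference equation
                                         # (cell size cancels for a specified total well rate)
for _ in range(500):                     # successive over-relaxation sweeps
    for i in range(1, nrow - 1):
        for j in range(1, ncol - 1):
            h_gs = 0.25 * (h[i-1, j] + h[i+1, j] + h[i, j-1] + h[i, j+1] + src[i, j])
            h[i, j] += 1.7 * (h_gs - h[i, j])

print(round(h[nrow // 2, ncol // 2], 2)) # steady head (drawdown) at the well cell
```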
NASA Technical Reports Server (NTRS)
Johnson, Paul W.
2008-01-01
ePORT (electronic Project Online Risk Tool) provides a systematic approach to using an electronic database program to manage a program/project risk management process. This presentation briefly covers standard risk management procedures and then covers NASA's risk management tool, ePORT, in detail. ePORT is a web-based risk management program that provides a common framework to capture and manage risks, independent of a program's/project's size and budget. It thoroughly covers the risk management paradigm and provides standardized evaluation criteria for common management reporting. ePORT improves Product Line, Center, and Corporate Management insight, simplifies program/project manager reporting, and maintains an archive of data for historical reference.
Representation and Use of Temporal Information in ONCOCIN
Kahn, Michael G.; Ferguson, Jay C.; Shortliffe, Edward H.; Fagan, Lawrence M.
1985-01-01
The past medical history of a patient is a complex collection of events yet the understanding of these past events is critical for effective medical diagnostic and therapeutic decisions. Although computers can store vast quantities of patient data, diagnostic and therapeutic computer programs have had difficulty in accessing and analyzing collections of patient information that is clinically pertinent to a specific decision facing a particular patient at a given moment in his disease. Without some model of the patient's past, the computer cannot fully interpret the meaning of the available patient data. We present some of the difficulties that were encountered in ONCOCIN, a cancer chemotherapy planning program. This program must be able to reason about the patient's past treatment history in order to generate a therapy plan that is responsive to the problems he or she may have encountered in the past. A design is presented that supports a more intuitive approach to capture and analyze important temporal relationships in a patient's computer record. In order to represent the time course of a patient, we have implemented a structure called the temporal network and a temporal syntax for data storage and retrieval. Using this system, ONCOCIN is able to quickly obtain data that is patient-specific and context-sensitive. Adding the temporal network to the ONCOCIN system has markedly improved the program's handling of complex temporal issues.
Parallel Computation of the Jacobian Matrix for Nonlinear Equation Solvers Using MATLAB
NASA Technical Reports Server (NTRS)
Rose, Geoffrey K.; Nguyen, Duc T.; Newman, Brett A.
2017-01-01
Demonstrating speedup for parallel code on a multicore shared memory PC can be challenging in MATLAB due to underlying parallel operations that are often opaque to the user. This can limit potential for improvement of serial code even for the so-called embarrassingly parallel applications. One such application is the computation of the Jacobian matrix inherent to most nonlinear equation solvers. Computation of this matrix represents the primary bottleneck in nonlinear solver speed such that commercial finite element (FE) and multi-body-dynamic (MBD) codes attempt to minimize computations. A timing study using MATLAB's Parallel Computing Toolbox was performed for numerical computation of the Jacobian. Several approaches for implementing parallel code were investigated while only the single program multiple data (spmd) method using composite objects provided positive results. Parallel code speedup is demonstrated but the goal of linear speedup through the addition of processors was not achieved due to PC architecture.
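A minimal serial sketch of the kernel being parallelized: a forward finite-difference approximation of the Jacobian, in which each column (one perturbed function evaluation) is independent and is what gets distributed across workers in the approach above. Python with NumPy is used here for illustration rather than MATLAB's spmd construct.

```python
import numpy as np

def jacobian_fd(f, x, eps=1e-7):
    """Approximate J[i, j] = d f_i / d x_j by forward differences."""
    fx = f(x)
    J = np.empty((fx.size, x.size))
    for j in range(x.size):              # each column is an independent evaluation
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - fx) / eps
    return J

f = lambda x: np.array([x[0] ** 2 + x[1], np.sin(x[0]) * x[1]])
print(jacobian_fd(f, np.array([1.0, 2.0])))
```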
AUTOMATED FLOWCHART SYSTEM FROM TEXAS A&M UNIVERSITY
NASA Technical Reports Server (NTRS)
Woodford, W.
1994-01-01
An accurate flowchart is an important part of the documentation for any computer program. The flowchart offers the user an easy to follow overview of program operation and the maintenance programmer an effective debugging tool. The TAMU FLOWCHART System was developed to flowchart any program written in the FORTRAN language. It generates a line printer flowchart which is representative of the program logic. This flowchart provides the user with a detailed representation of the program action taken as each program statement is executed. The TAMU FLOWCHART System should prove to be a valuable aid to groups working with complex FORTRAN programs. Each statement in the program is displayed within a symbol which represents the program action during processing of the enclosed statement. Symbols available include: subroutine, function, and entry statements; arithmetic statements; input and output statements; arithmetical and logical IF statements; subroutine calls with or without argument list returns; computed and assigned GO TO statements; DO statements; STOP and RETURN statements; and CONTINUE and ASSIGN statements. Comment cards within the source program may be suppressed or displayed and associated with a succeeding source statement. Each symbol is annotated with a label (if present in the source code), a block number, and the statement sequence number. Program flow and options within the program are represented by line segments and direction indicators connecting symbols. The TAMU FLOWCHART System should be able to accurately flowchart any working FORTRAN program. This program is written in COBOL for batch execution and has been implemented on an IBM 370 series computer with an OS operating system and with a central memory requirement of approximately 380K of 8 bit bytes. The TAMU FLOWCHART System was developed in 1977.
Nuclear Weapon Environment Model. Volume II. Computer Code User’s Guide.
1979-02-01
OCR fragments of the report documentation page and a program flow chart: performing organization TRW Defense and Space Systems Group; flow-chart steps include grid setup, optional diagnostic printing, a Y-loop with increments, and a call to subroutine SIZER.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palermo, M.R.; Schroeder, P.R.
This technical note describes a technique for comparison of the predicted quality of effluent discharged from confined dredged material disposal areas with applicable water quality standards. This note also serves as documentation of a computer program called EFQUAL written for that purpose as part of the Automated Dredging and Disposal Alternatives Management System (ADDAMS).
ERIC Educational Resources Information Center
Ros, S.; Robles-Gomez, A.; Hernandez, R.; Caminero, A. C.; Pastor, R.
2012-01-01
This paper outlines the adaptation of a course on the management of network services in operating systems, called NetServicesOS, to the context of the new European Higher Education Area (EHEA). NetServicesOS is a mandatory course in one of the official graduate programs in the Faculty of Computer Science at the Universidad Nacional de Educacion a…
2009-05-14
Fragmentary footnote text referencing two laws: one, available at http://www.nitrd.gov/congressional/laws/pl_102-194.html, supported high-performance computing R&D and called for increased interagency planning and coordination; the other is the Next Generation Internet Research Act of 1998 (P.L. 105-305, 15 U.S.C.).
Gonçalves, Cristina P; Mohallem, José R
2004-11-15
We report the development of a simple algorithm to modify quantum chemistry codes based on the LCAO procedure, to account for the isotope problem in electronic structure calculations. No extra computations are required compared to standard Born-Oppenheimer calculations. An upgrade of the Gamess package called ISOTOPE is presented, and its applicability is demonstrated in some examples.
ERIC Educational Resources Information Center
Lane, David J.; Lindemann, Dana F.; Schmidt, James A.
2012-01-01
The National Institute of Alcohol Abuse and Alcoholism has called for the use of evidence-based approaches to address high-risk drinking prevalent on many college campuses. In line with this recommendation, the present study evaluated the efficacy of two evidence-based approaches to reducing alcohol use. One hundred and three college students in…
Patrol force allocation for law enforcement: An introductory planning guide
NASA Technical Reports Server (NTRS)
Sohn, R. L.; Kennedy, R. D.
1976-01-01
Previous and current methods for analyzing police patrol forces are reviewed and discussed. The steps in developing an allocation analysis procedure are defined, including the prediction of the rate of calls for service, determination of the number of patrol units needed, designing sectors, and analyzing dispatch strategies. Existing computer programs used for this purpose are briefly described, and some results of their application are given.
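As a toy illustration of the first two planning steps listed above, the sketch below sizes a patrol force from a predicted rate of calls for service and an average service time under a simple workload model; the figures and the 50 % utilization target are illustrative assumptions, not values from the guide.

```python
import math

def units_needed(calls_per_hour, avg_service_minutes, target_utilization=0.5):
    """Patrol units required so that time spent on calls stays below the target fraction."""
    workload = calls_per_hour * avg_service_minutes / 60.0   # busy unit-hours per hour
    return math.ceil(workload / target_utilization)

print(units_needed(calls_per_hour=12, avg_service_minutes=35))   # -> 14 units
```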
NASA Astrophysics Data System (ADS)
Marchand, R.; Purschke, D.; Samson, J.
2013-03-01
Understanding the physics of interaction between satellites and the space environment is essential in planning and exploiting space missions. Several computer models have been developed over the years to study this interaction. In all cases, simulations are carried out in the reference frame of the spacecraft, and effects such as charging and the formation of electrostatic sheaths and wakes are calculated for given conditions of the space environment. In this paper we present a program used to compute magnetic fields and a number of space plasma and space environment parameters relevant to Low Earth Orbit (LEO) spacecraft-plasma interaction modeling. Magnetic fields are obtained from the International Geomagnetic Reference Field (IGRF) and plasma parameters are obtained from the International Reference Ionosphere (IRI) model. All parameters are computed in the spacecraft frame of reference as a function of its six Keplerian elements. They are presented in a format that can be used directly in most spacecraft-plasma interaction models.
Catalogue identifier: AENY_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENY_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 270308
No. of bytes in distributed program, including test data, etc.: 2323222
Distribution format: tar.gz
Programming language: FORTRAN 90
Computer: Non specific
Operating system: Non specific
RAM: 7.1 MB
Classification: 19, 4.14
External routines: IRI, IGRF (included in the package)
Nature of problem: Compute magnetic field components, direction of the sun, sun visibility factor, and approximate plasma parameters in the reference frame of a Low Earth Orbit satellite.
Solution method: Orbit integration, calls to the IGRF and IRI libraries, and transformation of coordinates from the geocentric frame to the spacecraft reference frame.
Restrictions: Low Earth orbits, altitudes between 150 and 2000 km.
Running time: Approximately two seconds to parameterize a full orbit with 1000 points.
Building flexible real-time systems using the Flex language
NASA Technical Reports Server (NTRS)
Kenny, Kevin B.; Lin, Kwei-Jay
1991-01-01
The design and implementation of a real-time programming language called Flex, which is a derivative of C++, are presented. It is shown how different types of timing requirements might be expressed and enforced in Flex, how they might be fulfilled in a flexible way using different program models, and how the programming environment can help in making binding and scheduling decisions. The timing constraint primitives in Flex are easy to use yet powerful enough to define both independent and relative timing constraints. Program models like imprecise computation and performance polymorphism allow flexible real-time programs to be carried out. In addition, programmers can use a performance measurement tool that produces statistically correct timing models to predict the expected execution time of a program and to help make binding decisions. A real-time programming environment is also presented.
49 CFR 198.37 - State one-call damage prevention program.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false State one-call damage prevention program. 198.37... REGULATIONS FOR GRANTS TO AID STATE PIPELINE SAFETY PROGRAMS Adoption of One-Call Damage Prevention Program § 198.37 State one-call damage prevention program. A State must adopt a one-call damage prevention...
Enabling Wide-Scale Computer Science Education through Improved Automated Assessment Tools
NASA Astrophysics Data System (ADS)
Boe, Bryce A.
There is a proliferating demand for newly trained computer scientists as the number of computer science related jobs continues to increase. University programs will only be able to train enough new computer scientists to meet this demand when two things happen: when there are more primary and secondary school students interested in computer science, and when university departments have the resources to handle the resulting increase in enrollment. To meet these goals, significant effort is being made to both incorporate computational thinking into existing primary school education, and to support larger university computer science class sizes. We contribute to this effort through the creation and use of improved automated assessment tools. To enable wide-scale computer science education we do two things. First, we create a framework called Hairball to support the static analysis of Scratch programs targeted for fourth, fifth, and sixth grade students. Scratch is a popular building-block language utilized to pique interest in and teach the basics of computer science. We observe that Hairball allows for rapid curriculum alterations and thus contributes to wide-scale deployment of computer science curriculum. Second, we create a real-time feedback and assessment system utilized in university computer science classes to provide better feedback to students while reducing assessment time. Insights from our analysis of student submission data show that modifications to the system configuration support the way students learn and progress through course material, making it possible for instructors to tailor assignments to optimize learning in growing computer science classes.
Integrating Computer-Assisted Language Learning in Saudi Schools: A Change Model
ERIC Educational Resources Information Center
Alresheed, Saleh; Leask, Marilyn; Raiker, Andrea
2015-01-01
Computer-assisted language learning (CALL) technology and pedagogy have gained recognition globally for their success in supporting second language acquisition (SLA). In Saudi Arabia, the government aims to provide most educational institutions with computers and networking for integrating CALL into classrooms. However, the recognition of CALL's…
MODFLOW-2005 : the U.S. Geological Survey modular ground-water model--the ground-water flow process
Harbaugh, Arlen W.
2005-01-01
This report presents MODFLOW-2005, which is a new version of the finite-difference ground-water model commonly called MODFLOW. Ground-water flow is simulated using a block-centered finite-difference approach. Layers can be simulated as confined or unconfined. Flow associated with external stresses, such as wells, areal recharge, evapotranspiration, drains, and rivers, also can be simulated. The report includes detailed explanations of physical and mathematical concepts on which the model is based, an explanation of how those concepts are incorporated in the modular structure of the computer program, instructions for using the model, and details of the computer code. The modular structure consists of a MAIN Program and a series of highly independent subroutines. The subroutines are grouped into 'packages.' Each package deals with a specific feature of the hydrologic system that is to be simulated, such as flow from rivers or flow into drains, or with a specific method of solving the set of simultaneous equations resulting from the finite-difference method. Several solution methods are incorporated, including the Preconditioned Conjugate-Gradient method. The division of the program into packages permits the user to examine specific hydrologic features of the model independently. This also facilitates development of additional capabilities because new packages can be added to the program without modifying the existing packages. The input and output systems of the computer program also are designed to permit maximum flexibility. The program is designed to allow other capabilities, such as transport and optimization, to be incorporated, but this report is limited to describing the ground-water flow capability. The program is written in Fortran 90 and will run without modification on most computers that have a Fortran 90 compiler.
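A toy sketch of the block-centered finite-difference idea the report describes, reduced to steady two-dimensional flow with uniform transmissivity, fixed heads on two boundaries, and one pumping well, solved with plain Jacobi sweeps. This is only an illustration of the discretization; MODFLOW itself is far more general and offers solvers such as the Preconditioned Conjugate-Gradient method named in the abstract. All numbers are invented.

import numpy as np

nrow, ncol = 20, 30
dx = dy = 100.0              # cell size, m
T = 500.0                    # transmissivity, m^2/d
h_left, h_right = 100.0, 90.0
q = np.zeros((nrow, ncol))
q[10, 15] = -1000.0          # well withdrawal, m^3/d

head = np.full((nrow, ncol), 95.0)
for _ in range(5000):                                 # Jacobi-style sweeps
    new = head.copy()
    new[1:-1, 1:-1] = 0.25 * (head[:-2, 1:-1] + head[2:, 1:-1]
                              + head[1:-1, :-2] + head[1:-1, 2:]
                              + q[1:-1, 1:-1] / T)    # 5-point balance per cell
    new[0, :], new[-1, :] = new[1, :], new[-2, :]     # no-flow top/bottom
    new[:, 0], new[:, -1] = h_left, h_right           # fixed-head left/right
    head = new

print(head[10, 13:18].round(2))   # heads drawn down around the pumped cell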
Optimization Issues with Complex Rotorcraft Comprehensive Analysis
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.; Tarzanin, Frank J.; Hirsh, Joel E.; Young, Darrell K.
1998-01-01
This paper investigates the use of the general purpose automatic differentiation (AD) tool called Automatic Differentiation of FORTRAN (ADIFOR) as a means of generating sensitivity derivatives for use in Boeing Helicopters' proprietary comprehensive rotor analysis code (VII). ADIFOR transforms an existing computer program into a new program that performs a sensitivity analysis in addition to the original analysis. In this study both the pros (exact derivatives, no step-size problems) and cons (more CPU, more memory) of ADIFOR are discussed. The size (based on the number of lines) of the VII code after ADIFOR processing increased by 70 percent and resulted in substantial computer memory requirements at execution. The ADIFOR derivatives took about 75 percent longer to compute than the finite-difference derivatives. However, the ADIFOR derivatives are exact and are not functions of step size. The VII sensitivity derivatives generated by ADIFOR are compared with finite-difference derivatives. The ADIFOR and finite-difference derivatives are used in three optimization schemes to solve a low vibration rotor design problem.
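The exact-derivative versus step-size tradeoff described here can be shown with a tiny example. ADIFOR itself is a Fortran source-to-source transformer; the sketch below instead uses operator-overloading dual numbers in Python to stand in for forward-mode AD, and compares the exact derivative with a central finite difference. The function f is a made-up stand-in for a response quantity.

import math

class Dual:
    """Forward-mode AD value: carries f and df together (operator overloading
    here stands in for ADIFOR's source-to-source transformation)."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__
    def sin(self):
        return Dual(math.sin(self.val), math.cos(self.val) * self.der)

def f(x):  # hypothetical response quantity
    return (3.0 * x * x + x.sin()) if isinstance(x, Dual) else 3.0 * x * x + math.sin(x)

x0 = 1.3
exact = f(Dual(x0, 1.0)).der               # AD: exact, no step size to choose
h = 1e-6
fd = (f(x0 + h) - f(x0 - h)) / (2 * h)     # finite difference: depends on h
print(exact, fd)                            # both near 6*1.3 + cos(1.3) = 8.0675...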
Kernel-based Linux emulation for Plan 9.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minnich, Ronald G.
2010-09-01
CNKemu is a kernel-based system for the 9k variant of the Plan 9 kernel. It is designed to provide transparent binary support for programs compiled for IBM's Compute Node Kernel (CNK) on the Blue Gene series of supercomputers. This support allows users to build applications with the standard Blue Gene toolchain, including C++ and Fortran compilers. While the CNK is not Linux, IBM designed the CNK so that the user interface has much in common with the Linux 2.0 system call interface. The Plan 9 CNK emulator hence provides the foundation of kernel-based Linux system call support on Plan 9. In this paper we discuss cnkemu's implementation and some of its more interesting features, such as the ability to easily intermix Plan 9 and Linux system calls.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollman, David; Lifflander, Jonathon; Wilke, Jeremiah
2017-03-14
DARMA is a portability layer for asynchronous many-task (AMT) runtime systems. AMT runtime systems show promise to mitigate challenges imposed by next generation high performance computing architectures. However, current runtime system technologies are not production-ready. DARMA is a portability layer that seeks to insulate application developers from idiosyncrasies of individual runtime systems, thereby facilitating application-developer use of these technologies. DARMA comprises a frontend application programming interface (API) for application developers, a backend API for runtime system developers, and a translation layer that translates frontend API calls into backend API calls. Application developers use C++ abstractions to annotate both data and tasks in their code. The DARMA translation layer uses C++ template metaprogramming to capture data-task dependencies, and provides this information to a potential backend runtime system via a series of backend API calls.
Parallel Computational Protein Design.
Zhou, Yichao; Donald, Bruce R; Zeng, Jianyang
2017-01-01
Computational structure-based protein design (CSPD) is an important problem in computational biology, which aims to design or improve a prescribed protein function based on a protein structure template. It provides a practical tool for real-world protein engineering applications. A popular CSPD method that guarantees to find the global minimum energy conformation (GMEC) is to combine both dead-end elimination (DEE) and A* tree search algorithms. However, in this framework, the A* search algorithm can run in exponential time in the worst case, which may become the computational bottleneck of a large-scale computational protein design process. To address this issue, we extend and add a new module to the OSPREY program that was previously developed in the Donald lab (Gainza et al., Methods Enzymol 523:87, 2013) to implement a GPU-based massively parallel A* algorithm for improving the protein design pipeline. By exploiting the modern GPU computational framework and optimizing the computation of the heuristic function for A* search, our new program, called gOSPREY, can provide up to four orders of magnitude speedups in large protein design cases with a small memory overhead compared to the traditional A* search algorithm implementation, while still guaranteeing optimality. In addition, gOSPREY can be configured to run in a bounded-memory mode to tackle problems in which the conformation space is too large for the globally optimal solution to be computed otherwise. Furthermore, the GPU-based A* algorithm implemented in the gOSPREY program can be combined with state-of-the-art rotamer pruning algorithms such as iMinDEE (Gainza et al., PLoS Comput Biol 8:e1002335, 2012) and DEEPer (Hallen et al., Proteins 81:18-39, 2013) to also consider continuous backbone and side-chain flexibility.
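A generic A* sketch showing the optimality property the DEE/A* framework relies on: with an admissible heuristic, the first goal node popped is the global minimum-cost assignment. This is not gOSPREY's GPU implementation; the toy "rotamer" tree, its costs, and the zero heuristic are all invented for illustration.

import heapq

def astar(start, neighbors, heuristic, is_goal):
    """Generic A*: expands the node with the smallest g + h first.
    With an admissible heuristic, the first goal popped is optimal."""
    frontier = [(heuristic(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if is_goal(node):
            return path, g
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + heuristic(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Hypothetical toy: assign a "rotamer" (0 or 1) at each of three positions;
# edge costs mimic energies, and the heuristic is 0 (trivially admissible).
def neighbors(node):
    if len(node) == 3:
        return []
    return [(node + (r,), 1.0 + 0.5 * r) for r in (0, 1)]

path, energy = astar((), neighbors, lambda n: 0.0, lambda n: len(n) == 3)
print(path[-1], energy)   # lowest-cost assignment: (0, 0, 0) with cost 3.0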
NASA Technical Reports Server (NTRS)
Barry, Matthew R.
2006-01-01
The X-Windows Socket Widget Class ("Class" is used here in the object-oriented-programming sense of the word) was devised to simplify the task of implementing network connections for graphical-user-interface (GUI) computer programs. UNIX Transmission Control Protocol/Internet Protocol (TCP/IP) socket programming libraries require many method calls to configure, operate, and destroy sockets. Most X Windows GUI programs use widget sets or toolkits to facilitate management of complex objects. The widget standards facilitate construction of toolkits and application programs. The X-Windows Socket Widget Class encapsulates UNIX TCP/IP socket-management tasks within the framework of an X Windows widget. Using the widget framework, X Windows GUI programs can treat one or more network socket instances in the same manner as other graphical widgets, making it easier to program sockets. Wrapping TCP/IP socket programming libraries inside a widget framework enables a programmer to treat a network interface as though it were a GUI.
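A rough Python analogue of the idea in this abstract, not the X Windows widget itself: hide the several configure/connect/cleanup socket calls behind one object with a data callback, so application code treats a connection like any other component. The host, port, and callback in the usage comment are hypothetical.

import socket

class SocketWidget:
    """Wrap socket setup, sending, receiving, and teardown in one object,
    mimicking the widget-style encapsulation described in the abstract."""
    def __init__(self, host, port, on_data=None):
        self.on_data = on_data or (lambda data: None)
        self.sock = socket.create_connection((host, port), timeout=5.0)
    def send(self, payload: bytes):
        self.sock.sendall(payload)
    def poll(self, bufsize=4096):
        data = self.sock.recv(bufsize)      # in a GUI this would be event-loop driven
        if data:
            self.on_data(data)
        return data
    def close(self):
        self.sock.close()

# Hypothetical usage against some local service on port 7777:
# w = SocketWidget("localhost", 7777, on_data=print)
# w.send(b"hello\n"); w.poll(); w.close()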
SPARX, a new environment for Cryo-EM image processing.
Hohn, Michael; Tang, Grant; Goodyear, Grant; Baldwin, P R; Huang, Zhong; Penczek, Pawel A; Yang, Chao; Glaeser, Robert M; Adams, Paul D; Ludtke, Steven J
2007-01-01
SPARX (single particle analysis for resolution extension) is a new image processing environment with a particular emphasis on transmission electron microscopy (TEM) structure determination. It includes a graphical user interface that provides a complete graphical programming environment with a novel data/process-flow infrastructure, an extensive library of Python scripts that perform specific TEM-related computational tasks, and a core library of fundamental C++ image processing functions. In addition, SPARX relies on the EMAN2 library and cctbx, the open-source computational crystallography library from PHENIX. The design of the system is such that future inclusion of other image processing libraries is a straightforward task. The SPARX infrastructure intelligently handles retention of intermediate values, even those inside programming structures such as loops and function calls. SPARX and all dependencies are free for academic use and available with complete source.
NASA Technical Reports Server (NTRS)
Rule, William Keith
1991-01-01
A computer program called BALLIST that is intended to be a design tool for engineers is described. BALLIST empirically predicts the bumper thickness required to prevent perforation of the Space Station pressure wall by a projectile (such as orbital debris) as a function of the projectile's velocity. 'Ballistic' limit curves (bumper thickness vs. projectile velocity) are calculated and are displayed on the screen as well as being stored in an ASCII file. A Whipple style of spacecraft wall configuration is assumed. The predictions are based on a database of impact test results. NASA/Marshall Space Flight Center currently has the capability to generate such test results. Numerical simulation results of impact conditions that cannot be tested (high velocities or large particles) can also be used for predictions.
Electron tunneling in proteins program.
Hagras, Muhammad A; Stuchebrukhov, Alexei A
2016-06-05
We developed a unique integrated software package (called Electron Tunneling in Proteins Program or ETP) which provides an environment with different capabilities such as tunneling current calculation, semi-empirical quantum mechanical calculation, and molecular modeling simulation for calculation and analysis of electron transfer reactions in proteins. The ETP program is developed as a cross-platform client-server program in which all the different calculations are conducted at the server side while only the client terminal displays the resulting calculation outputs in the different supported representations. The ETP program is integrated with a set of well-known computational software packages including Gaussian, BALLVIEW, Dowser, pKip, and APBS. In addition, the ETP program supports various visualization methods for the tunneling calculation results that assist in a more comprehensive understanding of the tunneling process. © 2016 Wiley Periodicals, Inc.
GCLAS: a graphical constituent loading analysis system
McKallip, T.E.; Koltun, G.F.; Gray, J.R.; Glysson, G.D.
2001-01-01
The U. S. Geological Survey has developed a program called GCLAS (Graphical Constituent Loading Analysis System) to aid in the computation of daily constituent loads transported in streamflow. Because most water-quality data are collected relatively infrequently, computation of daily constituent loads is moderately to highly dependent on human interpretation of the relation between stream hydraulics and constituent transport. GCLAS provides a visual environment for evaluating the relation between hydraulic and other covariate time series and the constituent chemograph. GCLAS replaces the computer program Sedcalc, which had been the most recent USGS-sanctioned tool for constructing sediment chemographs and computing suspended-sediment loads. Written in a portable language, GCLAS has an interactive graphical interface that permits easy entry of estimated values and provides new tools to aid in making those estimates. The use of a portable language for program development imparts a degree of computer platform independence that was difficult to obtain in the past, making implementation more straightforward within the USGS's diverse computing environment. Some of the improvements introduced in GCLAS include (1) the ability to directly handle periods of zero or reverse flow, (2) the ability to analyze and apply coefficient adjustments to concentrations as a function of time, streamflow, or both, (3) the ability to compute discharges of constituents other than suspended sediment, (4) the ability to easily view data related to the chemograph at different levels of detail, and (5) the ability to readily display covariate time series data to provide enhanced visual cues for drawing the constituent chemograph.
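The load arithmetic underlying programs of this kind is simply discharge times concentration times a unit conversion; the factor 0.0027 below is approximately the conversion from ft^3/s times mg/L to short tons per day. The daily flow and concentration values in the example are invented; in practice the concentrations would come from the analyst-drawn chemograph that GCLAS supports.

import numpy as np

def daily_load_tons(discharge_cfs, concentration_mg_l):
    """Constituent load in (short) tons/day from streamflow in ft^3/s and
    concentration in mg/L; 0.0027 is approximately the unit-conversion factor."""
    return 0.0027 * np.asarray(discharge_cfs) * np.asarray(concentration_mg_l)

# Hypothetical daily mean flows paired with estimated daily concentrations.
flows = np.array([120.0, 450.0, 800.0, 300.0])   # ft^3/s
concs = np.array([15.0, 220.0, 480.0, 90.0])     # mg/L suspended sediment
print(daily_load_tons(flows, concs))              # tons/day for each day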
Alloy Design Workbench-Surface Modeling Package Developed
NASA Technical Reports Server (NTRS)
Abel, Phillip B.; Noebe, Ronald D.; Bozzolo, Guillermo H.; Good, Brian S.; Daugherty, Elaine S.
2003-01-01
NASA Glenn Research Center's Computational Materials Group has integrated a graphical user interface with in-house-developed surface modeling capabilities, with the goal of using computationally efficient atomistic simulations to aid the development of advanced aerospace materials, through the modeling of alloy surfaces, surface alloys, and segregation. The software is also ideal for modeling nanomaterials, since surface and interfacial effects can dominate material behavior and properties at this level. Through the combination of an accurate atomistic surface modeling methodology and an efficient computational engine, it is now possible to directly model these types of surface phenomena and metallic nanostructures without a supercomputer. Fulfilling a High Operating Temperature Propulsion Components (HOTPC) project level-I milestone, a graphical user interface was created for a suite of quantum approximate atomistic materials modeling Fortran programs developed at Glenn. The resulting "Alloy Design Workbench-Surface Modeling Package" (ADW-SMP) is the combination of proven quantum approximate Bozzolo-Ferrante-Smith (BFS) algorithms (refs. 1 and 2) with a productivity-enhancing graphical front end. Written in the portable, platform-independent Java programming language, the graphical user interface calls on extensively tested Fortran programs running in the background for the detailed computational tasks. Designed to run on desktop computers, the package has been deployed on PC, Mac, and SGI computer systems. The graphical user interface integrates two modes of computational materials exploration. One mode uses Monte Carlo simulations to determine lowest energy equilibrium configurations. The second approach is an interactive "what if" comparison of atomic configuration energies, designed to provide real-time insight into the underlying drivers of alloying processes.
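A generic Metropolis Monte Carlo sketch of the "lowest energy equilibrium configuration" mode mentioned above. It swaps two atom species on a one-dimensional lattice under a toy pair energy; the BFS energetics and surface geometry of the actual package are not reproduced, and all parameters below are invented.

import math, random

# Toy pair energies: unlike (A-B) bonds are favored, so ordering should emerge.
pair_energy = {("A", "A"): -0.10, ("B", "B"): -0.10, ("A", "B"): -0.25, ("B", "A"): -0.25}

def energy(config):
    return sum(pair_energy[(config[i], config[i + 1])] for i in range(len(config) - 1))

random.seed(0)
config = ["A"] * 8 + ["B"] * 8
random.shuffle(config)
kT = 0.05
for _ in range(20000):
    i, j = random.randrange(len(config)), random.randrange(len(config))
    trial = config[:]
    trial[i], trial[j] = trial[j], trial[i]          # propose a swap
    dE = energy(trial) - energy(config)
    if dE <= 0 or random.random() < math.exp(-dE / kT):   # Metropolis acceptance
        config = trial

print("".join(config), energy(config))   # an alternating A/B arrangement is favored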
NASA Technical Reports Server (NTRS)
Raju, I. S.; Newman, J. C., Jr.
1993-01-01
A computer program, surf3d, that uses the 3D finite-element method to calculate the stress-intensity factors for surface, corner, and embedded cracks in finite-thickness plates with and without circular holes, was developed. The cracks are assumed to be either elliptic or part-elliptic in shape. The computer program uses eight-noded hexahedral elements to model the solid. The program uses a skyline storage and solver. The stress-intensity factors are evaluated using the force method, the crack-opening displacement method, and the 3-D virtual crack closure method. In the manual the input to and the output of the surf3d program are described. This manual also demonstrates the use of the program and describes the calculation of the stress-intensity factors. Several examples with sample data files are included with the manual. To facilitate modeling of the user's crack configuration and loading, a companion preprocessor program called gensurf, which generates the data for surf3d, was also developed. The gensurf program is a three-dimensional mesh generator that requires minimal input and builds a complete data file for surf3d. The program surf3d is operational on Unix machines such as the CRAY Y-MP, CRAY-2, and Convex C-220.
Fractal Analysis of Rock Joint Profiles
NASA Astrophysics Data System (ADS)
Audy, Ondřej; Ficker, Tomáš
2017-10-01
Surface reliefs of rock joints are analyzed in geotechnics when shear strength of rocky slopes is estimated. The rock joint profiles actually are self-affine fractal curves and computations of their fractal dimensions require special methods. Many papers devoted to the fractal properties of these profiles were published in the past but only a few of those papers employed a convenient computational method that would have guaranteed a sound value of that dimension. As a consequence, anomalously low dimensions were presented. This contribution deals with two computational modifications that lead to sound fractal dimensions of the self-affine rock joint profiles. These are the modified box-counting method and the modified yard-stick method sometimes called the compass method. Both these methods are frequently applied to self-similar fractal curves but the self-affine profile curves due to their self-affine nature require modified computational procedures implemented in computer programs.
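For reference, the baseline procedure the paper modifies can be sketched directly: cover the sampled profile with boxes of decreasing size, count occupied boxes, and fit the slope of log N against log(1/s). The sketch below is only this unmodified box-counting baseline (the paper's point is that self-affine joint profiles need an adapted version), and the synthetic profile is invented, not a measured rock joint.

import numpy as np

def box_count_dimension(x, y, sizes):
    """Plain box-counting estimate of a profile's fractal dimension."""
    counts = []
    for s in sizes:
        ix = np.floor(x / s).astype(int)      # box column index
        iy = np.floor(y / s).astype(int)      # box row index
        counts.append(len(set(zip(ix, iy))))  # number of occupied boxes
    # Slope of log N versus log(1/s) is the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

x = np.linspace(0.0, 1.0, 4096)
rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(scale=0.002, size=x.size))   # synthetic rough profile
print(round(box_count_dimension(x, y, sizes=[0.002, 0.004, 0.008, 0.016, 0.032]), 2))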
ERIC Educational Resources Information Center
Pederson, Kathleen Marshall
The status of research on computer-assisted language learning (CALL) is explored beginning with a historical perspective of research on the language laboratory, followed by analyses of applied research on CALL. A theoretical base is provided to illustrate the need for more basic research on CALL that considers computer capabilities, learner…
NASA Technical Reports Server (NTRS)
Bergeron, H. P.; Haynie, A. T.; Mcdede, J. B.
1980-01-01
A general aviation single pilot instrument flight rule simulation capability was developed. Problems experienced by single pilots flying in IFR conditions were investigated. The simulation required a three dimensional spatial navaid environment of a flight navigational area. A computer simulation of all the navigational aids plus 12 selected airports located in the Washington/Norfolk area was developed. All programmed locations in the list were referenced to a Cartesian coordinate system with the origin located at a specified airport's reference point. All navigational aids with their associated frequencies, call letters, locations, and orientations plus runways and true headings are included in the data base. The simulation included a TV displayed out-the-window visual scene of country and suburban terrain and a scaled model runway complex. Any of the programmed runways, with all its associated navaids, can be referenced to a runway on the airport in this visual scene. This allows a simulation of a full mission scenario including breakout and landing.
Blended Learning Implementation in “Guru Pembelajar” Program
NASA Astrophysics Data System (ADS)
Mahdan, D.; Kamaludin, M.; Wendi, H. F.; Simanjuntak, M. V.
2018-02-01
The rapid development of information and communication technology (ICT), especially the internet, computers, and communication devices, requires innovation in learning; one such innovation is blended learning. Blended learning combines face-to-face instruction with online learning. It is used in the teacher-development program organized by the Indonesian Department of Education and Culture to improve the competence of teachers, called "Guru Pembelajar" (GP). The blended learning model suits teacher training well because online learning can be done anywhere and at any time, overcoming constraints of distance and schedule. However, several problems arose in implementation: many teachers, especially older ones, did not take part because they cannot use computers and the internet; the applications were difficult for participants to understand; internet connections were unstable in the areas where some teachers live; and facilities and infrastructure were inadequate.
SNIF-ACT: A Cognitive Model of User Navigation on the World Wide Web
2007-01-03
…opinions of others on a particular topic or problems. Obviously, our model was not able to answer these questions directly, and more research is… Rational analysis is a variant form of an approach called methodological adaptationism that has also shaped research programs in behavioral…
Database Design and Management in Engineering Optimization.
1988-02-01
…scientific and engineering applications… In the mid-1950s, along with modern digital computers… The paper highlights the difference… application software can call standard subroutines from the DBMS library to define… DDL must have… type data usually encountered in engineering applications. GFDGT: Computes the number of digits needed to display… A user…
On Fixed Points of Strictly Causal Functions
2013-04-08
…were defined to be the functions that are strictly contracting with respect to the Cantor metric (also called the Baire distance) on signals over non…
smwrGraphs—An R package for graphing hydrologic data, version 1.1.2
Lorenz, David L.; Diekoff, Aliesha L.
2017-01-31
This report describes an R package called smwrGraphs, which consists of a collection of graphing functions for hydrologic data within R, a programming language and software environment for statistical computing. The functions in the package have been developed by the U.S. Geological Survey to create high-quality graphs for publication or presentation of hydrologic data that meet U.S. Geological Survey graphics guidelines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, P.R.; Gibson, A.C.; Dardeau, E.A.
This technical note has a twofold purpose: to describe a technique for comparing the predicted quality of surface runoff from confined dredged material disposal areas with applicable water quality standards and to document a computer program called RUNQUAL, written for that purpose as a part of the Automated Dredging and Disposal Alternatives Management System (ADDAMS).
NASA Technical Reports Server (NTRS)
Betts, W. S., Jr.
1972-01-01
A computer program called HOPI was developed to predict reorientation flow dynamics, wherein liquids move from one end of a closed, partially filled, rigid container to the other end under the influence of container acceleration. The program uses the simplified marker and cell numerical technique and, using explicit finite-differencing, solves the Navier-Stokes equations for an incompressible viscous fluid. The effects of turbulence are also simulated in the program. HOPI can consider curved as well as straight-walled boundaries. Both free-surface and confined flows can be calculated. The program was used to simulate five liquid reorientation cases. Three of these cases simulated actual NASA LeRC drop tower test conditions, while two cases simulated full-scale Centaur tank conditions. It was concluded that while HOPI can be used to analytically determine the fluid motion in a typical settling problem, there is a current need to optimize it, both by reducing computer usage time and by reducing the core storage required for a given problem size.
Manual for Getdata Version 3.1: a FORTRAN Utility Program for Time History Data
NASA Technical Reports Server (NTRS)
Maine, Richard E.
1987-01-01
This report documents version 3.1 of the GetData computer program. GetData is a utility program for manipulating files of time history data, i.e., data giving the values of parameters as functions of time. The most fundamental capability of GetData is extracting selected signals and time segments from an input file and writing the selected data to an output file. Other capabilities include converting file formats, merging data from several input files, time skewing, interpolating to common output times, and generating calculated output signals as functions of the input signals. This report also documents the interface standards for the subroutines used by GetData to read and write the time history files. All interface to the data files is through these subroutines, keeping the main body of GetData independent of the precise details of the file formats. Different file formats can be supported by changes restricted to these subroutines. Other computer programs conforming to the interface standards can call the same subroutines to read and write files in compatible formats.
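GetData itself is a Fortran utility; the Python sketch below only illustrates its core operations as described above: pulling selected signals out of a time-history record, clipping to a time segment, and interpolating onto common output times. The record contents and signal names are hypothetical.

import numpy as np

def extract(signals, names, t_start, t_stop, dt_out):
    """Select signals, clip to [t_start, t_stop), and interpolate each onto a
    common output time base."""
    t_out = np.arange(t_start, t_stop, dt_out)
    return t_out, {n: np.interp(t_out, signals["time"], signals[n]) for n in names}

# Hypothetical flight-test record with unevenly sampled signals.
rec = {
    "time":  np.array([0.00, 0.11, 0.19, 0.32, 0.40, 0.55]),
    "alpha": np.array([2.0, 2.2, 2.5, 2.4, 2.1, 1.9]),
    "q":     np.array([0.0, 0.5, 0.9, 0.7, 0.3, 0.1]),
}
t, out = extract(rec, ["alpha", "q"], t_start=0.1, t_stop=0.5, dt_out=0.1)
print(t, out["alpha"])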
NASA Technical Reports Server (NTRS)
Fortenbaugh, R. L.
1980-01-01
Instructions for using Vertical Attitude Takeoff and Landing Aircraft Simulation (VATLAS), the digital simulation program for application to vertical attitude takeoff and landing (VATOL) aircraft developed for installation on the NASA Ames CDC 7600 computer system are described. The framework for VATLAS is the Off-Line Simulation (OLSIM) routine. The OLSIM routine provides a flexible framework and standardized modules which facilitate the development of off-line aircraft simulations. OLSIM runs under the control of VTOLTH, the main program, which calls the proper modules for executing user specified options. These options include trim, stability derivative calculation, time history generation, and various input-output options.
Automated procedures for sizing aerospace vehicle structures /SAVES/
NASA Technical Reports Server (NTRS)
Giles, G. L.; Blackburn, C. L.; Dixon, S. C.
1972-01-01
Results from a continuing effort to develop automated methods for structural design are described. A system of computer programs presently under development called SAVES is intended to automate the preliminary structural design of a complete aerospace vehicle. Each step in the automated design process of the SAVES system of programs is discussed, with emphasis placed on use of automated routines for generation of finite-element models. The versatility of these routines is demonstrated by structural models generated for a space shuttle orbiter, an advanced technology transport, and a hydrogen-fueled Mach 3 transport. Illustrative numerical results are presented for the Mach 3 transport wing.
An Integer Programming Model for the Management of a Forest in the North of Portugal
NASA Astrophysics Data System (ADS)
Cerveira, Adelaide; Fonseca, Teresa; Mota, Artur; Martins, Isabel
2011-09-01
This study aims to develop an approach for the management of a forest of maritime pine located in the north region of Portugal. The forest is classified into five public lands, the so-called baldios, extending over 4432 ha. These baldios are co-managed by the Official Forest Services and the local communities mainly for timber production purposes. The forest planning involves non-spatial and spatial constraints. Spatial constraints dictate a maximum clearcut area and an exclusion time. An integer programming model is presented and the computational results are discussed.
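A toy illustration of the type of spatial constraint described above (a cap on the clearcut area formed by adjacent harvested stands), solved here by brute-force enumeration rather than by the authors' integer programming formulation. The stand areas, volumes, adjacency list, and area cap are all invented.

from itertools import product

area   = [30.0, 45.0, 25.0, 50.0, 35.0]        # ha per stand (invented)
volume = [4.0, 6.5, 3.0, 7.0, 5.2]             # timber value per stand (invented)
adjacent = {(0, 1), (1, 2), (2, 3), (3, 4)}    # neighbouring stand pairs
MAX_CLEARCUT = 80.0                             # ha allowed in one contiguous cut

def contiguous_cut_ok(x):
    # Every pair of adjacent harvested stands must stay under the area cap
    # (a simplified stand-in for the model's clearcut-size constraints).
    return all(area[i] + area[j] <= MAX_CLEARCUT
               for (i, j) in adjacent if x[i] and x[j])

best = max((x for x in product((0, 1), repeat=5) if contiguous_cut_ok(x)),
           key=lambda x: sum(v for v, xi in zip(volume, x) if xi))
print(best, sum(v for v, xi in zip(volume, best) if xi))   # -> (1, 1, 1, 1, 0), 20.5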
A 3D visualization system for molecular structures
NASA Technical Reports Server (NTRS)
Green, Terry J.
1989-01-01
The properties of molecules derive in part from their structures. Because of the importance of understanding molecular structures various methodologies, ranging from first principles to empirical technique, were developed for computing the structure of molecules. For large molecules such as polymer model compounds, the structural information is difficult to comprehend by examining tabulated data. Therefore, a molecular graphics display system, called MOLDS, was developed to help interpret the data. MOLDS is a menu-driven program developed to run on the LADC SNS computer systems. This program can read a data file generated by the modeling programs or data can be entered using the keyboard. MOLDS has the following capabilities: draws the 3-D representation of a molecule using stick, ball and ball, or space filled model from Cartesian coordinates, draws different perspective views of the molecule; rotates the molecule on the X, Y, Z axis or about some arbitrary line in space, zooms in on a small area of the molecule in order to obtain a better view of a specific region; and makes hard copy representation of molecules on a graphic printer. In addition, MOLDS can be easily updated and readily adapted to run on most computer systems.
NASA Technical Reports Server (NTRS)
Katz, Daniel
2004-01-01
PVM Wrapper is a software library that makes it possible for code that utilizes the Parallel Virtual Machine (PVM) software library to run using the Message Passing Interface (MPI) software library, without needing to rewrite the entire code. PVM and MPI are the two most common software libraries used for applications that involve passing of messages among parallel computers. Since about 1996, MPI has been the de facto standard. Codes written when PVM was popular often feature patterns of {"initsend," "pack," "send"} and {"receive," "unpack"} calls. In many cases, these calls are not contiguous and one set of calls may even exist over multiple subroutines. These characteristics make it difficult to obtain equivalent functionality via a single MPI "send" call. Because PVM Wrapper is written to run with MPI-1.2, some PVM functions are not permitted and must be replaced - a task that requires some programming expertise. The "pvm_spawn" and "pvm_parent" function calls are not replaced, but a programmer can use "mpirun" and knowledge of the ranks of parent and child tasks with supplied macroinstructions to enable execution of codes that use "pvm_spawn" and "pvm_parent."
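A sketch of the pattern mapping described above, using mpi4py purely for illustration (the actual PVM Wrapper operates at the C/Fortran library level): the PVM-style initsend/pack/.../send sequence has to be gathered into the single buffer that one MPI send expects. The message contents are hypothetical.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # PVM style would be: pvm_initsend(); pvm_pkint(n); pvm_pkdouble(xs); pvm_send(dest, tag)
    # Here the "packing" is just assembling one message object before a single send.
    message = {"n": 3, "xs": [1.0, 2.0, 3.0]}
    comm.send(message, dest=1, tag=42)
elif rank == 1:
    # PVM style: pvm_recv(); pvm_upkint(); pvm_upkdouble() -- collapsed to one receive here.
    message = comm.recv(source=0, tag=42)
    print("rank 1 got", message)

# Run with:  mpiexec -n 2 python pvm_to_mpi_sketch.py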
Petascale Simulation Initiative Tech Base: FY2007 Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, J; Chen, R; Jefferson, D
The Petascale Simulation Initiative began as an LDRD project in the middle of Fiscal Year 2004. The goal of the project was to develop techniques to allow large-scale scientific simulation applications to better exploit the massive parallelism that will come with computers running at petaflops per second. One of the major products of this work was the design and prototype implementation of a programming model and a runtime system that lets applications extend data-parallel applications to use task parallelism. By adopting task parallelism, applications can use processing resources more flexibly, exploit multiple forms of parallelism, and support more sophisticated multiscale and multiphysics models. Our programming model was originally called the Symponents Architecture but is now known as Cooperative Parallelism, and the runtime software that supports it is called Coop. (However, we sometimes refer to the programming model as Coop for brevity.) We have documented the programming model and runtime system in a submitted conference paper [1]. This report focuses on the specific accomplishments of the Cooperative Parallelism project (as we now call it) under Tech Base funding in FY2007. Development and implementation of the model under LDRD funding alone proceeded to the point of demonstrating a large-scale materials modeling application using Coop on more than 1300 processors by the end of FY2006. Beginning in FY2007, the project received funding from both LDRD and the Computation Directorate Tech Base program. Later in the year, after the three-year term of the LDRD funding ended, the ASC program supported the project with additional funds. The goal of the Tech Base effort was to bring Coop from a prototype to a production-ready system that a variety of LLNL users could work with. Specifically, the major tasks that we planned for the project were: (1) Port SARS [former name of the Coop runtime system] to another LLNL platform, probably Thunder or Peloton (depending on when Peloton becomes available); (2) Improve SARS's robustness and ease-of-use, and develop user documentation; and (3) Work with LLNL code teams to help them determine how Symponents could benefit their applications. The original funding request was $296,000 for the year, and we eventually received $252,000. The remainder of this report describes our efforts and accomplishments for each of the goals listed above.
Using CLIPS in a distributed system: The Network Control Center (NCC) expert system
NASA Technical Reports Server (NTRS)
Wannemacher, Tom
1990-01-01
This paper describes an intelligent troubleshooting system for the Help Desk domain. It was developed on an IBM-compatible 80286 PC using Microsoft C and CLIPS, and an AT&T 3B2 minicomputer using the UNIFY database and a combination of shell script, C programs, and SQL queries. The two computers are linked by a LAN. The functions of this system are to help non-technical NCC personnel handle trouble calls, to keep a log of problem calls with complete, concise information, and to keep a historical database of problems. The database helps identify hardware and software problem areas and provides a source of new rules for the troubleshooting knowledge base.
Computer simulator for a mobile telephone system
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1981-01-01
A software simulator was developed to assist NASA in the design of the land mobile satellite service. Structured programming techniques were used by developing the algorithm using an ALGOL-like pseudo language and then encoding the algorithm into FORTRAN 4. The basic input data to the system is a sine wave signal, although future plans call for actual sampled voice as the input signal. The simulator is capable of studying all the possible combinations of types and modes of calls through the use of five communication scenarios: single hop system; double hop, single gateway system; double hop, double gateway system; mobile to wireline system; and wireline to mobile system. The transmitter, fading channel, and interference source simulation are also discussed.
Ligand-protein docking using a quantum stochastic tunneling optimization method.
Mancera, Ricardo L; Källblad, Per; Todorov, Nikolay P
2004-04-30
A novel hybrid optimization method called quantum stochastic tunneling has been recently introduced. Here, we report its implementation within a new docking program called EasyDock and a validation with the CCDC/Astex data set of ligand-protein complexes using the PLP score to represent the ligand-protein potential energy surface and ScreenScore to score the ligand-protein binding energies. When taking the top energy-ranked ligand binding mode pose, we were able to predict the correct crystallographic ligand binding mode in up to 75% of the cases. By using this novel optimization method run times for typical docking simulations are significantly shortened. Copyright 2004 Wiley Periodicals, Inc. J Comput Chem 25: 858-864, 2004
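A sketch of the general stochastic tunneling (STUN) idea that this family of methods builds on, not EasyDock's specific quantum variant: the energy landscape is remapped as f_stun(x) = 1 - exp(-gamma * (f(x) - f_best)), which compresses barriers above the best energy found so far, and the remapped surface is sampled with Metropolis moves. The test function and all parameters are invented.

import math, random

def f(x):
    return math.sin(3.0 * x) + 0.1 * (x - 2.0) ** 2      # several local minima

random.seed(3)
gamma, step, temp = 1.0, 0.5, 0.1
x, x_best, f_best = 0.0, 0.0, f(0.0)
for _ in range(20000):
    x_new = x + random.uniform(-step, step)
    f_new = f(x_new)
    if f_new < f_best:
        f_best, x_best = f_new, x_new                     # track the best energy seen
    stun_old = 1.0 - math.exp(-gamma * (f(x) - f_best))   # remapped current energy
    stun_new = 1.0 - math.exp(-gamma * (f_new - f_best))  # remapped trial energy
    if stun_new <= stun_old or random.random() < math.exp(-(stun_new - stun_old) / temp):
        x = x_new

print(round(x_best, 3), round(f_best, 3))   # near the global minimum around x ~ 1.57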
Accuracy of CNV Detection from GWAS Data.
Zhang, Dandan; Qian, Yudong; Akula, Nirmala; Alliey-Rodriguez, Ney; Tang, Jinsong; Gershon, Elliot S; Liu, Chunyu
2011-01-13
Several computer programs are available for detecting copy number variants (CNVs) using genome-wide SNP arrays. We evaluated the performance of four CNV detection software suites--Birdsuite, Partek, HelixTree, and PennCNV-Affy--in the identification of both rare and common CNVs. Each program's performance was assessed in two ways. The first was its recovery rate, i.e., its ability to call 893 CNVs previously identified in eight HapMap samples by paired-end sequencing of whole-genome fosmid clones, and 51,440 CNVs identified by array Comparative Genome Hybridization (aCGH) followed by validation procedures, in 90 HapMap CEU samples. The second evaluation was program performance calling rare and common CNVs in the Bipolar Genome Study (BiGS) data set (1001 bipolar cases and 1033 controls, all of European ancestry) as measured by the Affymetrix SNP 6.0 array. Accuracy in calling rare CNVs was assessed by positive predictive value, based on the proportion of rare CNVs validated by quantitative real-time PCR (qPCR), while accuracy in calling common CNVs was assessed by false positive/false negative rates based on qPCR validation results from a subset of common CNVs. Birdsuite recovered the highest percentages of known HapMap CNVs containing >20 markers in two reference CNV datasets. The recovery rate increased with decreased CNV frequency. In the tested rare CNV data, Birdsuite and Partek had higher positive predictive values than the other software suites. In a test of three common CNVs in the BiGS dataset, Birdsuite's call was 98.8% consistent with qPCR quantification in one CNV region, but the other two regions showed an unacceptable degree of accuracy. We found relatively poor consistency between the two "gold standards," the sequence data of Kidd et al., and aCGH data of Conrad et al. Algorithms for calling CNVs especially common ones need substantial improvement, and a "gold standard" for detection of CNVs remains to be established.
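The evaluation metrics used above are simple proportions; the counts below are invented to show the arithmetic (only the 893-CNV reference set size comes from the abstract), not results from the study.

reference_cnvs  = 893        # CNVs in the reference call set (from the abstract)
recovered       = 620        # of those also called by a given program (invented)
calls_validated = 150        # rare calls taken to qPCR validation (invented)
calls_confirmed = 117        # of those confirmed by qPCR (invented)

recovery_rate = recovered / reference_cnvs    # sensitivity against the reference set
ppv = calls_confirmed / calls_validated       # positive predictive value
print(f"recovery rate = {recovery_rate:.1%}, PPV = {ppv:.1%}")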
NASA Technical Reports Server (NTRS)
Siclari, Michael J.
1988-01-01
A computer code called NCOREL (for Nonconical Relaxation) has been developed to solve for supersonic full potential flows over complex geometries. The method first solves for the conical flow at the apex and then marches downstream in a spherical coordinate system. Implicit relaxation techniques are used to numerically solve the full potential equation at each subsequent crossflow plane. Many improvements have been made to the original code, including more reliable numerics for computing wing-body flows with multiple embedded shocks, inlet flow-through simulation, a wake model, and entropy corrections. Line relaxation or approximate factorization schemes are optionally available. Improved internal grid generation using analytic conformal mappings, supported by a simple geometric Harris wave-drag input (originally developed for panel methods) and an internal geometry package, is among the new features.
Using an architectural approach to integrate heterogeneous, distributed software components
NASA Technical Reports Server (NTRS)
Callahan, John R.; Purtilo, James M.
1995-01-01
Many computer programs cannot be easily integrated because their components are distributed and heterogeneous, i.e., they are implemented in diverse programming languages, use different data representation formats, or their runtime environments are incompatible. In many cases, programs are integrated by modifying their components or interposing mechanisms that handle communication and conversion tasks. For example, remote procedure call (RPC) helps integrate heterogeneous, distributed programs. When configuring such programs, however, mechanisms like RPC must be used explicitly by software developers in order to integrate collections of diverse components. Each collection may require a unique integration solution. This paper describes improvements to the concepts of software packaging and some of our experiences in constructing complex software systems from a wide variety of components in different execution environments. Software packaging is a process that automatically determines how to integrate a diverse collection of computer programs based on the types of components involved and the capabilities of available translators and adapters in an environment. Software packaging provides a context that relates such mechanisms to software integration processes and reduces the cost of configuring applications whose components are distributed or implemented in different programming languages. Our software packaging tool subsumes traditional integration tools like UNIX make by providing a rule-based approach to software integration that is independent of execution environments.
Gaffney, Hannah; Mansell, Warren; Edwards, Rachel; Wright, Jason
2014-11-01
Computerized self-help that has an interactive, conversational format holds several advantages, such as flexibility across presenting problems and ease of use. We designed a new program called MYLO that utilizes the principles of METHOD of Levels (MOL) therapy--based upon Perceptual Control Theory (PCT). We tested the efficacy of MYLO, tested whether the psychological change mechanisms described by PCT mediated its efficacy, and evaluated effects of client expectancy. Forty-eight student participants were randomly assigned to MYLO or a comparison program ELIZA. Participants discussed a problem they were currently experiencing with their assigned program and completed measures of distress, resolution and expectancy preintervention, postintervention and at 2-week follow-up. MYLO and ELIZA were associated with reductions in distress, depression, anxiety and stress. MYLO was considered more helpful and led to greater problem resolution. The psychological change processes predicted higher ratings of MYLO's helpfulness and reductions in distress. Positive expectancies towards computer-based problem solving correlated with MYLO's perceived helpfulness and greater problem resolution, and this was partly mediated by the psychological change processes identified. The findings provide provisional support for the acceptability of the MYLO program in a non-clinical sample although its efficacy as an innovative computer-based aid to problem solving remains unclear. Nevertheless, the findings provide tentative early support for the mechanisms of psychological change identified within PCT and highlight the importance of client expectations on predicting engagement in computer-based self-help.
NASA Technical Reports Server (NTRS)
Baruah, P. K.; Bussoletti, J. E.; Chiang, D. T.; Massena, W. A.; Nelson, F. D.; Furdon, D. J.; Tsurusaki, K.
1981-01-01
The Maintenance Document is a guide to the PAN AIR software system, a system which computes the subsonic or supersonic linear potential flow about a body of nearly arbitrary shape, using a higher order panel method. The document describes the over-all system and each program module of the system. Sufficient detail is given for program maintenance, updating and modification. It is assumed that the reader is familiar with programming and CDC (Control Data Corporation) computer systems. The PAN AIR system was written in FORTRAN 4 language except for a few COMPASS language subroutines which exist in the PAN AIR library. Structured programming techniques were used to provide code documentation and maintainability. The operating systems accommodated are NOS 1.2, NOS/BE and SCOPE 2.1.3 on the CDC 6600, 7600 and Cyber 175 computing systems. The system is comprised of a data management system, a program library, an execution control module and nine separate FORTRAN technical modules. Each module calculates part of the posed PAN AIR problem. The data base manager is used to communicate between modules and within modules. The technical modules must be run in a prescribed fashion for each PAN AIR problem. In order to ease the problem of supplying the many JCL cards required to execute the modules, a separate module called MEC (Module Execution Control) was created to automatically supply most of the JCL cards. In addition to the MEC generated JCL, there is an additional set of user supplied JCL cards to initiate the JCL sequence stored on the system.
Introduction of the ASP3D Computer Program for Unsteady Aerodynamic and Aeroelastic Analyses
NASA Technical Reports Server (NTRS)
Batina, John T.
2005-01-01
A new computer program has been developed called ASP3D (Advanced Small Perturbation 3D), which solves the small perturbation potential flow equation in an advanced form including mass-consistent surface and trailing wake boundary conditions, and entropy, vorticity, and viscous effects. The purpose of the program is for unsteady aerodynamic and aeroelastic analyses, especially in the nonlinear transonic flight regime. The program exploits the simplicity of stationary Cartesian meshes with the movement or deformation of the configuration under consideration incorporated into the solution algorithm through a planar surface boundary condition. The new ASP3D code is the result of a decade of developmental work on improvements to the small perturbation formulation, performed while the author was employed as a Senior Research Scientist in the Configuration Aerodynamics Branch at the NASA Langley Research Center. The ASP3D code is a significant improvement to the state-of-the-art for transonic aeroelastic analyses over the CAP-TSD code (Computational Aeroelasticity Program Transonic Small Disturbance), which was developed principally by the author in the mid-1980s. The author is in a unique position as the developer of both computer programs to compare, contrast, and ultimately make conclusions regarding the underlying formulations and utility of each code. The paper describes the salient features of the ASP3D code including the rationale for improvements in comparison with CAP-TSD. Numerous results are presented to demonstrate the ASP3D capability. The general conclusion is that the new ASP3D capability is superior to the older CAP-TSD code because of the myriad improvements developed and incorporated.
Some NACA Muroc personnel with snowman
NASA Technical Reports Server (NTRS)
1949-01-01
The late 1940s saw increased flight activity, and more women computers were needed at the NACA Muroc Flight Test Unit than the ones who had originally arrived in 1946. A call went out to the NACA Langley, Lewis, and Ames laboratories for more women computers. Pictured in this photograph with the Snowman are some of the women computers who responded to the call for help in 1948 along with Roxanah, Emily, Dorothy, who were already here. Standing left to right: Mary (Tut) Hedgepeth, from Langley; Lilly Ann Bajus, Lewis; Roxanah Yancey, Emily Stephens, Jane Collons (Procurement), Leona Corbett (Personnel), Angel Dunn, Langley. Kneeling left to right: Dorothy (Dottie) Crawford Roth, Lewis; Dorothy Clift Hughes, and Gertrude (Trudy) Wilken Valentine, Lewis. In National Advisory Committee for Aeronautics (NACA) terminology of 1946, computers were employees who performed laborious and time-consuming mathematical calculations and data reduction from long strips of records generated by onboard aircraft instrumentation. Virtually without exception, computers were female; at least part of the rationale seems to have been the notion that the work was long and tedious, and men were not thought to have the patience to do it. Though equipment changed over the years and most computers eventually found themselves programming and operating electronic computers, as well as doing other data processing tasks, being a computer initially meant long hours with a slide rule, hunched over illuminated light boxes measuring line traces from grainy and obscure strips of oscillograph film. Computers suffered terrible eyestrain, and those who didn't begin by wearing glasses did so after a few years. But they were initially essential employees at the Muroc Flight Test Unit and NACA High-Speed Flight Research Station, taking the oscillograph flight records and 'reducing' the data on them to make them useful to research engineers, who analyzed the data.
NASA Technical Reports Server (NTRS)
Murthy, T. Sreekanta; Kvaternik, Raymond G.
1991-01-01
A NASA/industry rotorcraft structural dynamics program known as Design Analysis Methods for VIBrationS (DAMVIBS) was initiated at Langley Research Center in 1984 with the objective of establishing the technology base needed by the industry for developing an advanced finite-element-based vibrations design analysis capability for airframe structures. As a part of the in-house activities contributing to that program, a study was undertaken to investigate the use of formal, nonlinear programming-based, numerical optimization techniques for airframe vibrations design work. Considerable progress has been made in connection with that study since its inception in 1985. This paper presents a unified summary of the experiences and results of that study. The formulation and solution of airframe optimization problems are discussed. Particular attention is given to describing the implementation of a new computational procedure based on MSC/NASTRAN and CONstrained function MINimization (CONMIN) in a computer program system called DYNOPT for the optimization of airframes subject to strength, frequency, dynamic response, and fatigue constraints. The results from the application of the DYNOPT program to the Bell AH-1G helicopter are presented and discussed.
Cost-Benefit Analysis of a Support Program for Nursing Staff.
Moran, Dane; Wu, Albert W; Connors, Cheryl; Chappidi, Meera R; Sreedhara, Sushama K; Selter, Jessica H; Padula, William V
2017-04-27
A peer-support program called Resilience In Stressful Events (RISE) was designed to help hospital staff cope with stressful patient-related events. The aim of this study was to evaluate the impact of the RISE program by conducting an economic evaluation of its cost benefit. A Markov model with a 1-year time horizon was developed to compare the cost benefit with and without the RISE program from a provider (hospital) perspective. Nursing staff who used the RISE program between 2015 and 2016 at a 1000-bed, private hospital in the United States were included in the analysis. The cost of running the RISE program, nurse turnover, and nurse time off were modeled. Data on costs were obtained from literature review and hospital data. Probabilities of quitting or taking time off with or without the RISE program were estimated using survey data. Net monetary benefit (NMB) and budget impact of having the RISE program were computed to determine cost benefit to the hospital. Expected model results of the RISE program found a net monetary benefit savings of US $22,576.05 per nurse who initiated a RISE call. These savings were determined to be 99.9% consistent on the basis of a probabilistic sensitivity analysis. The budget impact analysis revealed that a hospital could save US $1.81 million each year because of the RISE program. The RISE program resulted in substantial cost savings to the hospital. Hospitals should be encouraged by these findings to implement institution-wide support programs for medical staff, based on a high demand for this type of service and the potential for cost savings.
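A back-of-envelope check of the figures quoted above: the annual budget impact should be roughly the per-call net monetary benefit times the number of nurses initiating RISE calls per year. The call volume below is inferred for illustration only, not reported in the abstract.

nmb_per_call = 22_576.05          # US$ savings per nurse who initiated a RISE call (from the abstract)
calls_per_year = 80               # assumed annual number of initiating nurses (not from the abstract)
annual_savings = nmb_per_call * calls_per_year
print(f"${annual_savings:,.0f} per year")   # ~ $1.81 million, consistent with the abstract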
Potential Paradigms and Possible Problems for CALL.
ERIC Educational Resources Information Center
Phillips, Martin
1987-01-01
Describes three models of CALL (computer assisted language learning) activity--games, the expert system, and the prosthetic approaches. A case is made for CALL development within a more instrumental view of the role of computers. (Author/CB)
Analytical and Computational Properties of Distributed Approaches to MDO
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2000-01-01
Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches, Optimization by Linear Decomposition and Collaborative Optimization, are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods. Key Words: Autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis
ERIC Educational Resources Information Center
Ward, Monica
2017-01-01
The term Intelligent Computer Assisted Language Learning (ICALL) covers many different aspects of CALL that add something extra to a CALL resource. This could be with the use of computational linguistics or Artificial Intelligence (AI). ICALL tends to be not very well understood within the CALL community. There may also be the slight fear factor…
Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.
NASA Astrophysics Data System (ADS)
Elliott, William Dewey
1995-01-01
A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime O(N^2), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime less than O(N^2). One algorithm, which scales O(N) for a uniform distribution of particles, is called the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst case error bounds. The worst case error bounds for MDMA are derived in this work also. These bounds apply to other multipole algorithms as well. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions are converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region which allows efficient pre-computation of redundant information and prestorage of much of the cell-to-cell interaction. Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over several simulation timesteps. One MD application described here highlights the utility of including long range contributions to the Lennard-Jones potential in constant pressure simulations. Another application shows the time dependence of long range forces in a multiple time step MD simulation.
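For reference, the direct O(N^2) pairwise evaluation that fast multipole methods are designed to avoid can be sketched as follows; this is a toy Coulomb-style double loop, not the MDMA algorithm itself.

```python
import numpy as np

def direct_pairwise_energy(positions, charges):
    """Naive O(N^2) sum of 1/r pair interactions: the baseline cost that
    multipole methods such as FMA and MDMA reduce toward O(N)."""
    n = len(positions)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            energy += charges[i] * charges[j] / r
    return energy

rng = np.random.default_rng(0)
pos = rng.random((200, 3))             # 200 random particle positions
q = rng.choice([-1.0, 1.0], size=200)  # unit charges of random sign
print(direct_pairwise_energy(pos, q))  # ~20,000 pair evaluations
```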
What Does CALL Have to Offer Computer Science and What Does Computer Science Have to Offer CALL?
ERIC Educational Resources Information Center
Cushion, Steve
2006-01-01
We will argue that CALL can usefully be viewed as a subset of computer software engineering and can profit from adopting some of the recent progress in software development theory. The unified modelling language has become the industry standard modelling technique and the accompanying unified process is rapidly gaining acceptance. The manner in…
COMPUTATIONAL TOXICOLOGY-WHERE IS THE DATA? ...
This talk will briefly describe the state of the data world for computational toxicology and one approach to improve the situation, called ACToR (Aggregated Computational Toxicology Resource).
NASA Technical Reports Server (NTRS)
Tamkin, Glenn S. (Inventor); Duffy, Daniel Q. (Inventor); Schnase, John L. (Inventor)
2016-01-01
A system, method and computer-readable storage devices for providing a climate data analytic services application programming interface distribution package. The example system can provide various components. The system provides a climate data analytic services application programming interface library that enables software applications running on a client device to invoke the capabilities of a climate data analytic service. The system provides a command-line interface that provides a means of interacting with a climate data analytic service by issuing commands directly to the system's server interface. The system provides sample programs that call on the capabilities of the application programming interface library and can be used as templates for the construction of new client applications. The system can also provide test utilities, build utilities, service integration utilities, and documentation.
NASA Astrophysics Data System (ADS)
Foster, K.
1994-09-01
This document is a description of a computer program called Format( )MEDIC( )Input. The purpose of this program is to allow the user to quickly reformat wind velocity data in the Model Evaluation Database (MEDb) into a reasonable 'first cut' set of MEDIC input files (MEDIC.nml, StnLoc.Met, and Observ.Met). The user is cautioned that these resulting input files must be reviewed for correctness and completeness. This program will not format MEDb data into a Problem Station Library or Problem Metdata File. A description of how the program reformats the data is provided, along with a description of the required and optional user input and a description of the resulting output files. A description of the MEDb is not provided here but can be found in the RAS Division Model Evaluation Database Description document.
Automating Disk Forensic Processing with SleuthKit, XML and Python
2009-05-01
Simson L. Garfinkel. Abstract: We have developed a program called fiwalk which... files themselves. We show how it is relatively simple to create automated disk forensic applications using a Python module we have written that reads... software that the portable device may contain. Keywords: Computer Forensics; XML; Sleuth Kit; Python.
Wessex Helicopter/Sonar Dynamics Study. ARL Program Description and Operation.
1979-02-01
Author(s): Williams, Neil V.; Guy, Christopher R.; Williams, Maxwell J.; Gilbert, Neil. Document Date: February 1979. ... form with the aid of an analog computer type of block diagram, comprising a number of linked modules (called blocks), each one representing a particular... three types of statement, viz. configuration, parameter and function statements. The configuration statements describe the blocks used and specify the
The Fixed-Point Theory of Strictly Causal Functions
2013-06-09
functions were defined to be the functions that are strictly contracting with respect to the Cantor metric (also called the Baire distance) on signals...
Stanford L. Arner
1974-01-01
One procedure in analyzing the responses to factors in an experiment is to construct profiles of the means. Profiles are plots of the sample means of each level of one factor at each level of one or more of the other factors in the experiment. The construction of these profiles and tests of hypotheses about the mean responses is called profile analysis. This paper...
2010-02-02
...Computing Act of 1991 (P.L. 102-194) and the Next Generation Internet Research Act of 1998 (P.L. 105-305). The laws call for a President's Information... planning and coordination. The second, the Next Generation Internet Research Act of 1998, P.L. 105-305, amended the original law to expand the mission of
A simple and inexpensive method of preoperative computer imaging for rhinoplasty.
Ewart, Christopher J; Leonard, Christopher J; Harper, J Garrett; Yu, Jack
2006-01-01
GOALS/PURPOSE: Despite concerns of legal liability, preoperative computer imaging has become a popular tool for the plastic surgeon. The ability to project possible surgical outcomes can facilitate communication between the patient and surgeon. It can be an effective tool in the education and training of residents. Unfortunately, these imaging programs are expensive and have a steep learning curve. The purpose of this paper is to present a relatively inexpensive method of preoperative computer imaging with a reasonable learning curve. The prices of currently available imaging programs were obtained through an online search, and inquiries were made to the software distributors. These prices were compared to that of Adobe Photoshop, which has special filters called "liquify" and "photocopy." It was used in the preoperative computer planning of 2 patients who presented for rhinoplasty at our institution. Projected images were created based on harmonious discussions between the patient and physician. Importantly, these images were presented to the patient as potential results, with no guarantees as to actual outcomes. Adobe Photoshop can be purchased for $900-$5800 less than the leading computer imaging software for cosmetic rhinoplasty. Effective projected images were created using the "liquify" and "photocopy" filters in Photoshop. Both patients had surgical planning and operations based on these images. They were satisfied with the results. Preoperative computer imaging can be a very effective tool for the plastic surgeon by providing improved physician-patient communication, increased patient confidence, and enhanced surgical planning. Adobe Photoshop is a relatively inexpensive program that can provide these benefits using only 1 or 2 features.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollingsworth, Jeff
2014-07-31
The purpose of this project was to develop tools and techniques to improve the ability of computational scientists to investigate and correct problems (bugs) in their programs. Specifically, the University of Maryland component of this project focused on the problems associated with the finite number of bits available in a computer to represent numeric values. In large scale scientific computation, numbers are frequently added to and multiplied with each other billions of times. Thus even small errors due to the representation of numbers can accumulate into big errors. However, using too many bits to represent a number results in additional computation, memory, and energy costs. Thus it is critical to find the right size for numbers. This project focused on several aspects of this general problem. First, we developed a tool to look for cancellations, the catastrophic loss of precision in numbers due to the addition of two numbers whose actual values are close to each other, but whose representation in a computer is identical or nearly so. Second, we developed a suite of tools to allow programmers to identify exactly how much precision is required for each operation in their program. This suite allows programmers both to verify that enough precision is available and, more importantly, to find cases where extra precision could be eliminated to allow the program to use less memory, computer time, or energy. These tools use advanced binary modification techniques to allow the analysis of actual optimized code. The system, called Craft, has been applied to a number of benchmarks and real applications.
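A minimal demonstration of the cancellation problem such tools look for (not the Craft tool itself): when two nearly equal floating-point values are combined, most of their significant digits are lost unless the expression is rewritten.

```python
# Catastrophic cancellation in double precision: evaluating
# f(x) = (1 - cos(x)) / x**2 naively for tiny x loses nearly all significant
# digits, while an algebraically equivalent half-angle form stays accurate.
import math

x = 1e-8
naive = (1.0 - math.cos(x)) / x**2                   # cancellation in 1 - cos(x)
stable = (math.sin(x / 2.0) / (x / 2.0)) ** 2 / 2.0  # rewritten to avoid it

print(f"naive  = {naive:.15f}")   # typically 0.0, far from the true value
print(f"stable = {stable:.15f}")  # close to the correct limit of 0.5
```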
TBGG- INTERACTIVE ALGEBRAIC GRID GENERATION
NASA Technical Reports Server (NTRS)
Smith, R. E.
1994-01-01
TBGG, Two-Boundary Grid Generation, applies an interactive algebraic grid generation technique in two dimensions. The program incorporates mathematical equations that relate the computational domain to the physical domain. TBGG has application to a variety of problems using finite difference techniques, such as computational fluid dynamics. Examples include the creation of a C-type grid about an airfoil and a nozzle configuration in which no left or right boundaries are specified. The underlying two-boundary technique of grid generation is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are defined by two ordered sets of points, referred to as the top and bottom. Left and right side boundaries may also be specified, and call upon linear blending functions to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth cubic spline functions is also presented. The TBGG program is written in FORTRAN 77. It works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. The program has been implemented on a CDC Cyber 170 series computer using NOS 2.4 operating system, with a central memory requirement of 151,700 (octal) 60 bit words. TBGG requires a Tektronix 4015 terminal and the DI-3000 Graphics Library of Precision Visuals, Inc. TBGG was developed in 1986.
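A highly simplified sketch of the two-boundary idea follows: interior grid points are blended between a fixed bottom boundary and a fixed top boundary using a cubic Hermite-style weighting in the computational coordinate. This is illustrative only; TBGG's actual formulation also incorporates boundary derivatives, side-boundary blending, and user-supplied control functions.

```python
import numpy as np

def two_boundary_grid(bottom, top, n_eta):
    """Blend interior grid lines between two fixed boundaries.

    bottom, top : (n_xi, 2) arrays of (x, y) boundary points at matching stations
    n_eta       : number of grid lines from the bottom boundary to the top

    Uses the cubic Hermite-style weight s(eta) = 3*eta**2 - 2*eta**3 instead of
    a plain linear blend. (Sketch only; not TBGG's full interpolation.)
    """
    eta = np.linspace(0.0, 1.0, n_eta)
    s = 3.0 * eta**2 - 2.0 * eta**3  # smooth blending weight in [0, 1]
    return (1.0 - s)[:, None, None] * bottom[None, :, :] + s[:, None, None] * top[None, :, :]

# Toy example: flat bottom wall, gently curved top wall
xi = np.linspace(0.0, 1.0, 21)
bottom = np.column_stack([xi, np.zeros_like(xi)])
top = np.column_stack([xi, 1.0 + 0.2 * np.sin(np.pi * xi)])
grid = two_boundary_grid(bottom, top, n_eta=11)
print(grid.shape)  # (11, 21, 2): n_eta grid lines, n_xi points each, (x, y)
```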
HR 7578 - A K dwarf double-lined spectroscopic binary with peculiar abundances
NASA Technical Reports Server (NTRS)
Fekel, F. C., Jr.; Beavers, W. I.
1983-01-01
The number of double-lined K and M dwarf binaries currently known is quite small, only a dozen or fewer of each type. The HR 7578 system was classified as dK5 on the Mount Wilson system and as K2 V on the MK system. A summary of radial-velocity measurements including the observatory and weight of each observation is given in a table. The star with the stronger lines has been called component A. The final orbital element solution with all observations appropriately weighted was computed with a differential corrections computer program described by Barker et al. (1967). The program had been modified for the double-lined case. Of particular interest are the very large eccentricity of the system and the large minimum masses for each component. These large minimum masses suggest that eclipses may be detectable despite the relatively long period and small radii of the stars.
Analytical modeling of helicopter static and dynamic induced velocity in GRASP
NASA Technical Reports Server (NTRS)
Kunz, Donald L.; Hodges, Dewey H.
1987-01-01
The methodology used by the General Rotorcraft Aeromechanical Stability Program (GRASP) to model the characteristics of the flow through a helicopter rotor in hovering or axial flight is described. Since the induced flow plays a significant role in determining the aeroelastic properties of rotorcraft, the computation of the induced flow is an important aspect of the program. Because of the combined finite-element/multibody methodology used as the basis for GRASP, the implementation of induced velocity calculations presented an unusual challenge to the developers. To preserve the modelling flexibility and generality of the code, it was necessary to depart from the traditional methods of computing the induced velocity. This is accomplished by calculating the actuator disc contributions to the rotor loads in a separate element called the air mass element, and then performing the calculations of the aerodynamic forces on individual blade elements within the aeroelastic beam element.
MUSCLE: multiple sequence alignment with high accuracy and high throughput.
Edgar, Robert C
2004-01-01
We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using kmer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5.com/muscle.
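The kmer-counting distance used for MUSCLE's fast initial distance estimates can be sketched roughly as below; the fractional common-kmer measure shown here is illustrative and is not MUSCLE's exact definition.

```python
from collections import Counter

def kmer_distance(seq1, seq2, k=3):
    """Rough kmer-based distance between two sequences: one minus the fraction
    of shared k-mers (with multiplicity), in the spirit of the fast distance
    estimates used to build a guide tree. Not MUSCLE's exact formula."""
    c1 = Counter(seq1[i:i + k] for i in range(len(seq1) - k + 1))
    c2 = Counter(seq2[i:i + k] for i in range(len(seq2) - k + 1))
    shared = sum((c1 & c2).values())  # common k-mer count
    denom = min(sum(c1.values()), sum(c2.values()))
    return (1.0 - shared / denom) if denom else 1.0

print(kmer_distance("MKVLTAAGSE", "MKVLSAAGSE"))   # similar sequences: small distance
print(kmer_distance("MKVLTAAGSE", "GGWWPPYYHHQ"))  # unrelated sequences: distance of 1
```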
Two-Dimensional Finite Element Ablative Thermal Response Analysis of an Arcjet Stagnation Test
NASA Technical Reports Server (NTRS)
Dec, John A.; Laub, Bernard; Braun, Robert D.
2011-01-01
The finite element ablation and thermal response (FEAtR, henceforth called FEAR) design and analysis program simulates the one, two, or three-dimensional ablation, internal heat conduction, thermal decomposition, and pyrolysis gas flow of thermal protection system materials. As part of a code validation study, two-dimensional axisymmetric results from FEAR are compared in this paper to thermal response data obtained from an arc-jet stagnation test. The results from FEAR are also compared to the two-dimensional axisymmetric computations from the two-dimensional implicit thermal response and ablation program under the same arcjet conditions. The ablating material being used in this arcjet test is phenolic impregnated carbon ablator with an LI-2200 insulator as backup material. The test is performed at the NASA Ames Research Center Interaction Heating Facility. Spatially distributed computational fluid dynamics solutions for the flow field around the test article are used for the surface boundary conditions.
Software for biomedical engineering signal processing laboratory experiments.
Tompkins, Willis J; Wilson, J
2009-01-01
In the early 1990s we developed a special computer program called UW DigiScope to provide a mechanism for anyone interested in biomedical digital signal processing to study the field without requiring any other instrument except a personal computer. There are many digital filtering and pattern recognition algorithms used in processing biomedical signals. In general, students have very limited opportunity to have hands-on access to the mechanisms of digital signal processing. In a typical course, the filters are designed non-interactively, which does not provide the student with significant understanding of the design constraints of such filters nor their actual performance characteristics. UW DigiScope 3.0 is the first major update since version 2.0 was released in 1994. This paper provides details on how the new version, based on MATLAB, works with signals, including the filter design tool that is the programming interface between UW DigiScope and processing algorithms.
NASA Technical Reports Server (NTRS)
Byrnes, D. V.; Carney, P. C.; Underwood, J. W.; Vogt, E. D.
1974-01-01
The six month effort was responsible for the development, test, conversion, and documentation of computer software for the mission analysis of missions to halo orbits about libration points in the earth-sun system. The software consisting of two programs called NOMNAL and ERRAN is part of the Space Trajectories Error Analysis Programs. The program NOMNAL targets a transfer trajectory from earth on a given launch date to a specified halo orbit on a required arrival date. Either impulsive or finite thrust insertion maneuvers into halo orbit are permitted by the program. The transfer trajectory is consistent with a realistic launch profile input by the user. The second program ERRAN conducts error analyses of the targeted transfer trajectory. Measurements including range, doppler, star-planet angles, and apparent planet diameter are processed in a Kalman-Schmidt filter to determine the trajectory knowledge uncertainty.
ERIC Educational Resources Information Center
Erdem, Cahit; Saykili, Abdullah; Kocyigit, Mehmet
2018-01-01
This study primarily aims to adapt the Foreign Language Learning (FLL), Computer assisted Learning (CAL) and Computer assisted Language Learning (CALL) scales developed by Vandewaetere and Desmet into Turkish context. The instrument consists of three scales which are "the attitude towards CALL questionnaire" ("A-CALL")…
Onboard Short Term Plan Viewer
NASA Technical Reports Server (NTRS)
Hall, Tim; LeBlanc, Troy; Ulman, Brian; McDonald, Aaron; Gramm, Paul; Chang, Li-Min; Keerthi, Suman; Kivlovitz, Dov; Hadlock, Jason
2011-01-01
Onboard Short Term Plan Viewer (OSTPV) is a computer program for electronic display of mission plans and timelines, both aboard the International Space Station (ISS) and in ISS ground control stations located in several countries. OSTPV was specifically designed both (1) for use within the limited ISS computing environment and (2) to be compatible with computers used in ground control stations. OSTPV supplants a prior system in which, aboard the ISS, timelines were printed on paper and incorporated into files that also contained other paper documents. Hence, the introduction of OSTPV has both reduced the consumption of resources and saved time in updating plans and timelines. OSTPV accepts, as input, the mission timeline output of a legacy, print-oriented, UNIX-based program called "Consolidated Planning System" and converts the timeline information for display in an interactive, dynamic, Windows Web-based graphical user interface that is used by both the ISS crew and ground control teams in real time. OSTPV enables the ISS crew to electronically indicate execution of timeline steps, launch electronic procedures, and efficiently report to ground control teams on the statuses of ISS activities, all by use of laptop computers aboard the ISS.
Lien, Rebecca K; Schillo, Barbara A; Mast, Jay L; Lukowski, Amy V; Greenseid, Lija O; Keith, Jennifer D; Keller, Paula A
2016-01-01
Tobacco users in all 50 states have access to quitline telephone counseling and cessation medications. While studies show multiple calls relate to quit success, most participants do not complete a full call series. To date, quitline program use studies have analyzed single factors-such as number of calls or counseling minutes. This study combines multiple factors of quitline program use across 2 states to describe how participants use a 5-call program; assess whether intensity of program use is associated with participant subgroups; and assess whether key outcomes (quitting, satisfaction) are associated with intensity. This observational study examines data for quitline participants in Minnesota (n = 2844) and Pennsylvania (n = 14 359) in 2011 and 2012. A subset of participants was surveyed 7 months after registration to assess key outcomes (response rates: Minnesota 65%; Pennsylvania 60%). Quitline utilization data were used to identify program use variables: nicotine replacement therapy provision, number of counseling calls, number of counseling minutes, days from first to last counseling call, and days from registration to first counseling call. Ten program use groups were created using all 5 program use variables, from lowest (1) to highest (10) intensity. Results were similar for both states. Only 11% of Minnesota and 8% of Pennsylvania participants completed all 5 calls. Intensity of quitline program use was associated with several participant characteristics including health conditions and age. Both quit status and program satisfaction were associated with program use intensity. Quit rates peaked in group 9, participants who received the full 5-call program. Quitlines should focus on engaging participants in multiple calls to improve quit outcomes. In addition, it is important to leverage multiple program use factors for a fuller understanding of how quitline participants use a program.
Runtime optimization of an application executing on a parallel computer
None
2014-11-25
Identifying a collective operation within an application executing on a parallel computer; identifying a call site of the collective operation; determining whether the collective operation is root-based; if the collective operation is not root-based: establishing a tuning session and executing the collective operation in the tuning session; if the collective operation is root-based, determining whether all compute nodes executing the application identified the collective operation at the same call site; if all compute nodes identified the collective operation at the same call site, establishing a tuning session and executing the collective operation in the tuning session; and if all compute nodes executing the application did not identify the collective operation at the same call site, executing the collective operation without establishing a tuning session.
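The control flow described in this claim can be rendered schematically as follows; this is an illustrative Python sketch of the decision logic only, not code from the actual parallel runtime.

```python
def should_establish_tuning_session(op_is_root_based, call_sites_reported):
    """Schematic decision logic for tuning a collective operation.

    op_is_root_based    : whether the collective has a root (e.g. broadcast/reduce)
    call_sites_reported : call-site identifier reported by each compute node

    Returns True if a tuning session should be established before executing the
    collective, False if the collective should run without tuning.
    """
    if not op_is_root_based:
        return True  # non-root-based collectives are always tuned
    # Root-based: tune only if every node saw the collective at the same call site.
    return len(set(call_sites_reported)) == 1

print(should_establish_tuning_session(False, ["0x4a10", "0x4a10"]))            # True
print(should_establish_tuning_session(True,  ["0x4a10", "0x4a10", "0x4a10"]))  # True
print(should_establish_tuning_session(True,  ["0x4a10", "0x7f21"]))            # False
```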
Runtime optimization of an application executing on a parallel computer
Faraj, Daniel A; Smith, Brian E
2014-11-18
Identifying a collective operation within an application executing on a parallel computer; identifying a call site of the collective operation; determining whether the collective operation is root-based; if the collective operation is not root-based: establishing a tuning session and executing the collective operation in the tuning session; if the collective operation is root-based, determining whether all compute nodes executing the application identified the collective operation at the same call site; if all compute nodes identified the collective operation at the same call site, establishing a tuning session and executing the collective operation in the tuning session; and if all compute nodes executing the application did not identify the collective operation at the same call site, executing the collective operation without establishing a tuning session.
Runtime optimization of an application executing on a parallel computer
Faraj, Daniel A.; Smith, Brian E.
2013-01-29
Identifying a collective operation within an application executing on a parallel computer; identifying a call site of the collective operation; determining whether the collective operation is root-based; if the collective operation is not root-based: establishing a tuning session and executing the collective operation in the tuning session; if the collective operation is root-based, determining whether all compute nodes executing the application identified the collective operation at the same call site; if all compute nodes identified the collective operation at the same call site, establishing a tuning session and executing the collective operation in the tuning session; and if all compute nodes executing the application did not identify the collective operation at the same call site, executing the collective operation without establishing a tuning session.
NASA Astrophysics Data System (ADS)
Friedrich, J.
1999-08-01
As lecturers, our main concern and goal is to develop more attractive and efficient ways of communicating up-to-date scientific knowledge to our students and facilitate an in-depth understanding of physical phenomena. Computer-based instruction is very promising for helping both teachers and learners in this difficult task, which involves complex cognitive psychological processes. This complexity is reflected in high demands on the design and implementation methods used to create computer-assisted learning (CAL) programs. Due to their concepts, flexibility, maintainability and extended library resources, object-oriented modeling techniques are very suitable for producing this type of pedagogical tool. Computational fluid dynamics (CFD) not only enjoys a growing importance in today's research, but is also very powerful for teaching and learning fluid dynamics. For this purpose, a university-level educational PC program called 'CFDLab 1.1' for Windows™ was developed with an interactive graphical user interface (GUI) for multitasking and point-and-click operations. It uses the dual reciprocity boundary element method as a versatile numerical scheme, allowing it to handle a variety of relevant governing equations in two dimensions on personal computers thanks to its simple pre- and postprocessing, including the 2D Laplace, Poisson, diffusion, and transient convection-diffusion equations.
ACToR-Aggregated Computational Resource
ACToR (Aggregated Computational Toxicology Resource) is a database and set of software applications that bring into one central location many types and sources of data on environmental chemicals. Currently, the ACToR chemical database contains information on chemical structure, in vitro bioassays and in vivo toxicology assays derived from more than 150 sources including the U.S. Environmental Protection Agency (EPA), Centers for Disease Control (CDC), U.S. Food & Drug Administration (FDA), National Institutes of Health (NIH), state agencies, corresponding government agencies in Canada, Europe and Japan, universities, the World Health Organization (WHO) and non-governmental organizations (NGOs). At the EPA National Center for Computational Toxicology, ACToR helps manage large data sets being used in a high throughput environmental chemical screening and prioritization program called ToxCast(TM).
NASA Technical Reports Server (NTRS)
Nagy, S.
1988-01-01
Due to the extraordinary distances scanned by modern telescopes, optical surfaces in such telescopes must be manufactured to exacting standards of perfection, on the order of a few thousandths of a centimeter. The detection of imperfections of less than 1/20 of a wavelength of light, for application in the building of the mirror for the Space Infrared Telescope Facility, was undertaken. Because the mirror must be kept very cold while in space, another factor comes into effect: cryogenics. The process used to test a specific mirror under cryogenic conditions is described, including the follow-up analysis accomplished through computer work. To better illustrate the process and analysis, a Pyrex Hex-Core mirror is followed through the process from the laser interferometry in the lab to computer analysis via a computer program called FRINGE. This analysis via FRINGE is detailed.
ACToR - Aggregated Computational Toxicology Resource
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judson, Richard; Richard, Ann; Dix, David
2008-11-15
ACToR (Aggregated Computational Toxicology Resource) is a database and set of software applications that bring into one central location many types and sources of data on environmental chemicals. Currently, the ACToR chemical database contains information on chemical structure, in vitro bioassays and in vivo toxicology assays derived from more than 150 sources including the U.S. Environmental Protection Agency (EPA), Centers for Disease Control (CDC), U.S. Food and Drug Administration (FDA), National Institutes of Health (NIH), state agencies, corresponding government agencies in Canada, Europe and Japan, universities, the World Health Organization (WHO) and non-governmental organizations (NGOs). At the EPA National Center for Computational Toxicology, ACToR helps manage large data sets being used in a high-throughput environmental chemical screening and prioritization program called ToxCast(TM).
CAD system for automatic analysis of CT perfusion maps
NASA Astrophysics Data System (ADS)
Hachaj, T.; Ogiela, M. R.
2011-03-01
In this article, the authors present novel algorithms developed for a computer-assisted diagnosis (CAD) system for the analysis of dynamic brain perfusion computed tomography (CT) maps: cerebral blood flow (CBF) and cerebral blood volume (CBV). These methods perform both quantitative analysis [detection, measurement, and description, with a brain anatomy atlas (AA), of potential asymmetries/lesions] and qualitative analysis (semantic interpretation of visualized symptoms). The semantic interpretation of visualized symptoms (deciding whether a lesion is ischemic or hemorrhagic, and whether the brain tissue is at risk of infarction) is done by so-called cognitive inference processes that allow reasoning about the character of pathological regions based on specialist image knowledge. The whole system is implemented on the .NET platform (C# programming language) and can be used on any standard PC with the .NET Framework installed.
Mendel-GPU: haplotyping and genotype imputation on graphics processing units
Chen, Gary K.; Wang, Kai; Stram, Alex H.; Sobel, Eric M.; Lange, Kenneth
2012-01-01
Motivation: In modern sequencing studies, one can improve the confidence of genotype calls by phasing haplotypes using information from an external reference panel of fully typed unrelated individuals. However, the computational demands are so high that they prohibit researchers with limited computational resources from haplotyping large-scale sequence data. Results: Our graphics processing unit based software delivers haplotyping and imputation accuracies comparable to competing programs at a fraction of the computational cost and peak memory demand. Availability: Mendel-GPU, our OpenCL software, runs on Linux platforms and is portable across AMD and nVidia GPUs. Users can download both code and documentation at http://code.google.com/p/mendel-gpu/. Contact: gary.k.chen@usc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22954633
NASA Astrophysics Data System (ADS)
Chang, Chao-Hsi; Wang, Jian-Xiong; Wu, Xing-Gang
2006-11-01
An upgraded version of the package BCVEGPY2.0 [C.-H. Chang, J.-X. Wang, X.-G. Wu, Comput. Phys. Commun. 174 (2006) 241] is presented, which works under the LINUX system and is named BCVEGPY2.1. With this version and, additionally, a GNU C compiler, users may conveniently simulate Bc-events in various experimental environments. It has been organized with better modularity and code reusability (less cross communication among the various modules) than BCVEGPY2.0. Furthermore, in the upgraded version the GNU command make compiles the requested code with the help of a master makefile in the main code directory and then builds an executable file with the default name run. Finally, this paper may also be considered an erratum: typographical errors in BCVEGPY2.0 and the corresponding corrections are listed. New version program (BCVEGPY2.1) summary. Title of program: BCVEGPY2.1. Catalogue identifier: ADTJ_v2_1. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTJ_v2_1. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Reference to original program: BCVEGPY2.0. Reference in CPC: Comput. Phys. Commun. 174 (2006) 241. Does the new version supersede the old program: No. Computer: Any LINUX-based PC with FORTRAN 77 or FORTRAN 90 and a GNU C compiler. Operating systems: LINUX. Programming language used: FORTRAN 77/90. Memory required to execute with typical data: About 2.0 MB. No. of lines in distributed program, including test data, etc.: 31 521. No. of bytes in distributed program, including test data, etc.: 1 310 179. Distribution format: tar.gz. Nature of physical problem: Hadronic production of the Bc meson itself and its excited states. Method of solution: The code can optionally generate weighted and unweighted events. An interface to PYTHIA is provided to meet the needs of jet hadronization in the production. Restrictions on the complexity of the problem: The hadronic production of (cb¯)-quarkonium in S-wave and P-wave states via the mechanism of gluon-gluon fusion is given by the so-called 'complete calculation' approach. Reasons for new version: Responding to feedback from users, we have rearranged the program so that it can be easily adapted by users to their own experimental environment (e.g. detector acceptances and experimental cuts). Considerable effort has gone into rearranging the program into several modules with little cross communication among them; the main program is slimmed down, and all further actions are decoupled from it and can be easily called for various purposes. Typical running time: The typical running time is machine and user-parameter dependent. Typically, for production of the S-wave (cb¯)-quarkonium, when IDWTUP = 1, it takes about 20 hours on a 1.8 GHz Intel P4 machine to generate 1000 events; however, when IDWTUP = 3, generating 10^6 events takes only about 40 minutes. Production of the P-wave (cb¯)-quarkonium takes almost twice as long as that of the S-wave quarkonium. Summary of the changes (improvements): (1) The structure and organization of the program have been changed substantially. The new version package BCVEGPY2.1 has been divided into several modules with less cross communication among the modules (some old-version source files are divided into several parts for this purpose).
The main program is slimmed down, and all further actions are decoupled from it and can be easily called for various purposes. All of the Fortran code is organized in the main code directory, named bcvegpy2.1, which contains the main program, all of its prerequisite files, and subsidiary folders (subdirectories of the main code directory). The method for setting the parameters is the same as that of the previous versions [C.-H. Chang, C. Driouich, P. Eerola, X.-G. Wu, Comput. Phys. Commun. 159 (2004) 192, hep-ph/0309120].
SCEAPI: A unified Restful Web API for High-Performance Computing
NASA Astrophysics Data System (ADS)
Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi
2017-10-01
The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need a high-quality programming interface for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources over HTTP or HTTPS. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI, including authentication, file transfer and job management (creating, submitting and monitoring jobs), and how SCEAPI can be used conveniently. Finally, we discuss how to quickly exploit more HPC resources for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution for extending opportunistic HPC resources.
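A minimal sketch of a client interacting with a RESTful HPC service of this kind (submit a job script, then poll its status) is shown below. The base URL, endpoint paths, authentication header, and JSON field names are invented placeholders for illustration; they are not SCEAPI's actual interface.

```python
# Hypothetical client for a RESTful HPC job service; endpoints and fields are
# placeholders, not the real SCEAPI API.
import time
import requests

BASE = "https://hpc-api.example.cn/v1"          # placeholder base URL
HEADERS = {"Authorization": "Bearer <token>"}   # placeholder auth token

def submit_job(script_path, cluster):
    """Upload a job script and return the job identifier assigned by the service."""
    with open(script_path, "rb") as f:
        resp = requests.post(f"{BASE}/jobs", headers=HEADERS,
                             files={"script": f}, data={"cluster": cluster})
    resp.raise_for_status()
    return resp.json()["job_id"]

def wait_for_job(job_id, poll_seconds=30):
    """Poll the job resource until it reaches a terminal state."""
    while True:
        resp = requests.get(f"{BASE}/jobs/{job_id}", headers=HEADERS)
        resp.raise_for_status()
        state = resp.json()["state"]
        if state in ("FINISHED", "FAILED"):
            return state
        time.sleep(poll_seconds)

# Usage sketch (not executed here):
# job_id = submit_job("run_case.sh", cluster="era")
# print(wait_for_job(job_id))
```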
NASA Astrophysics Data System (ADS)
The Ocean Research Institute of the University of Tokyo and the National Science Foundation (NSF) have signed a Memorandum of Understanding for cooperation in the Ocean Drilling Program (ODP). The agreement calls for Japanese participation in ODP and an annual contribution of $2.5 million in U.S. currency for the project's 9 remaining years, according to NSF. ODP is an international project whose mission is to learn more about the formation and development of the earth through the collection and examination of core samples from beneath the ocean. The program uses the drillship JOIDES Resolution, which is equipped with laboratories and computer facilities. The Joint Oceanographic Institutions for Deep Earth Sampling (JOIDES), an international group of scientists, provides overall science planning and program advice regarding ODP's science goals and objectives.
NASA Technical Reports Server (NTRS)
Elliott, R. D.; Werner, N. M.; Baker, W. M.
1975-01-01
The Aerodynamic Data Analysis and Integration System (ADAIS) is described. It was developed as a highly interactive computer graphics program capable of manipulating large quantities of data so that addressable elements of a data base can be called up for graphic display, compared, curve fit, stored, retrieved, differenced, etc. The general nature of the system is evidenced by the fact that it has already seen limited use with data bases consisting of thermodynamic, basic loads, and flight dynamics data. Productivity with ADAIS is routinely five times that of conventional manual methods of wind tunnel data analysis. In wind tunnel data analysis, data from one or more runs of a particular test may be called up and displayed along with data from one or more runs of a different test. Curves may be faired through the data points by any of four methods, including cubic spline and least squares polynomial fit up to seventh order.
2009.1 Revision of the Evaluated Nuclear Data Library (ENDL2009.1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, I. J.; Beck, B.; Descalles, M. A.
LLNL's Computational Nuclear Data and Theory Group have created a 2009.1 revised release of the Evaluated Nuclear Data Library (ENDL2009.1). This library is designed to support LLNL's current and future nuclear data needs and will be employed in nuclear reactor, nuclear security and stockpile stewardship simulations with ASC codes. The ENDL2009 database was the most complete nuclear database for Monte Carlo and deterministic transport of neutrons and charged particles. It was assembled with strong support from the ASC PEM and Attribution programs, leveraged with support from Campaign 4 and the DOE/Office of Science's US Nuclear Data Program. This document lists the revisions and fixes made in a new release called ENDL2009.1, by comparing with the existing data in the original release, which is now called ENDL2009.0. These changes are made in conjunction with the revisions for ENDL2011.1, so that both the .1 releases are as free as possible of known defects.
Transparent Ada rendezvous in a fault tolerant distributed system
NASA Technical Reports Server (NTRS)
Racine, Roger
1986-01-01
There are many problems associated with distributing an Ada program over a loosely coupled communication network. Some of these problems involve the various aspects of the distributed rendezvous. The problems addressed involve supporting the delay statement in a selective call and supporting the else clause in a selective call. Most of these difficulties are compounded by the need for an efficient communication system. The difficulties are compounded even more by considering the possibility of hardware faults occurring while the program is running. With a hardware fault tolerant computer system, it is possible to design a distribution scheme and communication software which is efficient and allows Ada semantics to be preserved. An Ada design for the communications software of one such system will be presented, including a description of the services provided in the seven layers of an International Standards Organization (ISO) Open System Interconnect (OSI) model communications system. The system capabilities (hardware and software) that allow this communication system will also be described.
NASA Astrophysics Data System (ADS)
Stock, Joachim W.; Kitzmann, Daniel; Patzer, A. Beate C.; Sedlmayr, Erwin
2018-06-01
For the calculation of complex neutral/ionized gas phase chemical equilibria, we present a versatile and efficient semi-analytical computer program called FastChem. The applied method is based on the solution of a system of coupled nonlinear (and linear) algebraic equations, namely the law of mass action and the element conservation equations including charge balance, in many variables. Specifically, the system of equations is decomposed into a set of coupled nonlinear equations in one variable each, which are solved analytically whenever feasible to reduce computation time. Notably, the electron density is determined by using the method of Nelder and Mead at low temperatures. The program is written in object-oriented C++, which makes it easy to couple the code with other programs, although a stand-alone version is provided. FastChem can be used in parallel or sequentially and is available under the GNU General Public License version 3 at https://github.com/exoclime/FastChem together with several sample applications. The code has been successfully validated against previous studies and its convergence behavior has been tested even for extreme physical parameter ranges down to 100 K and up to 1000 bar. FastChem converges stably and robustly in even the most demanding chemical situations, which have sometimes posed extreme challenges for previous algorithms.
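The flavor of the per-element analytic solution can be illustrated with a toy single-element equilibrium (not FastChem's actual equation set): for atoms A and molecules A2, the law of mass action combined with element conservation reduces to a quadratic with a closed-form positive root.

```python
import math

def single_element_equilibrium(n_total, k_assoc):
    """Toy equilibrium for one element: atoms A and molecules A2 obeying the
    mass-action law n_A2 = k_assoc * n_A**2 and element conservation
    n_A + 2 * n_A2 = n_total. The resulting quadratic
    2*k_assoc*n_A**2 + n_A - n_total = 0 is solved analytically, mirroring the
    closed-form per-element solutions FastChem uses whenever feasible
    (illustrative only; the real code couples many such equations)."""
    n_a = (-1.0 + math.sqrt(1.0 + 8.0 * k_assoc * n_total)) / (4.0 * k_assoc)
    n_a2 = k_assoc * n_a ** 2
    return n_a, n_a2

n_a, n_a2 = single_element_equilibrium(n_total=1.0, k_assoc=10.0)
print(f"n_A = {n_a:.4f}, n_A2 = {n_a2:.4f}, element total = {n_a + 2 * n_a2:.4f}")
```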
Programmed Evolution for Optimization of Orthogonal Metabolic Output in Bacteria
Eckdahl, Todd T.; Campbell, A. Malcolm; Heyer, Laurie J.; Poet, Jeffrey L.; Blauch, David N.; Snyder, Nicole L.; Atchley, Dustin T.; Baker, Erich J.; Brown, Micah; Brunner, Elizabeth C.; Callen, Sean A.; Campbell, Jesse S.; Carr, Caleb J.; Carr, David R.; Chadinha, Spencer A.; Chester, Grace I.; Chester, Josh; Clarkson, Ben R.; Cochran, Kelly E.; Doherty, Shannon E.; Doyle, Catherine; Dwyer, Sarah; Edlin, Linnea M.; Evans, Rebecca A.; Fluharty, Taylor; Frederick, Janna; Galeota-Sprung, Jonah; Gammon, Betsy L.; Grieshaber, Brandon; Gronniger, Jessica; Gutteridge, Katelyn; Henningsen, Joel; Isom, Bradley; Itell, Hannah L.; Keffeler, Erica C.; Lantz, Andrew J.; Lim, Jonathan N.; McGuire, Erin P.; Moore, Alexander K.; Morton, Jerrad; Nakano, Meredith; Pearson, Sara A.; Perkins, Virginia; Parrish, Phoebe; Pierson, Claire E.; Polpityaarachchige, Sachith; Quaney, Michael J.; Slattery, Abagael; Smith, Kathryn E.; Spell, Jackson; Spencer, Morgan; Taye, Telavive; Trueblood, Kamay; Vrana, Caroline J.; Whitesides, E. Tucker
2015-01-01
Current use of microbes for metabolic engineering suffers from loss of metabolic output due to natural selection. Rather than combat the evolution of bacterial populations, we chose to embrace what makes biological engineering unique among engineering fields – evolving materials. We harnessed bacteria to compute solutions to the biological problem of metabolic pathway optimization. Our approach is called Programmed Evolution to capture two concepts. First, a population of cells is programmed with DNA code to enable it to compute solutions to a chosen optimization problem. As analog computers, bacteria process known and unknown inputs and direct the output of their biochemical hardware. Second, the system employs the evolution of bacteria toward an optimal metabolic solution by imposing fitness defined by metabolic output. The current study is a proof-of-concept for Programmed Evolution applied to the optimization of a metabolic pathway for the conversion of caffeine to theophylline in E. coli. Introduced genotype variations included strength of the promoter and ribosome binding site, plasmid copy number, and chaperone proteins. We constructed 24 strains using all combinations of the genetic variables. We used a theophylline riboswitch and a tetracycline resistance gene to link theophylline production to fitness. After subjecting the mixed population to selection, we measured a change in the distribution of genotypes in the population and an increased conversion of caffeine to theophylline among the most fit strains, demonstrating Programmed Evolution. Programmed Evolution inverts the standard paradigm in metabolic engineering by harnessing evolution instead of fighting it. Our modular system enables researchers to program bacteria and use evolution to determine the combination of genetic control elements that optimizes catabolic or anabolic output and to maintain it in a population of cells. Programmed Evolution could be used for applications in energy, pharmaceuticals, chemical commodities, biomining, and bioremediation. PMID:25714374
Programmed evolution for optimization of orthogonal metabolic output in bacteria.
Eckdahl, Todd T; Campbell, A Malcolm; Heyer, Laurie J; Poet, Jeffrey L; Blauch, David N; Snyder, Nicole L; Atchley, Dustin T; Baker, Erich J; Brown, Micah; Brunner, Elizabeth C; Callen, Sean A; Campbell, Jesse S; Carr, Caleb J; Carr, David R; Chadinha, Spencer A; Chester, Grace I; Chester, Josh; Clarkson, Ben R; Cochran, Kelly E; Doherty, Shannon E; Doyle, Catherine; Dwyer, Sarah; Edlin, Linnea M; Evans, Rebecca A; Fluharty, Taylor; Frederick, Janna; Galeota-Sprung, Jonah; Gammon, Betsy L; Grieshaber, Brandon; Gronniger, Jessica; Gutteridge, Katelyn; Henningsen, Joel; Isom, Bradley; Itell, Hannah L; Keffeler, Erica C; Lantz, Andrew J; Lim, Jonathan N; McGuire, Erin P; Moore, Alexander K; Morton, Jerrad; Nakano, Meredith; Pearson, Sara A; Perkins, Virginia; Parrish, Phoebe; Pierson, Claire E; Polpityaarachchige, Sachith; Quaney, Michael J; Slattery, Abagael; Smith, Kathryn E; Spell, Jackson; Spencer, Morgan; Taye, Telavive; Trueblood, Kamay; Vrana, Caroline J; Whitesides, E Tucker
2015-01-01
Current use of microbes for metabolic engineering suffers from loss of metabolic output due to natural selection. Rather than combat the evolution of bacterial populations, we chose to embrace what makes biological engineering unique among engineering fields - evolving materials. We harnessed bacteria to compute solutions to the biological problem of metabolic pathway optimization. Our approach is called Programmed Evolution to capture two concepts. First, a population of cells is programmed with DNA code to enable it to compute solutions to a chosen optimization problem. As analog computers, bacteria process known and unknown inputs and direct the output of their biochemical hardware. Second, the system employs the evolution of bacteria toward an optimal metabolic solution by imposing fitness defined by metabolic output. The current study is a proof-of-concept for Programmed Evolution applied to the optimization of a metabolic pathway for the conversion of caffeine to theophylline in E. coli. Introduced genotype variations included strength of the promoter and ribosome binding site, plasmid copy number, and chaperone proteins. We constructed 24 strains using all combinations of the genetic variables. We used a theophylline riboswitch and a tetracycline resistance gene to link theophylline production to fitness. After subjecting the mixed population to selection, we measured a change in the distribution of genotypes in the population and an increased conversion of caffeine to theophylline among the most fit strains, demonstrating Programmed Evolution. Programmed Evolution inverts the standard paradigm in metabolic engineering by harnessing evolution instead of fighting it. Our modular system enables researchers to program bacteria and use evolution to determine the combination of genetic control elements that optimizes catabolic or anabolic output and to maintain it in a population of cells. Programmed Evolution could be used for applications in energy, pharmaceuticals, chemical commodities, biomining, and bioremediation.
Murphy, Mary; Smith, Lucia; Palma, Anton; Lounsbury, David; Bijur, Polly; Chambers, Paul; Gallagher, E John
2013-01-01
Injuries from motor vehicle crashes are a significant public health problem. The emergency department (ED) provides a setting that may be used to screen for behaviors that increase risk for motor vehicle crashes and provide brief interventions to people who might otherwise not have access to screening and intervention. The purpose of the present study was to (1) assess the feasibility of using a computer-assisted screening program to educate ED patients about risky driving behaviors, (2) evaluate patient acceptance of the computer-based traffic safety educational intervention during an ED visit, and (3) assess postintervention changes in risky driving behaviors. Pre/posteducational intervention involving medically stable adult ED patients in a large urban academic ED serving over 100,000 patients annually. Patients completed a self-administered, computer-based program that queried patients on risky driving behaviors (texting, talking, and other forms of distracted driving) and alcohol use. The computer provided patients with educational information on the dangers of these behaviors and data were collected on patient satisfaction with the program. Staff called patients 1 month post-ED visit for a repeat query. One hundred forty-nine patients participated, and 111 completed 1-month follow up (75%); the mean age was 39 (range: 21-70), 59 percent were Hispanic, and 52 percent were male. Ninety-seven percent of patients reported that the program was easy to use and that they were comfortable receiving this education via computer during their ED visit. All driving behaviors significantly decreased in comparison to baseline with the following reductions reported: talking on the phone, 30 percent; aggressive driving, 30 percent; texting while driving, 19 percent; drowsy driving, 16 percent; driving while multitasking, 12 percent; and drinking and driving, 9 percent. Overall, patients were very satisfied receiving educational information about these behaviors via computer during their ED visits and found the program easy to use. We found a high prevalence of self-reported risky driving behaviors in our ED population. At 1-month follow-up, patients reported a significant decrease in these behaviors. This study indicates that a low-intensity, computer-based educational intervention during an ED visit may be a useful approach to educate patients about safe driving behaviors and safe drinking limits and help promote behavior change.
ERIC Educational Resources Information Center
Ambrose, Regina Maria; Palpanathan, Shanthini
2017-01-01
Computer-assisted language learning (CALL) has evolved through various stages in both technology as well as the pedagogical use of technology (Warschauer & Healey, 1998). Studies show that the CALL trend has facilitated students in their English language writing with useful tools such as computer based activities and word processing. Students…
Blind source computer device identification from recorded VoIP calls for forensic investigation.
Jahanirad, Mehdi; Anuar, Nor Badrul; Wahab, Ainuddin Wahid Abdul
2017-03-01
VoIP services provide fertile ground for criminal activity; thus, identifying the transmitting computer device from a recorded VoIP call may help the forensic investigator to reveal useful information. It also proves the authenticity of the call recording submitted to the court as evidence. This paper extends a previous study on the use of recorded VoIP calls for blind source computer device identification. Although initial results were promising, the theoretical reasoning for them is yet to be found. The study suggested computing the entropy of mel-frequency cepstrum coefficients (entropy-MFCC) from near-silent segments as an intrinsic feature set that captures the device response function due to the tolerances in the electronic components of individual computer devices. By applying the supervised learning techniques of naïve Bayesian, linear logistic regression, neural networks and support vector machines to the entropy-MFCC features, state-of-the-art identification accuracy of near 99.9% has been achieved on different sets of computer devices for both call recording and microphone recording scenarios. Furthermore, unsupervised learning techniques, including simple k-means, expectation-maximization and density-based spatial clustering of applications with noise (DBSCAN), provided promising results for the call recording dataset by assigning the majority of instances to their correct clusters. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
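An entropy-of-MFCC feature vector in the spirit of the features described above could be sketched as follows. The near-silent segment selection, histogram binning, and library choices here are illustrative simplifications and assumptions, not the paper's exact procedure.

```python
# Hedged sketch: per-coefficient Shannon entropy of MFCCs computed from the
# quietest frames of a recording, as a device-fingerprint feature vector.
import numpy as np
import librosa
from scipy.stats import entropy

def entropy_mfcc_features(wav_path, n_mfcc=13, n_bins=32):
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape (n_mfcc, n_frames)
    rms = librosa.feature.rms(y=y)[0]
    # Crude stand-in for "near-silent segments": keep the quietest 20% of frames.
    n = min(mfcc.shape[1], rms.shape[0])
    quiet = rms[:n] <= np.quantile(rms[:n], 0.2)
    mfcc_quiet = mfcc[:, :n][:, quiet]
    # One Shannon entropy value per MFCC coefficient, from a histogram over frames.
    feats = []
    for coeff in mfcc_quiet:
        hist, _ = np.histogram(coeff, bins=n_bins)
        p = hist / hist.sum() if hist.sum() else np.full(n_bins, 1.0 / n_bins)
        feats.append(entropy(p))
    return np.array(feats)  # feature vector fed to a classifier (e.g. an SVM)

# features = entropy_mfcc_features("call_recording.wav")   # usage sketch
```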
NASA Technical Reports Server (NTRS)
1987-01-01
In a complex computer environment there is ample opportunity for error, a mistake by a programmer, or a software-induced undesirable side effect. In insurance, errors can cost a company heavily, so protection against inadvertent change is a must for the efficient firm. The data processing center at Transport Life Insurance Company has taken a step to guard against accidental changes by adopting a software package called EQNINT (Equations Interpreter Program). EQNINT cross checks the basic formulas in a program against the formulas that make up the major production system. EQNINT assures that formulas are coded correctly and helps catch errors before they affect the customer service or its profitability.
User's operating procedures. Volume 2: Scout project financial analysis program
NASA Technical Reports Server (NTRS)
Harris, C. G.; Haris, D. K.
1985-01-01
A review is presented of the user's operating procedures for the Scout Project Automatic Data system, called SPADS. SPADS is the result of the past seven years of software development on a Prime mini-computer located at the Scout Project Office, NASA Langley Research Center, Hampton, Virginia. SPADS was developed as a single-entry, multiple-cross-reference data management and information retrieval system for the automation of Project office tasks, including engineering, financial, managerial, and clerical support. This volume, two (2) of three (3), provides instructions for operating the Scout Project Financial Analysis program for data retrieval and file maintenance via the user-friendly menu drivers.
Parallel processors and nonlinear structural dynamics algorithms and software
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.; Plaskacz, Edward J.
1989-01-01
The adaptation of a finite element program with explicit time integration to a massively parallel SIMD (single instruction, multiple data) computer, the CONNECTION Machine, is described. The adaptation required the development of a new algorithm, called the exchange algorithm, in which all nodal variables are allocated to the elements, with an exchange of nodal forces at each time step. The architectural and C* programming-language features of the CONNECTION Machine are also summarized. Various alternative data structures and associated algorithms for nonlinear finite element analysis are discussed and compared. Results are presented which demonstrate that the CONNECTION Machine is capable of outperforming the CRAY XMP/14.
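The exchange idea can be sketched in a few lines: nodal quantities are computed element by element (the naturally parallel part) and then summed, or "exchanged," at shared nodes once per explicit time step. The toy 1-D spring chain below, using numpy, is an illustration of that data flow, not the original CONNECTION Machine code.

    # Sketch of the exchange idea (illustrative, not the original CM code):
    # element-level forces are computed independently, then scatter-added at
    # shared nodes at each explicit time step.
    import numpy as np

    n_nodes, n_elems = 5, 4
    conn = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])   # 2-node elements
    u = np.zeros(n_nodes)                                # nodal displacements
    v = np.zeros(n_nodes)                                # nodal velocities
    k, m, dt = 1.0, 1.0, 0.01

    for step in range(100):
        # Element-level force computation (done independently, i.e. in parallel).
        stretch = u[conn[:, 1]] - u[conn[:, 0]]
        f_elem = np.empty((n_elems, 2))
        f_elem[:, 0] = k * stretch
        f_elem[:, 1] = -k * stretch
        # Exchange step: scatter-add element forces back to shared nodes.
        f_node = np.zeros(n_nodes)
        np.add.at(f_node, conn.ravel(), f_elem.ravel())
        # Explicit time integration.
        v += dt * f_node / m
        u += dt * v
        u[0] = 0.0                                       # fixed boundary node
        u[-1] = 0.01 * step * dt                         # prescribed end motion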
Implementation of the fugitive emissions system program: The OxyChem experience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deshmukh, A.
An overview is provided of the Fugitive Emissions System (FES) that has been implemented at Occidental Chemical in conjunction with the computer-based maintenance system called PassPort® developed by Indus Corporation. The goal of the PassPort® FES program has been to interface with facilities data, equipment information, work standards, and work orders. Along the way, several implementation hurdles had to be overcome before a monitoring and regulatory system could be standardized for the appropriate maintenance, process, and environmental groups. This presentation includes a step-by-step account of several case studies that developed during the implementation of the FES system.
Parallel computation with the force
NASA Technical Reports Server (NTRS)
Jordan, H. F.
1985-01-01
A methodology, called the force, supports the construction of programs to be executed in parallel by a force of processes. The number of processes in the force is unspecified, but potentially very large. The force idea is embodied in a set of macros which produce multiprocessor FORTRAN code and has been studied on two shared-memory multiprocessors of fairly different character. The method has simplified the writing of highly parallel programs within a limited class of parallel algorithms and is being extended to cover a broader class. The individual parallel constructs which comprise the force methodology are discussed. Of central concern are their semantics, implementation on different architectures, and performance implications.
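The force macros generated multiprocessor FORTRAN; the following loose analogue in Python, assuming only the standard multiprocessing module, shows the core constructs in spirit: an unspecified number of identical processes, work split by process index, a critical section, and a barrier. It is a sketch of the concept, not the original macro package.

    # Loose analogue of a force-style program (illustrative only): every process
    # runs the same code, the force size is fixed only at start-up, and a barrier
    # separates parallel phases.
    import multiprocessing as mp

    def force_member(rank, nprocs, barrier, total):
        # "Prescheduled loop": each process takes a strided share of the work.
        partial = sum(i * i for i in range(rank, 1000, nprocs))
        with total.get_lock():
            total.value += partial          # critical section
        barrier.wait()                      # all force members synchronize here
        if rank == 0:
            print("sum of squares:", total.value)

    if __name__ == "__main__":
        nprocs = 4                          # the "force" size, chosen at run time
        barrier = mp.Barrier(nprocs)
        total = mp.Value("d", 0.0)
        procs = [mp.Process(target=force_member, args=(r, nprocs, barrier, total))
                 for r in range(nprocs)]
        for p in procs: p.start()
        for p in procs: p.join()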
The PyRosetta Toolkit: a graphical user interface for the Rosetta software suite.
Adolf-Bryfogle, Jared; Dunbrack, Roland L
2013-01-01
The Rosetta Molecular Modeling suite is a command-line-only collection of applications that enable high-resolution modeling and design of proteins and other molecules. Although extremely useful, Rosetta can be difficult to learn for scientists with little computational or programming experience. To that end, we have created a Graphical User Interface (GUI) for Rosetta, called the PyRosetta Toolkit, for creating and running protocols in Rosetta for common molecular modeling and protein design tasks and for analyzing the results of Rosetta calculations. The program is highly extensible so that developers can add new protocols and analysis tools to the PyRosetta Toolkit GUI.
Rule-based programming paradigm: a formal basis for biological, chemical and physical computation.
Krishnamurthy, V; Krishnamurthy, E V
1999-03-01
A rule-based programming paradigm is described as a formal basis for biological, chemical and physical computations. In this paradigm, the computations are interpreted as the outcome arising out of interaction of elements in an object space. The interactions can create new elements (or the same elements with modified attributes) or annihilate old elements according to specific rules. Since the interaction rules are inherently parallel, any number of actions can be performed cooperatively or competitively among the subsets of elements, so that the elements evolve toward an equilibrium, unstable, or chaotic state. Such an evolution may retain certain invariant properties of the attributes of the elements. The object space resembles a Gibbsian ensemble that corresponds to a distribution of points in the space of positions and momenta (called phase space). It permits the introduction of probabilities in rule applications. As each element of the ensemble changes over time, its phase point is carried into a new phase point. The evolution of this probability cloud in phase space corresponds to a distributed probabilistic computation. Thus, this paradigm can handle deterministic exact computation when the initial conditions are exactly specified and the trajectory of evolution is deterministic. Also, it can handle a probabilistic mode of computation if we want to derive macroscopic or bulk properties of matter. We also explain how to support this rule-based paradigm using relational-database-like query processing and transactions.
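A toy interpreter makes the object-space idea concrete: elements in a multiset interact pairwise under rules that may create or annihilate elements, optionally with a probability. The rule and probability below are invented for illustration and are not from the paper.

    # Toy object-space interpreter (illustrative): elements interact according to
    # rules that may create or annihilate elements, optionally probabilistically.
    import random

    def react(a, b):
        """Rule: two 'A' elements annihilate into one 'B' with probability 0.5."""
        if a == "A" and b == "A" and random.random() < 0.5:
            return ["B"]                 # products replace the reactants
        return None                      # rule does not fire

    def evolve(space, steps=1000):
        space = list(space)
        for _ in range(steps):
            if len(space) < 2:
                break
            i, j = random.sample(range(len(space)), 2)
            products = react(space[i], space[j])
            if products is not None:
                for idx in sorted((i, j), reverse=True):
                    space.pop(idx)       # annihilate the interacting elements
                space.extend(products)   # create the new elements
        return space

    print(evolve(["A"] * 20))            # evolves toward a B-rich "equilibrium"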
Simulation of a master-slave event set processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comfort, J.C.
1984-03-01
Event set manipulation may consume a considerable amount of the computation time spent in performing a discrete-event simulation. One way of minimizing this time is to allow event set processing to proceed in parallel with the remainder of the simulation computation. The paper describes a multiprocessor simulation computer, in which all non-event-set processing is performed by the principal processor (called the host). Event set processing is coordinated by a front-end processor (the master) and actually performed by several other functionally identical processors (the slaves). A trace-driven simulation program modeling this system was constructed, and was run with trace output taken from two different simulation programs. Output from this simulation suggests that a significant reduction in run time may be realized by this approach. Sensitivity analysis was performed on the significant parameters to the system (number of slave processors, relative processor speeds, and interprocessor communication times). A comparison between actual and simulation run times for a one-processor system was used to assist in the validation of the simulation. 7 references.
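The partitioning idea can be mocked up sequentially: the event set is split over several slave structures, and a master chooses the globally earliest event to hand back to the host. The sketch below, using heapq, shows only the data flow, with none of the actual parallelism or interprocessor communication of the described machine.

    # Sequential mock-up of the master/slave event-set idea (no real parallelism):
    # the event set is partitioned over several slave heaps; the master inserts
    # events round-robin and returns the globally earliest event to the host.
    import heapq
    import itertools

    class MasterSlaveEventSet:
        def __init__(self, n_slaves=3):
            self.slaves = [[] for _ in range(n_slaves)]
            self.rr = itertools.cycle(range(n_slaves))

        def schedule(self, time, event):
            heapq.heappush(self.slaves[next(self.rr)], (time, event))

        def next_event(self):
            # Master scans the top of each slave heap and pops the earliest one.
            best = min((s for s in self.slaves if s),
                       key=lambda s: s[0], default=None)
            return heapq.heappop(best) if best else None

    es = MasterSlaveEventSet()
    for t, name in [(5.0, "arrive"), (2.0, "depart"), (7.5, "measure")]:
        es.schedule(t, name)
    print(es.next_event())   # -> (2.0, 'depart')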
NASA Technical Reports Server (NTRS)
Putnam, L. E.
1979-01-01
A Neumann solution for inviscid external flow was coupled to a modified Reshotko-Tucker integral boundary-layer technique, the control volume method of Presz for calculating flow in the separated region, and an inviscid one-dimensional solution for the jet exhaust flow in order to predict axisymmetric nozzle afterbody pressure distributions and drag. The viscous and inviscid flows are solved iteratively until convergence is obtained. A computer algorithm of this procedure was written and is called DONBOL. A description of the computer program and a guide to its use are given. Comparisons of the predictions of this method with experiments show that the method accurately predicts the pressure distributions of boattail afterbodies which have the jet exhaust flow simulated by solid bodies. For nozzle configurations which have the jet exhaust simulated by high-pressure air, the present method significantly underpredicts the magnitude of nozzle pressure drag. This deficiency results because the method neglects the effects of jet plume entrainment. This method is limited to subsonic free-stream Mach numbers below that for which the flow over the body of revolution becomes sonic.
NASA Technical Reports Server (NTRS)
Flourens, F.; Morel, T.; Gauthier, D.; Serafin, D.
1991-01-01
Numerical techniques such as Finite Difference Time Domain (FDTD) computer programs, which were first developed to analyze the external electromagnetic environment of an aircraft during a wave illumination, a lightning event, or any kind of current injection, are now very powerful investigative tools. The program, called GORFF-VE, was extended to compute the inner electromagnetic fields that are generated by the penetration of the outer fields through large apertures made in the all-metallic body. The internal fields can then drive the electrical response of a cable network. The coupling between the inside and the outside of the helicopter is implemented using Huygens' principle. Moreover, the spectacular increase in computer resources, such as calculation speed and memory capacity, allows structures as complex as those of helicopters to be modeled accurately. This numerical model was exploited, first, to analyze the electromagnetic environment of an in-flight helicopter for several injection configurations, and second, to design a coaxial return path to simulate the lightning-aircraft interaction with a strong current injection. The E-field and current mappings are the result of these calculations.
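GORFF-VE itself is not public; the minimal one-dimensional Yee update below illustrates the leapfrog E/H scheme on which FDTD programs of this class are built. Grid size, step count, and the Gaussian source are arbitrary illustrative choices.

    # Minimal 1-D FDTD (Yee) update loop, illustrating the leapfrog E/H scheme
    # that FDTD codes are built on (not the GORFF-VE program itself).
    import numpy as np

    nz, nsteps = 200, 500
    ez = np.zeros(nz)          # electric field
    hy = np.zeros(nz)          # magnetic field
    imp0 = 377.0               # free-space impedance (normalized units)

    for n in range(nsteps):
        # Update magnetic field (half a step behind the electric field).
        hy[:-1] += (ez[1:] - ez[:-1]) / imp0
        # Update electric field.
        ez[1:] += (hy[1:] - hy[:-1]) * imp0
        # Gaussian pulse source injected at one grid point.
        ez[50] += np.exp(-((n - 60) / 15.0) ** 2)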
NASA Astrophysics Data System (ADS)
Canal, Fernando; Garcia-Mateos, Jorge; Rodriguez-Larena, Jorge; Rivera, Alejandro; Aparicio, E.
2000-12-01
Medical therapeutic applications using lasers involve understanding the light-tissue interaction, in particular the rate of photochemical and thermal reactions. Tissue is composed of a mix of turbid media. Light propagation in turbid media can be described by the so-called Equation of Radiative Transfer, an integro-differential equation in which scattering, absorption, and internal reflection are significant factors in determining the light distribution in tissue. The Equation of Radiative Transfer, however, cannot commonly be solved analytically. In order to visualize and simulate the effects of laser light on heart tissue (myocardium) in relation to the treatment of irregular heart rhythms, or so-called arrhythmias, a fast interactive computer program has been developed in Java.
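The cited program is written in Java; the compact random-walk sketch below (Python, numpy) illustrates how light transport in turbid media is commonly simulated by Monte Carlo when the radiative transfer equation has no analytic solution. The optical coefficients, isotropic re-scattering, and weight cutoff are simplifying assumptions, not the authors' model.

    # Compact Monte Carlo photon random walk in a turbid slab (illustrative only):
    # each photon takes exponentially distributed steps and loses weight to
    # absorption at every interaction.
    import numpy as np

    rng = np.random.default_rng(0)
    mu_a, mu_s = 0.1, 10.0           # absorption / scattering coefficients (1/mm)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    n_photons, thickness = 10000, 2.0

    final_depth = []
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0     # depth, direction cosine, photon weight
        while 0.0 <= z <= thickness and w > 1e-4:
            z += -np.log(rng.random()) / mu_t * uz   # exponential free path
            w *= albedo                              # partial absorption
            uz = 2.0 * rng.random() - 1.0            # isotropic re-scatter (simplified)
        final_depth.append(min(max(z, 0.0), thickness))

    print("mean penetration depth (mm):", np.mean(final_depth))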
Strategic Computing Computer Vision: Taking Image Understanding To The Next Plateau
NASA Astrophysics Data System (ADS)
Simpson, R. L., Jr.
1987-06-01
The overall objective of the Strategic Computing (SC) Program of the Defense Advanced Research Projects Agency (DARPA) is to develop and demonstrate a new generation of machine intelligence technology which can form the basis for more capable military systems in the future and also maintain a position of world leadership for the US in computer technology. Begun in 1983, SC represents a focused research strategy for accelerating the evolution of new technology and its rapid prototyping in realistic military contexts. Among the very ambitious demonstration prototypes being developed within the SC Program are: 1) the Pilot's Associate, which will aid the pilot in route planning, aerial target prioritization, evasion of missile threats, and aircraft emergency safety procedures during flight; 2) two battle management projects, one for the Army, which is just getting started, called the AirLand Battle Management (ALBM) program, which will use knowledge-based systems technology to assist in the generation and evaluation of tactical options and plans at the Corps level; 3) the other, more established program for the Navy, the Fleet Command Center Battle Management Program (FCCBMP) at Pearl Harbor, which is employing knowledge-based systems and natural language technology in an evolutionary testbed situated in an operational command center to demonstrate and evaluate intelligent decision aids which can assist in the evaluation of fleet readiness and explore alternatives during contingencies; and 4) the Autonomous Land Vehicle (ALV), which integrates in a major robotic testbed the technologies for dynamic image understanding and knowledge-based route planning with replanning during execution, hosted on new advanced parallel architectures. The goal of the Strategic Computing computer vision technology base (SCVision) is to develop generic technology that will enable the construction of complete, robust, high-performance image understanding systems to support a wide range of DoD applications. Possible applications include autonomous vehicle navigation, photointerpretation, smart weapons, and robotic manipulation. This paper provides an overview of the technical and program management plans being used in evolving this critical national technology.
Programming for 1.6 Million cores: Early experiences with IBM's BG/Q SMP architecture
NASA Astrophysics Data System (ADS)
Glosli, James
2013-03-01
With the stall in clock cycle improvements a decade ago, the drive for computational performance has continued along a path of increasing core counts on a processor. The multi-core evolution has been expressed in both symmetric multiprocessor (SMP) architectures and CPU/GPU architectures. Debates rage in the high performance computing (HPC) community over which architecture best serves HPC. In this talk I will not attempt to resolve that debate but perhaps fuel it. I will discuss the experience of exploiting Sequoia, a 98,304-node IBM Blue Gene/Q SMP at Lawrence Livermore National Laboratory. The advantages and challenges of leveraging the computational power of BG/Q will be detailed through the discussion of two applications. The first application is a Molecular Dynamics code called ddcMD. This is a code developed over the last decade at LLNL and ported to BG/Q. The second application is a cardiac modeling code called Cardioid. This is a code that was recently designed and developed at LLNL to exploit the fine-scale parallelism of BG/Q's SMP architecture. Through the lenses of these efforts I'll illustrate the need to rethink how we express and implement our computational approaches. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
NASA Technical Reports Server (NTRS)
1987-01-01
Manugistics, Inc. (formerly AVYX, Inc.) has introduced a new programming language for IBM and IBM-compatible computers called TREES-pls. It is a resource management tool originating from the Space Shuttle program that can be used in such applications as scheduling, resource allocation, project control, information management, and artificial intelligence. Manugistics, Inc. was looking for a flexible tool that could be applied to many problems with minimal adaptation. Among the non-government markets are aerospace, other manufacturing, transportation, health care, food and beverage, and professional services.
Thermal Tracker: The Secret Lives of Bats and Birds Revealed
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Offshore wind developers and stakeholders can accelerate the sustainable, widespread deployment of offshore wind using a new open-source software program, called ThermalTracker. Researchers can now collect the data they need to better understand the potential effects of offshore wind turbines on bird and bat populations. This plug-and-play software can be used with any standard desktop computer, thermal camera, and statistical software to identify species and behaviors of animals in offshore locations.
Integrated Reconfigurable Intelligent Systems (IRIS) for Complex Naval Systems
2010-02-21
RKF45] and Adams variable step-size predictor-corrector methods). While such algorithms naturally are usually used to numerically solve differential…verified by yet another function call. Due to their nature, such methods are referred to as predictor-corrector methods. While computationally expensive… [Report documentation fields: Contract Number N00014-09-C-0394; Authors: Dr. Dimitri N. Mavris, Dr. Yongchang Li]
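The excerpt names RKF45 and Adams variable step-size predictor-corrector methods; the short sketch below shows the predict-then-correct (PECE) pattern with a two-step Adams-Bashforth predictor and an Adams-Moulton corrector. The ODE and step size are arbitrary examples, not the IRIS implementation.

    # Two-step Adams-Bashforth predictor with an Adams-Moulton corrector
    # (PECE pattern), illustrating the predictor-corrector idea; not IRIS code.
    import math

    def f(t, y):
        return -2.0 * y + math.sin(t)          # example ODE: y' = -2y + sin t

    def adams_pece(y0, t0, t_end, h):
        # Bootstrap the multistep method with one explicit Euler step.
        t, y = t0, y0
        f_prev = f(t, y)
        y_next = y + h * f_prev
        t += h
        while t < t_end - 1e-12:
            f_curr = f(t, y_next)
            # Predict (Adams-Bashforth 2): y* = y_n + h(3 f_n - f_{n-1})/2
            y_pred = y_next + h * (3.0 * f_curr - f_prev) / 2.0
            # Correct (Adams-Moulton 2): y_{n+1} = y_n + h(f(t+h, y*) + f_n)/2
            y_corr = y_next + h * (f(t + h, y_pred) + f_curr) / 2.0
            y, y_next, f_prev = y_next, y_corr, f_curr
            t += h
        return y_next

    print(adams_pece(1.0, 0.0, 2.0, 0.01))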
Electromagnetic Scattering from a Homogeneous Body of Revolution
1977-11-01
was supported by the Rome Air Development Center through the Deputy for Electronic Technology under Contract No. F19628-76-C-0300, and through the… Air Force Post Doctoral Program under Contract No. F30602-75-0121. DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING, SYRACUSE UNIVERSITY, SYRACUSE…material cylinders by Chang and Harrington [2], and to material bodies of revolution by Wu [3]. We will call this choice the PMCHW formulation
The h-p Version of the Finite Element Method with Quasiuniform Meshes.
1986-05-01
Noetic Technologies, St. Louis). The theoretical aspects have been studied only recently. The first theoretical paper appeared in 1981 (see [6]…mapping approach, the results are also valid for curvilinear elements. … 6. APPLICATIONS. In this section we will study the…which were performed with a computer program called PROBE [20], [22] developed by Noetic Technologies Corporation, St. Louis. We will consider a
Program for Generating Graphs and Charts
NASA Technical Reports Server (NTRS)
Ackerson, C. T.
1986-01-01
Office Automation Pilot (OAP) Graphics Database system offers IBM personal computer users assistance in producing a wide variety of graphs and charts, and a convenient database system, called the chart base, for creating and maintaining data associated with graphs and charts. Thirteen different graphics packages are available. Access to graphics capabilities is obtained in a similar manner: the user chooses creation, revision, or chart-base maintenance options from the initial menu, then enters or modifies data displayed on a graphic chart. The OAP graphics database system is written in Microsoft PASCAL.
Software design to calculate and simulate the mechanical response of electromechanical lifts
NASA Astrophysics Data System (ADS)
Herrera, I.; Romero, E.
2016-05-01
Lift engineers and lift companies which are involved in the design process of new products or in the research and development of improved components demand a predictive tool for the response of the slender lift system before testing expensive prototypes. A method for solving the movement of any specified lift system by means of a computer program is presented. The mechanical response of the lift operating in a user-defined installation and configuration, for a given excitation and other configuration parameters of real electric motors and their control system, is derived. A mechanical model with 6 degrees of freedom is used. The governing equations are integrated step by step with a Runge-Kutta algorithm on the MATLAB platform. Input data consist of the set-point speed for a standard trip and the control parameters of a number of controllers and lift drive machines. The computer program computes and plots with high accuracy the vertical displacement, velocity, instantaneous acceleration, and jerk time histories of the car, counterweight, frame, passengers/loads, and lift drive in a standard trip between any two floors of the desired installation. The resulting torque, rope tension, and deviation of the velocity plot with respect to the set-point speed are shown. The software design is implemented in a demo release of the computer program called ElevaCAD. Furthermore, the program offers the possibility to select the configuration of the lift system and the performance parameters of each component. In addition to the overall system response, detailed information on transients, vibrations of the lift components, ride quality levels, modal analysis, and the frequency spectrum (FFT) is plotted.
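ElevaCAD is proprietary and models six degrees of freedom; the heavily reduced sketch below integrates a single-degree-of-freedom car on an elastic rope whose upper end follows a trapezoidal speed profile, step by step with a Runge-Kutta scheme, to show the kind of displacement/velocity histories described. All parameter values are illustrative assumptions.

    # Reduced lift model (illustrative sketch, not ElevaCAD): a car on an elastic
    # rope whose drive end follows a trapezoidal speed profile, integrated with RK4.
    import numpy as np

    m, k, c = 1000.0, 2.0e5, 2.0e3             # car mass, rope stiffness, damping

    def drive_position(t, v_max=1.0, t_acc=1.5, t_total=6.0):
        # Trapezoidal velocity profile integrated to a position set point.
        ts = np.minimum(t, t_acc)
        pos = 0.5 * v_max / t_acc * ts ** 2
        pos += v_max * np.clip(t - t_acc, 0.0, t_total - 2 * t_acc)
        te = np.clip(t - (t_total - t_acc), 0.0, t_acc)
        pos += v_max * te - 0.5 * v_max / t_acc * te ** 2
        return pos

    def deriv(t, state):
        # x measured from static equilibrium, so gravity cancels the static stretch.
        x, v = state
        a = (k * (drive_position(t) - x) - c * v) / m
        return np.array([v, a])

    dt, t = 0.001, 0.0
    state = np.array([0.0, 0.0])
    history = []
    while t < 7.0:
        k1 = deriv(t, state)
        k2 = deriv(t + dt / 2, state + dt / 2 * k1)
        k3 = deriv(t + dt / 2, state + dt / 2 * k2)
        k4 = deriv(t + dt, state + dt * k3)
        state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        history.append((t, state[0], state[1]))   # time, displacement, velocity
        t += dt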
Application of a sensitivity analysis technique to high-order digital flight control systems
NASA Technical Reports Server (NTRS)
Paduano, James D.; Downing, David R.
1987-01-01
A sensitivity analysis technique for multiloop flight control systems is studied. This technique uses the scaled singular values of the return difference matrix as a measure of the relative stability of a control system. It then uses the gradients of these singular values with respect to system and controller parameters to judge sensitivity. The sensitivity analysis technique is first reviewed; then it is extended to include digital systems, through the derivation of singular-value gradient equations. Gradients with respect to parameters which do not appear explicitly as control-system matrix elements are also derived, so that high-order systems can be studied. A complete review of the integrated technique is given by way of a simple example: the inverted pendulum problem. The technique is then demonstrated on the X-29 control laws. Results show linear models of real systems can be analyzed by this sensitivity technique, if it is applied with care. A computer program called SVA was written to accomplish the singular-value sensitivity analysis techniques. Thus computational methods and considerations form an integral part of many of the discussions. A user's guide to the program is included. The SVA is a fully public domain program, running on the NASA/Dryden Elxsi computer.
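SVA itself is not public; the numpy sketch below shows the quantities the abstract describes: the minimum singular value of the return-difference matrix I + L(jw) swept over frequency, and a finite-difference gradient of that margin with respect to a controller gain. The 2x2 loop transfer matrix and the gain are hypothetical.

    # Sketch of the quantities involved (not the SVA program): minimum singular
    # value of the return-difference matrix I + L(jw) over frequency, and its
    # finite-difference gradient with respect to a controller gain.
    import numpy as np

    def loop_gain(w, kp):
        # Hypothetical 2x2 loop transfer matrix L(jw) for a simple plant/controller.
        s = 1j * w
        plant = np.array([[1.0 / (s + 1.0), 0.2 / (s + 2.0)],
                          [0.1 / (s + 3.0), 1.0 / (s + 0.5)]])
        return plant @ np.diag([kp, kp])

    def min_sv_return_difference(kp, freqs):
        return np.array([np.linalg.svd(np.eye(2) + loop_gain(w, kp),
                                       compute_uv=False).min() for w in freqs])

    freqs = np.logspace(-2, 2, 200)
    kp = 2.0
    sv = min_sv_return_difference(kp, freqs)
    eps = 1e-4
    grad = (min_sv_return_difference(kp + eps, freqs).min() - sv.min()) / eps
    print("worst-case min singular value:", sv.min(), "sensitivity to kp:", grad)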
NASA Astrophysics Data System (ADS)
Ehmann, Andreas F.; Downie, J. Stephen
2005-09-01
The objective of the International Music Information Retrieval Systems Evaluation Laboratory (IMIRSEL) project is the creation of a large, secure corpus of audio and symbolic music data accessible to the music information retrieval (MIR) community for the testing and evaluation of various MIR techniques. As part of the IMIRSEL project, a cross-platform Java-based visual programming environment called Music to Knowledge (M2K) is being developed for a variety of music information retrieval related tasks. The primary objective of M2K is to supply the MIR community with a toolset that provides the ability to rapidly prototype algorithms, as well as foster the sharing of techniques within the MIR community through the use of a standardized set of tools. Due to the relatively large size of audio data and the computational costs associated with some digital signal processing and machine learning techniques, M2K is also designed to support distributed computing across computing clusters. In addition, facilities to allow the integration of non-Java-based (e.g., C/C++, MATLAB, etc.) algorithms and programs are provided within M2K. [Work supported by the Andrew W. Mellon Foundation and NSF Grants No. IIS-0340597 and No. IIS-0327371.]
Prudic, David E.
1989-01-01
Computer models are widely used to simulate groundwater flow for evaluating and managing the groundwater resource of many aquifers, but few are designed to also account for surface flow in streams. A computer program was written for use in the US Geological Survey modular finite difference groundwater flow model to account for the amount of flow in streams and to simulate the interaction between surface streams and groundwater. The new program is called the Streamflow-Routing Package. The Streamflow-Routing Package is not a true surface water flow model, but rather is an accounting program that tracks the flow in one or more streams which interact with groundwater. The program limits the amount of groundwater recharge to the available streamflow. It permits two or more streams to merge into one with flow in the merged stream equal to the sum of the tributary flows. The program also permits diversions from streams. The groundwater flow model with the Streamflow-Routing Package has an advantage over the analytical solution in simulating the interaction between aquifer and stream because it can be used to simulate complex systems that cannot be readily solved analytically. The Streamflow-Routing Package does not include a time function for streamflow but rather streamflow entering the modeled area is assumed to be instantly available to downstream reaches during each time period. This assumption is generally reasonable because of the relatively slow rate of groundwater flow. Another assumption is that leakage between streams and aquifers is instantaneous. This assumption may not be reasonable if the streams and aquifers are separated by a thick unsaturated zone. Documentation of the Streamflow-Routing Package includes data input instructions; flow charts, narratives, and listings of the computer program for each of four modules; and input data sets and printed results for two test problems, and one example problem. (Lantz-PTT)
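The accounting logic described (merge tributaries, allow diversions, and limit leakage to the water actually flowing in the reach) can be illustrated in a few lines. The sketch below is a simplified standalone loop in that spirit, not the Streamflow-Routing Package itself; reach parameters are hypothetical.

    # Simplified streamflow accounting in the spirit of the description (not the
    # MODFLOW package itself): flow is tracked reach by reach, tributary inflows
    # are added, diversions subtracted, and leakage to the aquifer is limited to
    # the water available in the reach.
    def route_streamflow(reaches, inflow):
        """reaches: list of dicts with 'leakage_demand', 'tributary', 'diversion'."""
        flow = inflow
        results = []
        for r in reaches:
            flow += r.get("tributary", 0.0)           # merge tributary flow
            flow -= min(r.get("diversion", 0.0), flow)
            leakage = min(r["leakage_demand"], flow)  # recharge limited by streamflow
            flow -= leakage
            results.append({"outflow": flow, "leakage": leakage})
        return results

    reaches = [
        {"leakage_demand": 2.0},
        {"leakage_demand": 5.0, "tributary": 3.0},
        {"leakage_demand": 4.0, "diversion": 1.0},
    ]
    for i, r in enumerate(route_streamflow(reaches, inflow=6.0), 1):
        print(f"reach {i}: outflow={r['outflow']:.1f}, leakage={r['leakage']:.1f}")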
NASA Technical Reports Server (NTRS)
Rule, W. K.; Hayashida, K. B.
1992-01-01
The development of a computer program to predict the degradation of the insulating capabilities of the multilayer insulation (MLI) blanket of Space Station Freedom due to a hypervelocity impact with a space debris particle is described. A finite difference scheme is used for the calculations. The computer program was written in Microsoft BASIC. Also described is a test program that was undertaken to validate the numerical model. Twelve MLI specimens were impacted at hypervelocities with simulated debris particles using a light gas gun at Marshall Space Flight Center. The impact-damaged MLI specimens were then tested for insulating capability in the space environment of the Sunspot thermal vacuum chamber at MSFC. Two undamaged MLI specimens were also tested for comparison with the test results of the damaged specimens. The numerical model was found to adequately predict behavior of the MLI specimens in the Sunspot chamber. A parameter, called diameter ratio, was developed to relate the nominal MLI impact damage to the apparent (for thermal analysis purposes) impact damage based on the hypervelocity impact conditions of a specimen.
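The original model was a finite-difference code written in Microsoft BASIC and is not reproduced here; the short sketch below steps the temperatures of a stack of MLI layers coupled by radiative exchange, to show the type of explicit layer-by-layer update such a thermal model uses. Layer count, emissivity, heat capacity, and boundary temperatures are illustrative assumptions.

    # Explicit layer-by-layer update of MLI layer temperatures coupled by
    # radiative exchange (illustrative sketch only, not the MSFC code).
    import numpy as np

    sigma = 5.67e-8                 # Stefan-Boltzmann constant, W/m^2 K^4
    n_layers = 10
    eps = 0.05                      # emissivity of a layer surface
    e_eff = eps / (2.0 - eps)       # parallel-plate effective emissivity
    cap = 50.0                      # areal heat capacity of a layer, J/m^2 K
    T = np.linspace(300.0, 100.0, n_layers)   # hot outer layer, cold inner layer
    dt = 1.0

    for _ in range(20000):
        q = e_eff * sigma * (T[:-1] ** 4 - T[1:] ** 4)   # flux between neighbors
        dT = np.zeros_like(T)
        dT[1:] += q / cap           # each gap heats the colder (inner) layer
        dT[:-1] -= q / cap          # and cools the hotter (outer) layer
        T += dt * dT
        T[0], T[-1] = 300.0, 100.0  # boundary layers held at fixed temperatures

    print("interior layer temperatures (K):", np.round(T, 1))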
The Navy/NASA Engine Program (NNEP89): A user's manual
NASA Technical Reports Server (NTRS)
Plencner, Robert M.; Snyder, Christopher A.
1991-01-01
An engine simulation computer code called NNEP89 was written to perform 1-D steady-state thermodynamic analysis of turbine engine cycles. By using a very flexible method of input, a set of standard components is connected at execution time to simulate almost any turbine engine configuration that the user could imagine. The code was used to simulate a wide range of engine cycles, from turboshafts and turboprops to air turborockets and supersonic cruise variable cycle engines. Off-design performance is calculated through the use of component performance maps. A chemical equilibrium model is incorporated to adequately predict chemical dissociation as well as model virtually any fuel. NNEP89 is written in standard FORTRAN77 with clear structured programming and extensive internal documentation. The standard FORTRAN77 programming allows it to be installed onto most mainframe computers and workstations without modification. The NNEP89 code was derived from the Navy/NASA Engine Program (NNEP). NNEP89 provides many improvements and enhancements to the original NNEP code and incorporates features which make it easier to use for the novice user. This is a comprehensive user's guide for the NNEP89 code.
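NNEP89 is a FORTRAN77 code that links standard component models at run time; the toy Python sketch below chains simplified compressor, burner, and turbine functions into a design-point cycle calculation to illustrate the component-linking idea. The component equations, efficiencies, and operating point are textbook-style assumptions, not the NNEP89 model.

    # Toy component-chaining sketch in the spirit of a 1-D cycle code (not NNEP89):
    # each component maps inlet total temperature/pressure to outlet conditions.
    cp, gamma, fuel_lhv = 1005.0, 1.4, 43.0e6   # J/kg K, -, J/kg

    def compressor(T, P, pr, eff=0.85):
        T_out = T * (1.0 + (pr ** ((gamma - 1) / gamma) - 1.0) / eff)
        return T_out, P * pr, cp * (T_out - T)          # work per kg of air

    def burner(T, P, T_exit):
        fuel_air = cp * (T_exit - T) / fuel_lhv         # simple energy balance
        return T_exit, P * 0.97, fuel_air               # 3% pressure loss

    def turbine(T, P, work_required, eff=0.90):
        dT = work_required / cp
        pr = (1.0 - dT / (T * eff)) ** (gamma / (gamma - 1))
        return T - dT, P * pr

    T0, P0 = 288.15, 101325.0
    T3, P3, comp_work = compressor(T0, P0, pr=12.0)
    T4, P4, far = burner(T3, P3, T_exit=1500.0)
    T5, P5 = turbine(T4, P4, work_required=comp_work)
    print(f"fuel/air ratio {far:.4f}, turbine exit {T5:.0f} K, {P5 / 1000:.0f} kPa")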
Value-Range Analysis of C Programs
NASA Astrophysics Data System (ADS)
Simon, Axel
In 1988, Robert T. Morris exploited a so-called buffer-overflow bug in finger (a dæmon whose job it is to return information on local users) to mount a denial-of-service attack on hundreds of VAX and Sun-3 computers [159]. He created what is nowadays called a worm; that is, a crafted stream of bytes that, when sent to a computer over the network, utilises a buffer-overflow bug in the software of that computer to execute code encoded in the byte stream. In the case of a worm, this code will send the very same byte stream to other computers on the network, thereby creating an avalanche of network traffic that ultimately renders the network and all computers involved in replicating the worm inaccessible. Besides duplicating themselves, worms can alter data on the host that they are running on. The most famous example in recent years was the MSBlaster32 worm, which altered the configuration database on many Microsoft Windows machines, thereby forcing the computers to reboot incessantly. Although this worm was rather benign, it caused huge damage to businesses who were unable to use their IT infrastructure for hours or even days after the appearance of the worm. A more malicious worm is certainly conceivable [187] due to the fact that worms are executed as part of a dæmon (also known as "service" on Windows machines) and thereby run at a privileged level, allowing access to any data stored on the remote computer. While the deletion of data presents a looming threat to valuable information, even more serious uses are espionage and theft, in particular because worms do not have to affect the running system and hence may be impossible to detect.
Janssen, Stefan; Schudoma, Christian; Steger, Gerhard; Giegerich, Robert
2011-11-03
Many bioinformatics tools for RNA secondary structure analysis are based on a thermodynamic model of RNA folding. They predict a single, "optimal" structure by free energy minimization, they enumerate near-optimal structures, they compute base pair probabilities and dot plots, representative structures of different abstract shapes, or Boltzmann probabilities of structures and shapes. Although all programs refer to the same physical model, they implement it with considerable variation for different tasks, and little is known about the effects of heuristic assumptions and model simplifications used by the programs on the outcome of the analysis. We extract four different models of the thermodynamic folding space which underlie the programs RNAFOLD, RNASHAPES, and RNASUBOPT. Their differences lie within the details of the energy model and the granularity of the folding space. We implement probabilistic shape analysis for all models, and introduce the shape probability shift as a robust measure of model similarity. Using four data sets derived from experimentally solved structures, we provide a quantitative evaluation of the model differences. We find that search space granularity affects the computed shape probabilities less than the over- or underapproximation of free energy by a simplified energy model. Still, the approximations perform similar enough to implementations of the full model to justify their continued use in settings where computational constraints call for simpler algorithms. On the side, we observe that the rarely used level 2 shapes, which predict the complete arrangement of helices, multiloops, internal loops and bulges, include the "true" shape in a rather small number of predicted high probability shapes. This calls for an investigation of new strategies to extract high probability members from the (very large) level 2 shape space of an RNA sequence. We provide implementations of all four models, written in a declarative style that makes them easy to be modified. Based on our study, future work on thermodynamic RNA folding may make a choice of model based on our empirical data. It can take our implementations as a starting point for further program development.
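Shape probabilities aggregate the Boltzmann probabilities of all structures that map to the same abstract shape. The tiny sketch below shows that aggregation with hypothetical structures, energies, and shape strings; real tools enumerate the full folding space, which this does not attempt.

    # Tiny illustration of shape probabilities (hypothetical energies, not a real
    # folding space): Boltzmann-weight every structure, then sum the weights of
    # structures that map to the same abstract shape.
    import math
    from collections import defaultdict

    RT = 0.6163   # kcal/mol at 37 degrees C

    # (structure, free energy in kcal/mol, abstract shape) -- illustrative values.
    structures = [
        ("((((...))))........", -4.2, "[]"),
        ("(((....)))((....)).", -3.9, "[][]"),
        ("((((....)))).((..))", -3.1, "[][]"),
        (".((((......))))....", -2.0, "[]"),
    ]

    weights = defaultdict(float)
    Z = 0.0
    for _, energy, shape in structures:
        w = math.exp(-energy / RT)
        weights[shape] += w
        Z += w

    for shape, w in sorted(weights.items(), key=lambda kv: -kv[1]):
        print(f"shape {shape:6s} probability {w / Z:.3f}")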
US GeoData: Digital cartographic and geographic data
1985-01-01
The increasing use of computers for storing and analyzing earth science information has sparked a growth in the demand for various types of cartographic data in digital form. The production of map data in computerized form is called digital cartography, and it involves the collection, storage, processing, analysis, and display of map data with the aid of computers. The U.S. Geological Survey, the Nation's largest earth science research agency, has expanded its national mapping program to incorporate operations associated with digital cartography, including the collection of planimetric, elevation, and geographic names information in digital form. This digital information is available for use in meeting the multipurpose needs and applications of the map user community.