Gschwind, Michael K
2013-04-16
Mechanisms for generating and executing programs for a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA) are provided. A computer program product comprising a computer recordable medium having a computer readable program recorded thereon is provided. The computer readable program, when executed on a computing device, causes the computing device to receive one or more instructions and execute the one or more instructions using logic in an execution unit of the computing device. The logic implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA), based on data stored in a vector register file of the computing device. The vector register file is configured to store both scalar and floating point values as vectors having a plurality of vector elements.
Ma, Li; Runesha, H Birali; Dvorkin, Daniel; Garbe, John R; Da, Yang
2008-01-01
Background Genome-wide association studies (GWAS) using single nucleotide polymorphism (SNP) markers provide opportunities to detect epistatic SNPs associated with quantitative traits and to detect the exact mode of an epistasis effect. Computational difficulty is the main bottleneck for epistasis testing in large scale GWAS. Results The EPISNPmpi and EPISNP computer programs were developed for testing single-locus and epistatic SNP effects on quantitative traits in GWAS, including tests of three single-locus effects for each SNP (SNP genotypic effect, additive and dominance effects) and five epistasis effects for each pair of SNPs (two-locus interaction, additive × additive, additive × dominance, dominance × additive, and dominance × dominance) based on the extended Kempthorne model. EPISNPmpi is the parallel computing program for epistasis testing in large scale GWAS and achieved excellent scalability for large scale analysis and portability for various parallel computing platforms. EPISNP is the serial computing program based on the EPISNPmpi code for epistasis testing in small scale GWAS using commonly available operating systems and computer hardware. Three serial computing utility programs were developed for graphical viewing of test results and epistasis networks, and for estimating CPU time and disk space requirements. Conclusion The EPISNPmpi parallel computing program provides an effective computing tool for epistasis testing in large scale GWAS, and the epiSNP serial computing programs are convenient tools for epistasis analysis in small scale GWAS using commonly available computer hardware. PMID:18644146
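As an illustration of how the four 1-degree-of-freedom epistasis terms arise (a sketch using equal-weight orthogonal contrasts, not the frequency-weighted extended Kempthorne model itself): with single-locus additive and dominance contrasts a = (-1, 0, 1) and d = (1, -2, 1) over the genotype means (AA, Aa, aa), the epistasis contrasts over the nine two-locus genotype means are the Kronecker products

\[ c_{a\times a} = a \otimes a, \quad c_{a\times d} = a \otimes d, \quad c_{d\times a} = d \otimes a, \quad c_{d\times d} = d \otimes d, \]

which together span the 4-degree-of-freedom two-locus interaction.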
Encouraging more women into computer science: Initiating a single-sex intervention program in Sweden
NASA Astrophysics Data System (ADS)
Brandell, Gerd; Carlsson, Svante; Ekblom, Håkan; Nord, Ann-Charlotte
1997-11-01
The process of starting a new program in computer science and engineering, heavily based on applied mathematics and only open to women, is described in this paper. The program was introduced into an educational system without any tradition in single-sex education. Important observations made during the process included the considerable interest in mathematics and curiosity about computer science found among female students at the secondary school level, and the acceptance of the single-sex program by the staff, administration, and management of the university as well as among male and female students. The process described highlights the importance of preparing the environment for a totally new type of educational program.
A real-time digital computer program for the simulation of a single rotor helicopter
NASA Technical Reports Server (NTRS)
Houck, J. A.; Gibson, L. H.; Steinmetz, G. G.
1974-01-01
A computer program was developed for the study of a single-rotor helicopter on the Langley Research Center real-time digital simulation system. Descriptions of helicopter equations and data, program subroutines (including flow charts and listings), real-time simulation system routines, and program operation are included. Program usage is illustrated by standard check cases and a representative flight case.
Computing single step operators of logic programming in radial basis function neural networks
NASA Astrophysics Data System (ADS)
Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong
2014-07-01
Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (Tp: I → I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operator). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
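A minimal sketch in C of the single step (immediate consequence) operator described above, for a tiny propositional normal program; the program, the bitmask encoding, and the fixed-point loop are illustrative assumptions (for normal programs with negation the iteration is not guaranteed to converge in general):

    #include <stdio.h>

    /* One clause of a normal logic program: head <- pos body atoms, not neg body atoms.
       Atoms are numbered 0..31; an interpretation I is a bitmask over atoms. */
    typedef struct { int head; unsigned pos, neg; } Clause;

    /* Single step operator Tp: head is in Tp(I) iff some clause fires under I. */
    unsigned tp(const Clause *p, int n, unsigned I) {
        unsigned next = 0;
        for (int i = 0; i < n; i++)
            if ((p[i].pos & I) == p[i].pos && (p[i].neg & I) == 0)
                next |= 1u << p[i].head;
        return next;
    }

    int main(void) {
        /* Hypothetical program: a.  b <- a.  c <- b, not d.  (a=0, b=1, c=2, d=3) */
        Clause p[] = { {0, 0, 0}, {1, 1u << 0, 0}, {2, 1u << 1, 1u << 3} };
        unsigned I = 0, next;
        while ((next = tp(p, 3, I)) != I)   /* iterate to the fixed point */
            I = next;
        printf("fixed point: 0x%x\n", I);   /* {a, b, c} -> 0x7 */
        return 0;
    }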
Adolescents' Chunking of Computer Programs.
ERIC Educational Resources Information Center
Magliaro, Susan; Burton, John K.
To investigate what children learn during computer programming instruction, students attending a summer computer camp were asked to recall either single lines or chunks of computer programs from either coherent or scrambled programs. The 16 subjects, ages 12 to 17, were divided into three instructional groups: (1) beginners, who were taught to…
ERIC Educational Resources Information Center
Brandell, Gerd; Carlsson, Svante; Ekblom, Hakan; Nord, Ann-Charlotte
1997-01-01
Describes the process of starting a new program in computer science and engineering that is heavily based on applied mathematics and only open to women. Emphasizes that success requires considerable interest in mathematics and curiosity about computer science among female students at the secondary level and the acceptance of the single-sex program…
NASA Technical Reports Server (NTRS)
Bendura, R. J.; Renfroe, P. G.
1974-01-01
A detailed discussion of the application of a previously developed method to determine vehicle flight attitude using a single camera onboard the vehicle is presented, with emphasis on the digital computer program format and data reduction techniques. Application requirements include film and earth-related coordinates of at least two landmarks (or features), location of the flight vehicle with respect to the earth, and camera characteristics. Included in this report are a detailed discussion of the program input and output format, a computer program listing, a discussion of modifications made to the initial method, a step-by-step basic data reduction procedure, and several example applications. The computer program is written in FORTRAN 4 language for the Control Data 6000 series digital computer.
ERIC Educational Resources Information Center
Shacham, Mordechai; Cutlip, Michael B.; Brauner, Neima
2009-01-01
A continuing challenge to the undergraduate chemical engineering curriculum is the time-effective incorporation and use of computer-based tools throughout the educational program. Computing skills in academia and industry require some proficiency in programming and effective use of software packages for solving 1) single-model, single-algorithm…
Calculation of cosmic ray induced single event upsets: Program CRUP (Cosmic Ray Upset Program)
NASA Astrophysics Data System (ADS)
Shapiro, P.
1983-09-01
This report documents PROGRAM CRUP, COSMIC RAY UPSET PROGRAM. The computer program calculates cosmic ray induced single-event error rates in microelectronic circuits exposed to several representative cosmic-ray environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shipman, Galen M.
These are the slides for a presentation on programming models in HPC, at the Los Alamos National Laboratory's Parallel Computing Summer School. The following topics are covered: Flynn's Taxonomy of computer architectures; single instruction single data; single instruction multiple data; multiple instruction multiple data; address space organization; definition of Trinity (Intel Xeon-Phi is a MIMD architecture); single program multiple data; multiple program multiple data; ExMatEx workflow overview; definition of a programming model, programming languages, runtime systems; programming model and environments; MPI (Message Passing Interface); OpenMP; Kokkos (Performance Portable Thread-Parallel Programming Model); Kokkos abstractions, patterns, policies, and spaces; RAJA, a systematic approach to node-level portability and tuning; overview of the Legion Programming Model; mapping tasks and data to hardware resources; interoperability: supporting task-level models; Legion S3D execution and performance details; workflow, integration of external resources into the programming model.
DOT National Transportation Integrated Search
1975-12-01
Frequency domain computer programs developed or acquired by TSC for the analysis of rail vehicle dynamics are described in two volumes. Volume I defines the general analytical capabilities required for computer programs applicable to single rail vehi...
Computer program for single input-output, single-loop feedback systems
NASA Technical Reports Server (NTRS)
1976-01-01
Additional work is reported on a completely automatic computer program for the design of single input/output, single loop feedback systems with parameter uncertainty, to satisfy time domain bounds on the system response to step commands and disturbances. The inputs to the program are basically the specified time-domain response bounds, the form of the constrained plant transfer function and the ranges of the uncertain parameters of the plant. The program output consists of the transfer functions of the two free compensation networks, in the form of the coefficients of the numerator and denominator polynomials, and the data on the prescribed bounds and the extremes actually obtained for the system response to commands and disturbances.
Flexible Animation Computer Program
NASA Technical Reports Server (NTRS)
Stallcup, Scott S.
1990-01-01
FLEXAN (Flexible Animation), computer program animating structural dynamics on Evans and Sutherland PS300-series graphics workstation with VAX/VMS host computer. Typical application is animation of spacecraft undergoing structural stresses caused by thermal and vibrational effects. Displays distortions in shape of spacecraft. Program displays single natural mode of vibration, mode history, or any general deformation of flexible structure. Written in FORTRAN 77.
Single-Sex Computer Classes: An Effective Alternative.
ERIC Educational Resources Information Center
Swain, Sandra L.; Harvey, Douglas M.
2002-01-01
Advocates single-sex computer instruction as a temporary alternative educational program to provide middle school and secondary school girls with access to computers, to present girls with opportunities to develop positive attitudes towards technology, and to make available a learning environment conducive to girls gaining technological skills.…
Single-node orbit analysis with radiation heat transfer only
NASA Technical Reports Server (NTRS)
Peoples, J. A.
1977-01-01
The steady-state temperature of a single node which dissipates energy by radiation only is discussed for a nontime varying thermal environment. Relationships are developed to illustrate how shields can be utilized to represent a louver system. A computer program is presented which can assess periodic temperature characteristics of a single node in a time varying thermal environment having energy dissipation by radiation only. The computer program performs thermal orbital analysis for five combinations of plate, shields, and louvers.
Programs for skyline planning.
Ward W. Carson
1975-01-01
This paper describes four computer programs for the logging engineer's use in planning log harvesting by skyline systems. One program prepares terrain profile plots from maps mounted on a digitizer; the other programs prepare load-carrying capability and other information for single and multispan standing skylines and single span running skylines. In general, the...
Mount, D W; Conrad, B
1986-01-01
We have previously described programs for a variety of types of sequence analysis (1-4). These programs have now been integrated into a single package. They are written in the standard C programming language and run on virtually any computer system with a C compiler, such as the IBM/PC and other computers running under the MS/DOS and UNIX operating systems. The programs are widely distributed and may be obtained from the authors as described below. PMID:3753780
Partitioning problems in parallel, pipelined and distributed computing
NASA Technical Reports Server (NTRS)
Bokhari, S.
1985-01-01
The problem of optimally assigning the modules of a parallel program over the processors of a multiple computer system is addressed. A Sum-Bottleneck path algorithm is developed that permits the efficient solution of many variants of this problem under some constraints on the structure of the partitions. In particular, the following problems are solved optimally for a single-host, multiple satellite system: partitioning multiple chain structured parallel programs, multiple arbitrarily structured serial programs and single tree structured parallel programs. In addition, the problems of partitioning chain structured parallel programs across chain connected systems and across shared memory (or shared bus) systems are also solved under certain constraints. All solutions for parallel programs are equally applicable to pipelined programs. These results extend prior research in this area by explicitly taking concurrency into account and permit the efficient utilization of multiple computer architectures for a wide range of problems of practical interest.
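For intuition, a minimal sketch in C of the simplest of these variants: cutting a chain of module weights into contiguous blocks, one per processor in a chain-connected system, so as to minimize the bottleneck (largest) load. This toy dynamic program is not the Sum-Bottleneck path algorithm itself, which handles the host/satellite and tree cases on a layered assignment graph, but it shows the objective being optimized:

    #include <stdio.h>

    #define N 8   /* modules in the chain */
    #define K 3   /* processors */

    int main(void) {
        double w[N] = {4, 2, 7, 1, 3, 5, 6, 2}, pre[N + 1] = {0};
        double dp[K + 1][N + 1];   /* dp[k][j]: best bottleneck, modules 1..j on k processors */
        for (int i = 0; i < N; i++) pre[i + 1] = pre[i] + w[i];
        for (int j = 1; j <= N; j++) dp[1][j] = pre[j];          /* one block takes everything */
        for (int k = 2; k <= K; k++)
            for (int j = k; j <= N; j++) {
                dp[k][j] = 1e30;
                for (int m = k - 1; m < j; m++) {                /* position of the last cut */
                    double load = pre[j] - pre[m];
                    double b = dp[k - 1][m] > load ? dp[k - 1][m] : load;
                    if (b < dp[k][j]) dp[k][j] = b;
                }
            }
        printf("minimum bottleneck load: %g\n", dp[K][N]);       /* 13 for this example */
        return 0;
    }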
Computing Spacecraft Solar-Cell Damage by Charged Particles
NASA Technical Reports Server (NTRS)
Gaddy, Edward M.
2006-01-01
General EQFlux is a computer program that converts the measure of the damage done to solar cells in outer space by impingement of electrons and protons having many different kinetic energies into the measure of the damage done by an equivalent fluence of electrons, each having kinetic energy of 1 MeV. Prior to the development of General EQFlux, there was no single computer program offering this capability: For a given type of solar cell, it was necessary to either perform the calculations manually or to use one of three Fortran programs, each of which was applicable to only one type of solar cell. The problem in developing General EQFlux was to rewrite and combine the three programs into a single program that could perform the calculations for three types of solar cells and run in a Windows environment with a Windows graphical user interface. In comparison with the three prior programs, General EQFlux is easier to use.
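Schematically, the conversion has the standard equivalent-fluence form (a sketch; the relative damage coefficients D are tabulated per cell type and particle species):

\[ \Phi_{eq}^{1\,\mathrm{MeV}} = \sum_i \phi(E_i)\,\frac{D(E_i)}{D(1\,\mathrm{MeV}\ e^-)}, \]

where \phi(E_i) is the incident fluence in energy bin i, so that electrons and protons of many energies collapse to a single damage-equivalent 1 MeV electron fluence.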
A computer program for the design and analysis of low-speed airfoils, supplement
NASA Technical Reports Server (NTRS)
Eppler, R.; Somers, D. M.
1980-01-01
Three new options were incorporated into an existing computer program for the design and analysis of low speed airfoils. These options permit the analysis of airfoils having variable chord (variable geometry), a boundary layer displacement iteration, and the analysis of the effect of single roughness elements. All three options are described in detail and are included in the FORTRAN IV computer program.
Confidence Region for the Evaluation of HF DF Single Site Location Systems.
1983-09-02
Authors: M. H. Reilly and J. Coran, Naval Research Laboratory. Contents: determination of the confidence region; computer program for the confidence ellipse; examples of computer program output; discussion; acknowledgments.
NASA Technical Reports Server (NTRS)
Knauber, R. N.
1982-01-01
A FORTRAN coded computer program which computes the capture transient of a launch vehicle upper stage at the ignition and/or separation event is presented. It is for a single degree-of-freedom on-off reaction jet attitude control system. The Monte Carlo method is used to determine the statistical value of key parameters at the outcome of the event. Aerodynamic and booster induced disturbances, vehicle and control system characteristics, and initial conditions are treated as random variables. By appropriate selection of input data, pitch, yaw, and roll axes can be analyzed. Transient response of a single deterministic case can be computed. The program is currently set up on a CDC CYBER 175 computer system but is compatible with ANSI FORTRAN computer language. This routine has been used over the past fifteen (15) years for the SCOUT Launch Vehicle and has been run on RECOMP III, IBM 7090, IBM 360/370, CDC6600 and CDC CYBER 175 computers with little modification.
Plasmid mapping computer program.
Nolan, G P; Maina, C V; Szalay, A A
1984-01-01
Three new computer algorithms are described which rapidly order the restriction fragments of a plasmid DNA which has been cleaved with two restriction endonucleases in single and double digestions. Two of the algorithms are contained within a single computer program (called MPCIRC). The Rule-Oriented algorithm constructs all logical circular map solutions within sixty seconds (14 double-digestion fragments) when used in conjunction with the Permutation method. The program is written in Apple Pascal and runs on an Apple II Plus Microcomputer with 64K of memory. A third algorithm is described which rapidly maps double digests and uses the above two algorithms as adducts. Modifications of the algorithms for linear mapping are also presented. PMID:6320105
Skylab S-191 spectrometer single spectral scan analysis program. [user manual
NASA Technical Reports Server (NTRS)
Downes, E. L.
1974-01-01
Documentation and user information for the S-191 single spectral scan analysis program are reported. A breakdown of the computational algorithms is supplied, followed by the program listing and examples of sample output. A copy of the flow chart which describes the driver routine in the body of the main program segment is included.
NASA Technical Reports Server (NTRS)
Huffman, S.
1977-01-01
Detailed instructions on the use of two computer-aided-design programs for designing the energy storage inductor for single winding and two winding dc to dc converters are provided. Step by step procedures are given to illustrate the formatting of user input data. The procedures are illustrated by eight sample design problems which include the user input and the computer program output.
Digital computer programs for generating oblique orthographic projections and contour plots
NASA Technical Reports Server (NTRS)
Giles, G. L.
1975-01-01
User and programmer documentation is presented for two programs for automatic plotting of digital data. One of the programs generates oblique orthographic projections of three-dimensional numerical models and the other program generates contour plots of data distributed in an arbitrary planar region. A general description of the computational algorithms, user instructions, and complete listings of the programs are given. Several plots are included to illustrate various program options, and a single example is described to facilitate learning the use of the programs.
NASA Technical Reports Server (NTRS)
Sforzini, R. H.
1972-01-01
An analysis and a computer program are presented which represent a compromise between the more sophisticated programs using precise burning geometric relations and the textbook type of solutions. The program requires approximately 900 computer cards, including a set of 20 input data cards required for a typical problem. The computer operating time for a single configuration is approximately 1 minute and 30 seconds on the IBM 360 computer. About 1 minute and 15 seconds of the time is compilation time, so that additional configurations input at the same time require approximately 15 seconds each. The program uses approximately 11,000 words on the IBM 360. The program is written in FORTRAN 4 and is readily adaptable for use on a number of different computers: IBM 7044, IBM 7094, and Univac 1108.
Integration of Major Computer Program Packages into Experimental Courses: A Freshman Experience.
ERIC Educational Resources Information Center
Lipschitz, Irving
1981-01-01
Describes the use of the Gaussian 70 computer programs to carry out quantum chemical calculations, including single calculations, geometry optimization, and potential surface scans. Includes a summary of student activities and benefits for students in an honors freshman chemistry course. (SK)
ERIC Educational Resources Information Center
Baird, Irene C.; Towns, Kathryn
PROBE (Potential Reentry Opportunities in Business and Education), a program conducted in Harrisburg and Lebanon, Pennsylvania, incorporated technological training with effective communication skills preparation for single female welfare parents. Goals of the program were to provide 20 single-parent welfare women with marketable computer and…
NASA Technical Reports Server (NTRS)
Rich, R. P.
1970-01-01
The documentation problem that arises in getting all the many items included in a computer program prepared in a timely fashion, and in keeping them all correct and mutually consistent during the life of the program, is discussed. The proposed approach to the problem is to collect all the necessary information into a single document, which is maintained with computer assistance during the life of the program and from which the required subdocuments can be extracted as desired. Implementation of this approach requires a package of programs for computer editorial assistance and is facilitated by certain programming practices that are discussed.
TCP/IP Interface for the Satellite Orbit Analysis Program (SOAP)
NASA Technical Reports Server (NTRS)
Carnright, Robert; Stodden, David; Coggi, John
2009-01-01
The Transmission Control Protocol/Internet Protocol (TCP/IP) interface for the Satellite Orbit Analysis Program (SOAP) provides the means for the software to establish real-time interfaces with other software. Such interfaces can operate between two programs, either on the same computer or on different computers joined by a network. The SOAP TCP/IP module employs a client/server interface where SOAP is the server and other applications can be clients. Real-time interfaces between software offer a number of advantages over embedding all of the common functionality within a single program. One advantage is that they allow the programs to divide the computational labor between processors or computers running the separate applications. Secondly, each program can provide its own domain of expertise, with other programs able to use this expertise.
ERIC Educational Resources Information Center
Lichten, William
A three-part program investigated the use of computers at an inner-city high school. An attempt was made to introduce a digital computer for instructional purposes at the high school. A single portable teletype terminal and a simple programing language, BASIC, were used. It was found that a wide variety of students could benefit from this…
Building Blocks. An Annotated Bibliography for Single Parent Programming.
ERIC Educational Resources Information Center
Wiley-Thomas, Cheryl, Comp.; Norden, Tamara, Ed.
This booklet lists 645 books, articles, curriculum materials, computer software, and videos that educational professionals can use to develop programs for single parents (especially teen parents). Many of the listings are annotated; all contain information on author, title, publisher name and city, and date of publication or production. The…
Research on Electrically Driven Single Photon Emitter by Diamond for Quantum Cryptography
2015-03-24
Contract FA2386-14-1-4037; Grant 14IOA093_144037. Diamond has emerged as a highly competitive platform for applications in quantum cryptography, quantum computing, spintronics, and sensing or metrology. Subject terms: diamond LED, nitrogen vacancy complex, quantum computing, quantum cryptography, single spin single photon.
Computational techniques for solar wind flows past terrestrial planets: Theory and computer programs
NASA Technical Reports Server (NTRS)
Stahara, S. S.; Chaussee, D. S.; Trudinger, B. C.; Spreiter, J. R.
1977-01-01
The interaction of the solar wind with terrestrial planets can be predicted using a computer program based on a single fluid, steady, dissipationless, magnetohydrodynamic model to calculate the axisymmetric, supersonic, super-Alfvenic solar wind flow past both magnetic and nonmagnetic planets. The actual calculations are implemented by an assemblage of computer codes organized into one program. These include finite difference codes which determine the gas-dynamic solution, together with a variety of special purpose output codes for determining and automatically plotting both flow field and magnetic field results. Comparisons are made with previous results, and results are presented for a number of solar wind flows. The computational programs developed are documented and are presented in a general user's manual which is included.
Microgravity computing codes. User's guide
NASA Astrophysics Data System (ADS)
1982-01-01
Codes used in microgravity experiments to compute fluid parameters and to obtain data graphically are introduced. The computer programs are stored on two diskettes, compatible with the floppy disk drives of the Apple 2. Two versions of both disks are available (DOS-2 and DOS-3). The codes are written in BASIC and are structured as interactive programs. Interaction takes place through the keyboard of any Apple 2-48K standard system with single floppy disk drive. The programs are protected against wrong commands given by the operator. The programs are described step by step in the same order as the instructions displayed on the monitor. Most of these instructions are shown, with samples of computation and of graphics.
ERIC Educational Resources Information Center
Lent, John
1984-01-01
This article describes a computer network system that connects several microcomputers to a single disk drive and one copy of software. Many schools are switching to networks as a cheaper and more efficient means of computer instruction. Teachers may be faced with copyright problems when reproducing programs. (DF)
Computing tools for implementing standards for single-case designs.
Chen, Li-Ting; Peng, Chao-Ying Joanne; Chen, Ming-E
2015-11-01
In the single-case design (SCD) literature, five sets of standards have been formulated and distinguished: design standards, assessment standards, analysis standards, reporting standards, and research synthesis standards. This article reviews computing tools that can assist researchers and practitioners in meeting the analysis standards recommended by the What Works Clearinghouse Procedures and Standards Handbook (the WWC standards). These tools consist of specialized web-based calculators or downloadable software for SCD data, and algorithms or programs written in Excel, SAS procedures, SPSS commands/Macros, or the R programming language. We aligned these tools with the WWC standards and evaluated them for accuracy and treatment of missing data, using two published data sets. All tools were tested to be accurate. When missing data were present, most tools either gave an error message or conducted analysis based on the available data. Only one program used a single imputation method. This article concludes with suggestions for an inclusive computing tool or environment, additional research on the treatment of missing data, and reasonable and flexible interpretations of the WWC standards.
Attenuation of thermal neutrons by an imperfect single crystal
NASA Astrophysics Data System (ADS)
Naguib, K.; Adib, M.
1996-06-01
A semi-empirical formula is given which allows one to calculate the total thermal cross section of an imperfect single crystal as a function of crystal constants, temperature and neutron energy E, in the energy range between 3 meV and 10 eV. The formula also includes the contribution of the parasitic Bragg scattering to the total cross section that takes into account the crystal mosaic spread value and its orientation with respect to the neutron beam direction. A computer program (ISCANF) was developed to calculate the total attenuation of neutrons using the proposed formula. The ISCANF program was applied to investigate the neutron attenuation through a copper single crystal. The calculated values of the neutron transmission through the imperfect copper single crystal were fitted to the measured ones in the energy range 3 - 40 meV at different crystal orientations. The result of fitting shows that use of the computer program ISCANF allows one to predict the behaviour of the total cross section of an imperfect copper single crystal for the whole energy range.
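In schematic form (the exact semi-empirical terms are those of the ISCANF formula itself), the transmission through a crystal of thickness t containing N atoms per unit volume is

\[ T(E) = \exp[-N\,t\,\sigma_{tot}(E)], \qquad \sigma_{tot}(E) = \sigma_{abs}(E) + \sigma_{tds}(E) + \sigma_{Bragg}(E, \eta, \theta), \]

where \sigma_{tds} carries the temperature dependence and the parasitic Bragg term depends on the mosaic spread \eta and the crystal orientation \theta with respect to the beam direction.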
Description of CASCOMP Comprehensive Airship Sizing and Performance Computer Program, Volume 2
NASA Technical Reports Server (NTRS)
Davis, J.
1975-01-01
The computer program CASCOMP, which may be used in comparative design studies of lighter than air vehicles by rapidly providing airship size and mission performance data, was prepared and documented. The program can be used to define design requirements such as weight breakdown, required propulsive power, and physical dimensions of airships which are designed to meet specified mission requirements. The program is also useful in sensitivity studies involving both design trade-offs and performance trade-offs. The input to the program primarily consists of a series of single point values such as hull overall fineness ratio, number of engines, airship hull and empennage drag coefficients, description of the mission profile, and weights of fixed equipment, fixed useful load and payload. In order to minimize computation time, the program makes ample use of optional computation paths.
Development of small scale cluster computer for numerical analysis
NASA Astrophysics Data System (ADS)
Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.
2017-09-01
In this study, two units of personal computer were successfully networked together to form a small scale cluster. Each processor involved is a multicore processor with four cores, so the cluster has eight processors in total. The cluster runs the Ubuntu 14.04 LINUX environment with an MPI implementation (MPICH2). Two main tests were conducted on the cluster: a communication test and a performance test. The communication test was done to make sure that the computers are able to pass the required information without any problem, using a simple MPI hello program written in C. The performance test was done to show that the cluster's calculation performance is much better than that of a single-CPU computer. In this test, the same code was run four times: on a single node, and with 2, 4, and 8 processors. The results show that with additional processors the time required to solve the problem decreases, and the calculation time is roughly halved each time the number of processors is doubled. To conclude, we successfully developed a small scale cluster computer from common hardware that offers higher computing power than a single-CPU machine, which can benefit research requiring high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics.
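A minimal MPI hello program of the kind used for the communication test (a sketch; the study's own test code is not given):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size, len;
        char host[MPI_MAX_PROCESSOR_NAME];
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */
        MPI_Get_processor_name(host, &len);
        printf("hello from rank %d of %d on %s\n", rank, size, host);
        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with, for example, mpiexec -n 8 ./hello under MPICH2, each of the eight processes should report its rank and host name, confirming that both nodes participate.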
NASA Technical Reports Server (NTRS)
Forney, J. A.; Walker, D.; Lanier, M.
1979-01-01
Computer program, SHCOST, was used to perform economic analyses of operational test sites. The program allows consideration of the economic parameters which are important to the solar system user. A life cycle cost and cash flow comparison is made between a solar heating system and a conventional system. The program assists in sizing the solar heating system. A sensitivity study and plot capability allow the user to select the most cost effective system configuration.
NASA Technical Reports Server (NTRS)
Bowman, L. M.
1984-01-01
An interactive steady state frequency response computer program with graphics is documented. Single or multiple forces may be applied to the structure using a modal superposition approach to calculate response. The method can be applied to linear, proportionally damped structures in which the damping may be viscous or structural. The theoretical approach and program organization are described. Example problems, user instructions, and a sample interactive session are given to demonstrate the program's capability in solving a variety of problems.
ERIC Educational Resources Information Center
Ramsberger, Gail; Marie, Basem
2007-01-01
Purpose: This study examined the benefits of a self-administered, clinician-guided, computer-based, cued naming therapy. Results of intense and nonintense treatment schedules were compared. Method: A single-participant design with multiple baselines across behaviors and varied treatment intensity for 2 trained lists was replicated over 4…
Heterogeneous computing architecture for fast detection of SNP-SNP interactions.
Sluga, Davor; Curk, Tomaz; Zupan, Blaz; Lotric, Uros
2014-06-25
The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphic Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested. We have developed a heterogeneous, GPU and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on differences between these two modern massively parallel architectures and their software environments. Their utility resulted in an order of magnitude shorter execution times when compared to the single-threaded CPU implementation. The GPU implementation on a single Nvidia Tesla K20 runs twice as fast as that for the MIC architecture-based Xeon Phi P5110 coprocessor, but also requires considerably more programming effort. General purpose GPUs are a mature platform with large amounts of computing power capable of tackling inherently parallel problems, but can prove demanding for the programmer. On the other hand, the new MIC architecture, albeit lacking in performance, reduces the programming effort and makes up for it with a more general architecture suitable for a wider range of problems.
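For context, a minimal C sketch of the exhaustive pairwise scan that such accelerators speed up, parallelized here with OpenMP on the CPU; pair_score is a hypothetical stand-in for the actual interaction test in SNPsyn:

    #include <omp.h>
    #include <stdio.h>

    #define M 1000   /* number of SNPs (toy size) */

    /* Stand-in for the real SNP-SNP interaction statistic. */
    static double pair_score(int i, int j) { return (double)((i * 31 + j) % 977); }

    int main(void) {
        double best = -1.0;
        int bi = -1, bj = -1;
        /* All M*(M-1)/2 pairs are independent, which is what makes
           GPU/MIC offload of this loop pay off at GWAS scale. */
        #pragma omp parallel for schedule(dynamic)
        for (int i = 0; i < M - 1; i++)
            for (int j = i + 1; j < M; j++) {
                double s = pair_score(i, j);
                /* simple reduction; production code would keep per-thread maxima */
                #pragma omp critical
                if (s > best) { best = s; bi = i; bj = j; }
            }
        printf("best pair (%d, %d) score %g\n", bi, bj, best);
        return 0;
    }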
NASA Technical Reports Server (NTRS)
Aggarwal, Arun K.
1993-01-01
Spherical roller bearings have typically been used in applications with speeds limited to about 5000 rpm and loads limited for operation at less than about 0.25 million DN. However, spherical roller bearings are now being designed for high load and high speed applications, including aerospace applications. A computer program, SASHBEAN, was developed to provide an analytical tool to design, analyze, and predict the performance of high speed, single row, angular contact (including zero contact angle), spherical roller bearings. The material presented is the mathematical formulation and analytical methods used to develop computer program SASHBEAN. For a given set of operating conditions, the program calculates the bearing ring deflections (axial and radial), roller deflections, contact area stresses, depth and magnitude of maximum shear stresses, axial thrust, rolling element and cage rotational speeds, lubrication parameters, fatigue lives, and rates of heat generation. Centrifugal forces and gyroscopic moments are fully considered. The program is also capable of performing steady-state and time-transient thermal analyses of the bearing system.
NASA Technical Reports Server (NTRS)
Bennett, R. L.
1975-01-01
The analytical techniques and computer program developed in the fully-coupled rotor vibration study are described. The rotor blade natural frequency and mode shape analysis was implemented in a digital computer program designated DF1758. The program computes collective, cyclic, and scissor modes for a single blade within a specified range of frequency for specified values of rotor RPM and collective angle. The analysis includes effects of blade twist, cg offset from reference axis, and shear center offset from reference axis. Coupled inplane, out-of-plane, and torsional vibrations are considered. Normalized displacements, shear forces and moments may be printed out and Calcomp plots of natural frequencies as a function of rotor RPM may be produced.
NASA Technical Reports Server (NTRS)
Gaynor, T. L.; Bottrell, M. S.; Eagle, C. D.; Bachle, C. F.
1977-01-01
The feasibility of converting a spark ignition aircraft engine to the diesel cycle was investigated. Procedures necessary for converting a single cylinder GTSIO-520 are described, as well as a single cylinder diesel engine test program. The modification of the engine for the hot port cooling concept is discussed. A digital computer graphics simulation of a twin engine aircraft incorporating the diesel engine and hot port concept is presented, showing some potential gains in aircraft performance. Sample results of the computer program used in the simulation are included.
Design and Development of a Multiprogramming Operating System for Sixteen Bit Microprocessors.
1981-12-01
with the technical details of how services are programmed or produced, except perhaps when they fail to meet user requirements. Users are interested in...locations and loading decks. As the expense and speed of computers increased, executive programs were created to allow several users to sequence...single user operating system as a companion to the 8080 microprocessor. CP/M (Control Program for Microcomputers) was a single user operating system that
Kustkova, H S
2012-01-01
In cerebrovascular disease, perfusion single photon emission computed tomography with lipophilic amines is used for the diagnosis of functional disorders of cerebral blood flow. Quantitative calculation helps clarify the nature of the vascular disease and assess the adequacy and effectiveness of treatment. Modern programs for SPECT provide not only relative blood-flow calculations but also make it possible to compute the absolute values of cerebral blood flow.
[Fragment of the SUNREL software license: definitions of the computer program commonly known as SUNREL and the contents of its files, "Computer" (an electronic device that accepts information), "Licensee" (the Individual Licensee), and "Licensed Single Site."]
Architecture Adaptive Computing Environment
NASA Technical Reports Server (NTRS)
Dorband, John E.
2006-01-01
Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures. Currently, it is supported on LINUX clusters. aCe uses parallel programming constructs that facilitate writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple-instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.
ERIC Educational Resources Information Center
Adrian, Jose A.; Gonzalez, Mercedes; Buiza, Juan J.; Sage, Karen
2011-01-01
Purpose: To extend the use of the Spanish Computer-assisted Anomia Rehabilitation Program (CARP-2) for anomia from a single case to a group of 15 people with aphasia. To evaluate whether the treatment is active (Phase 1) for this group (Robey & Schultz, 1998), providing potential explanations as to why. Methods: Fifteen participants with chronic…
NASA Technical Reports Server (NTRS)
1984-01-01
NASA has planned a supercomputer for computational fluid dynamics research since the mid-1970's. With the approval of the Numerical Aerodynamic Simulation Program as a FY 1984 new start, Congress requested an assessment of the program's objectives, projected short- and long-term uses, program design, computer architecture, user needs, and handling of proprietary and classified information. Specifically requested was an examination of the merits of proceeding with multiple high speed processor (HSP) systems contrasted with a single high speed processor system. The panel found NASA's objectives and projected uses sound and the projected distribution of users as realistic as possible at this stage. The multiple-HSP approach, whereby new, more powerful state-of-the-art HSP's would be integrated into a flexible network, was judged to present major advantages over any single-HSP system.
ERIC Educational Resources Information Center
Li, Elton; Stoecker, Arthur
1995-01-01
Describes a computer software program where students define alternative policy sets and compare their effects on the welfare of consumers, producers, and the public sector. Policy sets may be a single tax or quota or a mix of taxes, subsidies, and/or price supports implemented in the marketing chain. (MJP)
Molecular-Beam-Epitaxy Program
NASA Technical Reports Server (NTRS)
Sparks, Patricia D.
1988-01-01
Molecular Beam Epitaxy (MBE) computer program developed to aid in design of single- and double-junction cascade cells made of silicon. Cascade cell has efficiency 1 or 2 percent higher than single cell, with twice the open-circuit voltage. Input parameters include doping density, diffusion lengths, thicknesses of regions, solar spectrum, absorption coefficients of silicon (data included for 101 wavelengths), and surface recombination velocities. Results include maximum power, short-circuit current, and open-circuit voltage. Program written in FORTRAN IV.
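The reported outputs are linked by the standard single-diode relations (a sketch; the program's internal model may differ in detail): each junction's open-circuit voltage follows

\[ V_{oc} = \frac{kT}{q}\,\ln\!\left(\frac{J_{sc}}{J_0} + 1\right), \]

and the voltages of series-connected junctions add, which is consistent with the two-junction cascade showing twice the open-circuit voltage of a single cell.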
Perspex machine: V. Compilation of C programs
NASA Astrophysics Data System (ADS)
Spanner, Matthew P.; Anderson, James A. D. W.
2006-01-01
The perspex machine arose from the unification of the Turing machine with projective geometry. The original, constructive proof used four special, perspective transformations to implement the Turing machine in projective geometry. These four transformations are now generalised and applied in a compiler, implemented in Pop11, that converts a subset of the C programming language into perspexes. This is interesting both from a geometrical and a computational point of view. Geometrically, it is interesting that program source can be converted automatically to a sequence of perspective transformations and conditional jumps, though we find that the product of homogeneous transformations with normalisation can be non-associative. Computationally, it is interesting that program source can be compiled for a Reduced Instruction Set Computer (RISC), the perspex machine, that is a Single Instruction, Zero Exception (SIZE) computer.
Computational techniques in gamma-ray skyshine analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
George, D.L.
1988-12-01
Two computer codes were developed to analyze gamma-ray skyshine, the scattering of gamma photons by air molecules. A review of previous gamma-ray skyshine studies discusses several Monte Carlo codes, programs using a single-scatter model, and the MicroSkyshine program for microcomputers. A benchmark gamma-ray skyshine experiment performed at Kansas State University is also described. A single-scatter numerical model was presented which traces photons from the source to their first scatter, then applies a buildup factor along a direct path from the scattering point to a detector. The FORTRAN code SKY, developed with this model before the present study, was modified to use Gauss quadrature, recent photon attenuation data and a more accurate buildup approximation. The resulting code, SILOGP, computes response from a point photon source on the axis of a silo, with and without concrete shielding over the opening. Another program, WALLGP, was developed using the same model to compute response from a point gamma source behind a perfectly absorbing wall, with and without shielding overhead. 29 refs., 48 figs., 13 tabs.
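The single-scatter-plus-buildup model has the schematic form (the exact angular kernel and detector response folding used in SILOGP and WALLGP are not reproduced here):

\[ \Phi_d \approx \int_V \frac{S\,e^{-\mu r_1}}{4\pi r_1^{2}}\;\mu_s\,p(\theta)\;\frac{B(\mu r_2)\,e^{-\mu r_2}}{r_2^{2}}\,dV, \]

with r_1 the source-to-scatter distance, r_2 the scatter-to-detector distance, \mu_s p(\theta) the probability per unit path of scattering through angle \theta toward the detector, and B the buildup factor applied along the direct path from the scattering point to the detector.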
An application of artificial intelligence to the interpretation of mass spectra.
NASA Technical Reports Server (NTRS)
Buchanan, B. G.; Duffield, A. M.; Robertson, A. V.
1971-01-01
Description of the DENDRAL (Dendritic Algorithm) project, the objectives of which were to base the computer program on an algorithm that generates an exhaustive, nonredundant list of all the structural isomers of a given chemical composition, and to devise a computer program that would perform an organic structure determination, given a molecular formula and a mass spectrum. This program is called 'Heuristic DENDRAL' and it operates by using the known structure/spectrum correlations to constrain the DENDRAL isomer generator to produce a single isomer for that composition. The collaboration of chemists and computer scientists has produced a tool of some practical utility from the chemical viewpoint, and an interesting program from the viewpoint of artificial intelligence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swindeman, M. J.; Jetter, R. I.; Sham, T. -L.
One of the objectives of the high temperature design methodology activities is to develop and validate both improvements and the basic features of ASME Boiler and Pressure Vessel Code, Section III, Rules for Construction of Nuclear Facility Components, Division 5, High Temperature Reactors, Subsection HB, Subpart B (HBB). The overall scope of this task is to develop a computer program to aid assessment procedures of components under specified loading conditions in accordance with the elevated temperature design requirements for Division 5 Class A components. There are many features and alternative paths of varying complexity in HBB. The initial focus of this computer program is a basic path through the various options for a single reference material, 316H stainless steel. However, the computer program is being structured for eventual incorporation of all of the features and permitted materials of HBB. This report will first provide a description of the overall computer program, particular challenges in developing numerical procedures for the assessment, and an overall approach to computer program development. This is followed by a more comprehensive appendix, which is the draft computer program manual for the program development. The strain limits rules have been implemented in the computer program. The evaluation of creep-fatigue damage will be implemented in future work scope.
NASA Astrophysics Data System (ADS)
Xue, Xinwei; Cheryauka, Arvi; Tubbs, David
2006-03-01
CT imaging in interventional and minimally-invasive surgery requires high-performance computing solutions that meet operating room demands, healthcare business requirements, and the constraints of a mobile C-arm system. The computational requirements of clinical procedures using CT-like data are increasing rapidly, mainly due to the need for rapid access to medical imagery during critical surgical procedures. The highly parallel nature of Radon transform and CT algorithms enables embedded computing solutions utilizing a parallel processing architecture to realize a significant gain of computational intensity with comparable hardware and program coding/testing expenses. In this paper, using a sample 2D and 3D CT problem, we explore the programming challenges and the potential benefits of embedded computing using commodity hardware components. The accuracy and performance results obtained on three computational platforms (a single CPU, a single GPU, and a solution based on FPGA technology) have been analyzed. We have shown that hardware-accelerated CT image reconstruction can be achieved with similar levels of noise and clarity of feature when compared to program execution on a CPU, but with a performance increase of one or more orders of magnitude. 3D cone-beam or helical CT reconstruction and a variety of volumetric image processing applications will benefit from similar accelerations.
A Computer Program for Flow-Log Analysis of Single Holes (FLASH)
Day-Lewis, F. D.; Johnson, C.D.; Paillet, Frederick L.; Halford, K.J.
2011-01-01
A new computer program, FLASH (Flow-Log Analysis of Single Holes), is presented for the analysis of borehole vertical flow logs. The code is based on an analytical solution for steady-state multilayer radial flow to a borehole. The code includes options for (1) discrete fractures and (2) multilayer aquifers. Given vertical flow profiles collected under both ambient and stressed (pumping or injection) conditions, the user can estimate fracture (or layer) transmissivities and far-field hydraulic heads. FLASH is coded in Microsoft Excel with Visual Basic for Applications routines. The code supports manual and automated model calibration.
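Schematically, assuming the conventional Thiem form of the steady-state radial-flow solution, the inflow from fracture or layer i is

\[ Q_i = \frac{2\pi T_i\,(h_i - h_w)}{\ln(r_0 / r_w)}, \]

where T_i is the transmissivity, h_i the far-field head, h_w the head in the borehole, and r_0 and r_w the radius of influence and borehole radius; writing this relation once under ambient and once under stressed conditions yields two equations per zone from which T_i and h_i are estimated.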
Organization and use of a Software/Hardware Avionics Research Program (SHARP)
NASA Technical Reports Server (NTRS)
Karmarkar, J. S.; Kareemi, M. N.
1975-01-01
The organization and use of the software/hardware avionics research program (SHARP), developed to duplicate the automatic portion of the STOLAND simulator system on a general-purpose computer system (i.e., the IBM 360), are described. The program's uses are: (1) to conduct comparative evaluation studies of current and proposed airborne and ground system concepts via single run or Monte Carlo simulation techniques, and (2) to provide a software tool for efficient algorithm evaluation and development for the STOLAND avionics computer.
Samant, Sanjiv S; Xia, Junyi; Muyan-Ozcelik, Pinar; Owens, John D
2008-08-01
The advent of readily available temporal imaging or time series volumetric (4D) imaging has become an indispensable component of treatment planning and adaptive radiotherapy (ART) at many radiotherapy centers. Deformable image registration (DIR) is also used in other areas of medical imaging, including motion corrected image reconstruction. Due to long computation time, clinical applications of DIR in radiation therapy and elsewhere have been limited and consequently relegated to offline analysis. With the recent advances in hardware and software, graphics processing unit (GPU) based computing is an emerging technology for general purpose computation, including DIR, and is suitable for highly parallelized computing. However, traditional general purpose computation on the GPU is limited because of the constraints of the available programming platforms. As well, compared to CPU programming, the GPU currently has reduced dedicated processor memory, which can limit the useful working data set for parallelized processing. We present an implementation of the demons algorithm using the NVIDIA 8800 GTX GPU and the new CUDA programming language. The GPU performance will be compared with single threading and multithreading CPU implementations on an Intel dual core 2.4 GHz CPU using the C programming language. CUDA provides a C-like language programming interface, and allows for direct access to the highly parallel compute units in the GPU. Comparisons for volumetric clinical lung images acquired using 4DCT were carried out. Computation time for 100 iterations in the range of 1.8-13.5 s was observed for the GPU with image size ranging from 2.0 x 10(6) to 14.2 x 10(6) pixels. The GPU registration was 55-61 times faster than the CPU for the single threading implementation, and 34-39 times faster for the multithreading implementation. For CPU based computing, the computational time generally has a linear dependence on image size for medical imaging data. Computational efficiency is characterized in terms of time per megapixel per iteration (TPMI), with units of seconds per megapixel per iteration (spmi). For the demons algorithm, our CPU implementation yielded largely invariant values of TPMI. The mean TPMIs were 0.527 spmi and 0.335 spmi for the single threading and multithreading cases, respectively, with <2% variation over the considered image data range. For GPU computing, we achieved TPMI = 0.00916 spmi with 3.7% variation, indicating optimized memory handling under CUDA. The paradigm of GPU based real-time DIR opens up a host of clinical applications for medical imaging.
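As a worked check of the efficiency metric (assuming the reported time and image-size extremes correspond to the same case):

\[ \mathrm{TPMI} = \frac{t}{P\,n} \approx \frac{13.5\ \mathrm{s}}{14.2\ \mathrm{MP} \times 100} \approx 0.0095\ \mathrm{spmi}, \]

close to the reported GPU mean of 0.00916 spmi, where t is the run time, P the image size in megapixels, and n the iteration count.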
Using Pair Programming to Teach CAD Based Engineering Graphics
ERIC Educational Resources Information Center
Leland, Robert P.
2010-01-01
Pair programming was introduced into a course in engineering graphics that emphasizes solid modeling using SolidWorks. In pair programming, two students work at a single computer, and periodically trade off roles as driver (hands on the keyboard and mouse) and navigator (discuss strategy and design issues). Pair programming was used in a design…
Computer-composite mapping for geologists
van Driel, J.N.
1980-01-01
A computer program for overlaying maps has been tested and evaluated as a means for producing geologic derivative maps. Four maps of the Sugar House Quadrangle, Utah, were combined, using the Multi-Scale Data Analysis and Mapping Program, in a single composite map that shows the relative stability of the land surface during earthquakes. Computer-composite mapping can provide geologists with a powerful analytical tool and a flexible graphic display technique. Digitized map units can be shown singly, grouped with different units from the same map, or combined with units from other source maps to produce composite maps. The mapping program permits the user to assign various values to the map units and to specify symbology for the final map. Because of its flexible storage, easy manipulation, and graphic-output capabilities, the composite-mapping technique can readily be applied to mapping projects in sedimentary and crystalline terranes, as well as to maps showing mineral resource potential. © 1980 Springer-Verlag New York Inc.
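A hedged sketch of the overlay idea: co-registered raster layers whose unit codes are mapped to user-assigned values and combined into a single composite score. The arrays, lookup tables, and additive scoring are illustrative assumptions, not the Multi-Scale Data Analysis and Mapping Program's format.

```python
import numpy as np

# Two co-registered map layers, each cell coded with a map-unit ID
geology = np.array([[1, 2], [2, 3]])
slope   = np.array([[0, 1], [1, 2]])

# User-assigned stability values per unit ID (higher = more stable)
geology_value = {1: 3, 2: 2, 3: 1}
slope_value   = {0: 3, 1: 2, 2: 1}

composite = (np.vectorize(geology_value.get)(geology)
             + np.vectorize(slope_value.get)(slope))
print(composite)   # composite relative-stability score per cell
```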
PHREEQCI; a graphical user interface for the geochemical computer program PHREEQC
Charlton, Scott R.; Macklin, Clifford L.; Parkhurst, David L.
1997-01-01
PhreeqcI is a Windows-based graphical user interface for the geochemical computer program PHREEQC. PhreeqcI provides the capability to generate and edit input data files, run simulations, and view text files containing simulation results, all within the framework of a single interface. PHREEQC is a multipurpose geochemical program that can perform speciation, inverse, reaction-path, and 1D advective reaction-transport modeling. Interactive access to all of the capabilities of PHREEQC is available with PhreeqcI. The interface is written in Visual Basic and will run on personal computers under the Windows 3.1, Windows 95, and Windows NT operating systems.
NASA Technical Reports Server (NTRS)
Bailey, R. T.; Shih, T. I.-P.; Nguyen, H. L.; Roelke, R. J.
1990-01-01
An efficient computer program, called GRID2D/3D, was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to the second-order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to the second-order except at interfaces where different single grid systems meet. At interfaces where different single grid systems meet, the grid lines are only differentiable up to the first-order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coons interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer. In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part of the program. The theory and method used in GRID2D/3D are described.
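The core of the algebraic method is transfinite interpolation: blend the four boundary curves, then subtract the doubly counted corner contributions. A minimal 2-D sketch with uniform blending parameters (GRID2D/3D's stretching functions would replace the uniform `xi` and `eta` below):

```python
import numpy as np

def tfi_2d(bottom, top, left, right):
    """2-D transfinite interpolation from four boundary curves.

    bottom, top: (ni, 2) arrays; left, right: (nj, 2) arrays.
    Corner points must agree where the curves meet.
    """
    ni, nj = len(bottom), len(left)
    xi  = np.linspace(0.0, 1.0, ni)[:, None, None]   # a stretching fn would go here
    eta = np.linspace(0.0, 1.0, nj)[None, :, None]
    return ((1 - eta) * bottom[:, None, :] + eta * top[:, None, :]
            + (1 - xi) * left[None, :, :] + xi * right[None, :, :]
            - (1 - xi) * (1 - eta) * bottom[0] - xi * (1 - eta) * bottom[-1]
            - (1 - xi) * eta * top[0] - xi * eta * top[-1])   # (ni, nj, 2) grid

t = np.linspace(0.0, 1.0, 5)[:, None]
square = tfi_2d(np.hstack([t, 0*t]), np.hstack([t, 0*t + 1]),
                np.hstack([0*t, t]), np.hstack([0*t + 1, t]))
print(square.shape)   # (5, 5, 2): a uniform grid on the unit square
```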
Janssen, Terry
2000-01-01
A system and method for facilitating decision-making comprising a computer program causing linkage of data representing a plurality of argument structure units into a hierarchical argument structure. Each argument structure unit comprises data corresponding to a hypothesis and its corresponding counter-hypothesis, data corresponding to grounds that provide a basis for inference of the hypothesis or its corresponding counter-hypothesis, data corresponding to a warrant linking the grounds to the hypothesis or its corresponding counter-hypothesis, and data corresponding to backing that certifies the warrant. The hierarchical argument structure comprises a top level argument structure unit and a plurality of subordinate level argument structure units. Each of the plurality of subordinate argument structure units comprises at least a portion of the grounds of the argument structure unit to which it is subordinate. Program code located on each of a plurality of remote computers accepts input from one of a plurality of contributors. Each input comprises data corresponding to an argument structure unit in the hierarchical argument structure and supports the hypothesis or its corresponding counter-hypothesis. A second programming code is adapted to combine the inputs into a single hierarchical argument structure. A third computer program code is responsive to the second computer program code and is adapted to represent a degree of support for the hypothesis and its corresponding counter-hypothesis in the single hierarchical argument structure.
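The claim amounts to a Toulmin-style argument node arranged in a tree; a hypothetical sketch of the data layout (names and the additive roll-up are illustrative, not the patent's implementation):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ArgumentUnit:
    hypothesis: str
    counter_hypothesis: str
    grounds: str            # basis for inferring the (counter-)hypothesis
    warrant: str            # links the grounds to the hypothesis
    backing: str            # certifies the warrant
    support: float = 0.0    # net degree of support contributed at this unit
    children: List["ArgumentUnit"] = field(default_factory=list)  # subordinate units

def total_support(unit: ArgumentUnit) -> float:
    """Combine contributor inputs up the hierarchy into a single degree of support."""
    return unit.support + sum(total_support(c) for c in unit.children)
```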
Unified Engineering Software System
NASA Technical Reports Server (NTRS)
Purves, L. R.; Gordon, S.; Peltzman, A.; Dube, M.
1989-01-01
Collection of computer programs performs diverse functions in prototype engineering. NEXUS, NASA Engineering Extendible Unified Software system, is research set of computer programs designed to support full sequence of activities encountered in NASA engineering projects. Sequence spans preliminary design, design analysis, detailed design, manufacturing, assembly, and testing. Primarily addresses process of prototype engineering, task of getting single or small number of copies of product to work. Written in FORTRAN 77 and PROLOG.
Shuttle Electrical Power Analysis Program (SEPAP); single string circuit analysis report
NASA Technical Reports Server (NTRS)
Murdock, C. R.
1974-01-01
An evaluation is reported of the data obtained from an analysis of the distribution network characteristics of the shuttle during a spacelab mission. A description of the approach utilized in the development of the computer program and data base is provided and conclusions are drawn from the analysis of the data. Data sheets are provided for information to support the detailed discussion on each computer run.
Parallel hyperbolic PDE simulation on clusters: Cell versus GPU
NASA Astrophysics Data System (ADS)
Rostrup, Scott; De Sterck, Hans
2010-12-01
Increasingly, high-performance computing is looking towards data-parallel computational devices to enhance computational performance. Two technologies that have received significant attention are IBM's Cell Processor and NVIDIA's CUDA programming model for graphics processing unit (GPU) computing. In this paper we investigate the acceleration of parallel hyperbolic partial differential equation simulation on structured grids with explicit time integration on clusters with Cell and GPU backends. The message passing interface (MPI) is used for communication between nodes at the coarsest level of parallelism. Optimizations of the simulation code at the several finer levels of parallelism that the data-parallel devices provide are described in terms of data layout, data flow and data-parallel instructions. Optimized Cell and GPU performance are compared with reference code performance on a single x86 central processing unit (CPU) core in single and double precision. We further compare the CPU, Cell and GPU platforms on a chip-to-chip basis, and compare performance on single cluster nodes with two CPUs, two Cell processors or two GPUs in a shared memory configuration (without MPI). We finally compare performance on clusters with 32 CPUs, 32 Cell processors, and 32 GPUs using MPI. Our GPU cluster results use NVIDIA Tesla GPUs with GT200 architecture, but some preliminary results on recently introduced NVIDIA GPUs with the next-generation Fermi architecture are also included. This paper provides computational scientists and engineers who are considering porting their codes to accelerator environments with insight into how structured grid based explicit algorithms can be optimized for clusters with Cell and GPU accelerators. It also provides insight into the speed-up that may be gained on current and future accelerator architectures for this class of applications.
Program summary
Program title: SWsolver
Catalogue identifier: AEGY_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGY_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPL v3
No. of lines in distributed program, including test data, etc.: 59 168
No. of bytes in distributed program, including test data, etc.: 453 409
Distribution format: tar.gz
Programming language: C, CUDA
Computer: Parallel computing clusters. Individual compute nodes may consist of x86 CPU, Cell processor, or x86 CPU with attached NVIDIA GPU accelerator.
Operating system: Linux
Has the code been vectorised or parallelized?: Yes. Tested on 1-128 x86 CPU cores, 1-32 Cell processors, and 1-32 NVIDIA GPUs.
RAM: Tested on problems requiring up to 4 GB per compute node.
Classification: 12
External routines: MPI, CUDA, IBM Cell SDK
Nature of problem: MPI-parallel simulation of the shallow water equations using a high-resolution 2D hyperbolic equation solver on regular Cartesian grids for x86 CPU, Cell processor, and NVIDIA GPU using CUDA.
Solution method: SWsolver provides 3 implementations of a high-resolution 2D shallow water equation solver on regular Cartesian grids, for CPU, Cell processor, and NVIDIA GPU. Each implementation uses MPI to divide work across a parallel computing cluster.
Additional comments: Sub-program numdiff is used for the test run.
Users' manual for the Langley high speed propeller noise prediction program (DFP-ATP)
NASA Technical Reports Server (NTRS)
Dunn, M. H.; Tarkenton, G. M.
1989-01-01
The use of the Dunn-Farassat-Padula Advanced Technology Propeller (DFP-ATP) noise prediction program which computes the periodic acoustic pressure signature and spectrum generated by propellers moving with supersonic helical tip speeds is described. The program has the capacity of predicting noise produced by a single-rotation propeller (SRP) or a counter-rotation propeller (CRP) system with steady or unsteady blade loading. The computational method is based on two theoretical formulations developed by Farassat. One formulation is appropriate for subsonic sources, and the other for transonic or supersonic sources. Detailed descriptions of user input, program output, and two test cases are presented, as well as brief discussions of the theoretical formulations and computational algorithms employed.
A study of sound generation in subsonic rotors, volume 2
NASA Technical Reports Server (NTRS)
Chalupnik, J. D.; Clark, L. T.
1975-01-01
Computer programs were developed for use in the analysis of sound generation by subsonic rotors. Program AIRFOIL computes the spectrum of radiated sound from a single airfoil immersed in a laminar flow field. Program ROTOR extends this to a rotating frame, and provides a model for sound generation in subsonic rotors. The program also computes tone sound generation due to steady state forces on the blades. Program TONE uses a moving source analysis to generate a time series for an array of forces moving in a circular path. The resultant time series are then Fourier transformed to render the results in spectral form. Program SDATA is a standard time series analysis package. It reads in two discrete time series, forms auto and cross covariances, and normalizes these to form correlations. The program then transforms the covariances to yield auto and cross power spectra by means of a Fourier transformation.
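The SDATA chain (covariance, normalization to correlation, Fourier transform to spectra) is standard time-series analysis; a minimal sketch of the cross-spectral step via the Wiener-Khinchin route (not the original Fortran):

```python
import numpy as np

def cross_spectrum(x, y):
    """Cross covariance, cross correlation, and cross power spectrum."""
    n = len(x)
    x = x - x.mean()
    y = y - y.mean()
    ccov = np.correlate(x, y, mode="full") / n      # lags -(n-1)..(n-1)
    ccorr = ccov / (x.std() * y.std())              # normalized to correlation
    spectrum = np.fft.fft(np.fft.ifftshift(ccov))   # transform of the covariance
    return ccov, ccorr, spectrum

t = np.arange(256)
x = np.sin(0.2 * t)
y = np.sin(0.2 * t + 0.5)                           # same tone, phase shifted
_, _, s = cross_spectrum(x, y)
print(np.argmax(np.abs(s[:128])))                   # peak at the shared frequency
```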
TCW: Transcriptome Computational Workbench
Soderlund, Carol; Nelson, William; Willer, Mark; Gang, David R.
2013-01-01
Background The analysis of transcriptome data involves many steps and various programs, along with organization of large amounts of data and results. Without a methodical approach for storage, analysis and query, the resulting ad hoc analysis can lead to human error, loss of data and results, inefficient use of time, and lack of verifiability, repeatability, and extensibility. Methodology The Transcriptome Computational Workbench (TCW) provides Java graphical interfaces for methodical analysis for both single and comparative transcriptome data without the use of a reference genome (e.g. for non-model organisms). The singleTCW interface steps the user through importing transcript sequences (e.g. Illumina) or assembling long sequences (e.g. Sanger, 454, transcripts), annotating the sequences, and performing differential expression analysis using published statistical programs in R. The data, metadata, and results are stored in a MySQL database. The multiTCW interface builds a comparison database by importing sequence and annotation from one or more single TCW databases, executes the ESTscan program to translate the sequences into proteins, and then incorporates one or more clusterings, where the clustering options are to execute the orthoMCL program, compute transitive closure, or import clusters. Both singleTCW and multiTCW allow extensive query and display of the results, where singleTCW displays the alignment of annotation hits to transcript sequences, and multiTCW displays multiple transcript alignments with MUSCLE or pairwise alignments. The query programs can be executed on the desktop for fastest analysis, or from the web for sharing the results. Conclusion It is now affordable to buy a multi-processor machine, and easy to install Java and MySQL. By simply downloading the TCW, the user can interactively analyze, query and view their data. The TCW allows in-depth data mining of the results, which can lead to a better understanding of the transcriptome. TCW is freely available from www.agcol.arizona.edu/software/tcw. PMID:23874959
Large-scale structural optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.
1983-01-01
Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large-scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single design methodology rather than on trade-offs, and the incompatibility of large-scale optimization with single-program, single-computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop, so that the full analysis is performed only periodically. Problem-dependent software, which embodies the definitions of the design variables, objective function, and design constraints, can be separated from the generic code using a systems programming technique. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization by an organization of people and machines.
NASA Technical Reports Server (NTRS)
Spilker, R. L.; Witmer, E. A.; French, S. E.; Rodal, J. J. A.
1980-01-01
Two computer programs are described for predicting the transient large-deflection elastic-viscoplastic responses of thin, single-layer, initially flat, unstiffened or integrally stiffened, Kirchhoff-Love ductile metal panels. The PLATE 1 program pertains to structural responses produced by prescribed externally applied transient loading or prescribed initial velocity distributions. The collision-imparted velocity method PLATE 1 program concerns structural responses produced by the impact of an idealized nondeformable fragment. Finite elements are used to represent the structure in both programs. Strain-hardening and strain-rate effects of initially isotropic material are considered.
Quadratic Programming for Allocating Control Effort
NASA Technical Reports Server (NTRS)
Singh, Gurkirpal
2005-01-01
A computer program calculates an optimal allocation of control effort in a system that includes redundant control actuators. The program implements an iterative (but otherwise single-stage) algorithm of the quadratic-programming type. In general, in the quadratic-programming problem, one seeks the values of a set of variables that minimize a quadratic cost function, subject to a set of linear equality and inequality constraints. In this program, the cost function combines control effort (typically quantified in terms of energy or fuel consumed) and control residuals (differences between commanded and sensed values of variables to be controlled). In comparison with prior control-allocation software, this program offers approximately equal accuracy but much greater computational efficiency. In addition, this program offers flexibility, robustness to actuation failures, and a capability for selective enforcement of control requirements. The computational efficiency of this program makes it suitable for such complex, real-time applications as controlling redundant aircraft actuators or redundant spacecraft thrusters. The program is written in the C language for execution in a UNIX operating system.
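A hedged sketch of the quadratic cost the abstract describes (control residuals plus penalized control effort), minimized here by projected gradient descent under simple box constraints; this is a simplification for illustration, not the program's iterative algorithm:

```python
import numpy as np

def allocate(B, c, lam=1e-2, lo=-1.0, hi=1.0, iters=500):
    """Minimize ||B u - c||^2 + lam ||u||^2 subject to lo <= u <= hi.

    B: control-effectiveness matrix (axes x actuators); c: commanded response.
    """
    H = B.T @ B + lam * np.eye(B.shape[1])     # Hessian of the quadratic cost
    step = 1.0 / np.linalg.eigvalsh(H)[-1]     # stable step from the largest eigenvalue
    u = np.zeros(B.shape[1])
    for _ in range(iters):
        grad = H @ u - B.T @ c
        u = np.clip(u - step * grad, lo, hi)   # project back onto the bounds
    return u

# Toy example: 2 controlled axes, 4 redundant actuators
B = np.array([[1.0, 0.5, -0.5, 0.0],
              [0.0, 0.5,  0.5, 1.0]])
print(allocate(B, np.array([0.8, -0.3])))
```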
GASP-PL/I Simulation of Integrated Avionic System Processor Architectures. M.S. Thesis
NASA Technical Reports Server (NTRS)
Brent, G. A.
1978-01-01
A development study sponsored by NASA was completed in July 1977 which proposed a complete integration of all aircraft instrumentation into a single modular system. Instead of using the current single-function aircraft instruments, computers compiled and displayed inflight information for the pilot. A processor architecture called the Team Architecture was proposed; this is a hardware/software approach to high-reliability computer systems. A follow-up study of the proposed Team Architecture is reported. GASP-PL/I simulation models are used to evaluate the operating characteristics of the Team Architecture. The problem, model development, simulation programs, and results are presented at length. Also included are program input formats, outputs, and listings.
Hypercluster Parallel Processor
NASA Technical Reports Server (NTRS)
Blech, Richard A.; Cole, Gary L.; Milner, Edward J.; Quealy, Angela
1992-01-01
Hypercluster computer system includes multiple digital processors, operation of which coordinated through specialized software. Configurable according to various parallel-computing architectures of shared-memory or distributed-memory class, including scalar computer, vector computer, reduced-instruction-set computer, and complex-instruction-set computer. Designed as flexible, relatively inexpensive system that provides single programming and operating environment within which one can investigate effects of various parallel-computing architectures and combinations on performance in solution of complicated problems like those of three-dimensional flows in turbomachines. Hypercluster software and architectural concepts are in public domain.
GPU-accelerated phase-field simulation of dendritic solidification in a binary alloy
NASA Astrophysics Data System (ADS)
Yamanaka, Akinori; Aoki, Takayuki; Ogawa, Satoi; Takaki, Tomohiro
2011-03-01
The phase-field simulation for dendritic solidification of a binary alloy has been accelerated by using a graphics processing unit (GPU). To perform the phase-field simulation of alloy solidification on a GPU, a program code was developed with the Compute Unified Device Architecture (CUDA). In this paper, the implementation technique of the phase-field model on the GPU is presented. We also evaluated the acceleration performance of the three-dimensional solidification simulation by using a single NVIDIA Tesla C1060 GPU and the developed program code. The results showed that the GPU calculation for 576³ computational grid points achieved a performance of 170 GFLOPS by utilizing the shared memory as a software-managed cache. Furthermore, it can be demonstrated that the computation with the GPU is 100 times faster than that with a single CPU core. From the obtained results, we confirmed the feasibility of realizing a real-time full three-dimensional phase-field simulation of microstructure evolution on a personal desktop computer.
NASA Technical Reports Server (NTRS)
Eckert, W. T.; Mort, K. W.; Jope, J.
1976-01-01
General guidelines are given for the design of diffusers, contractions, corners, and the inlets and exits of non-return tunnels. A system of equations, reflecting the current technology, has been compiled and assembled into a computer program (a user's manual for this program is included) for determining the total pressure losses. The formulation presented is applicable to compressible flow through most closed- or open-throat, single-, double-, or non-return wind tunnels. A comparison of estimated performance with that actually achieved by several existing facilities produced generally good agreement.
NavP: Structured and Multithreaded Distributed Parallel Programming
NASA Technical Reports Server (NTRS)
Pan, Lei; Xu, Jingling
2006-01-01
This slide presentation reviews some of the issues around distributed parallel programming. It compares and contrasts two methods of programming: Single Program Multiple Data (SPMD) and Navigational Programming (NavP). It then reviews the distributed sequential computing (DSC) method and the methodology of NavP. Case studies are presented. It also reviews the work being done to enable the NavP system.
Program for computer aided reliability estimation
NASA Technical Reports Server (NTRS)
Mathur, F. P. (Inventor)
1972-01-01
A computer program for estimating the reliability of self-repair and fault-tolerant systems with respect to selected system and mission parameters is presented. The computer program is capable of operation in an interactive conversational mode as well as in a batch mode and is characterized by maintenance of several general equations representative of basic redundancy schemes in an equation repository. Selected reliability functions applicable to any mathematical model formulated with the general equations, used singly or in combination with each other, are separately stored. One or more system and/or mission parameters may be designated as a variable. Data in the form of values for selected reliability functions is generated in a tabular or graphic format for each formulated model.
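A hedged sketch of the kind of "general equations representative of basic redundancy schemes" such an equation repository holds, with mission time designated as the variable. The exponential-failure formulas below are the textbook parallel and triple-modular-redundancy expressions, not the program's stored models:

```python
import numpy as np

def r_single(lam, t):
    """Reliability of one unit with constant failure rate lam (per hour)."""
    return np.exp(-lam * t)

def r_parallel(lam, t, n):
    """n identical units in parallel: the system fails only if all n fail."""
    return 1.0 - (1.0 - r_single(lam, t)) ** n

def r_tmr(lam, t):
    """Triple modular redundancy with a perfect voter: >= 2 of 3 must survive."""
    r = r_single(lam, t)
    return 3.0 * r**2 - 2.0 * r**3

t = np.linspace(0.0, 1000.0, 5)      # mission time (hours) as the designated variable
print(r_parallel(1e-3, t, n=2))      # tabular output, one value per mission time
print(r_tmr(1e-3, t))
```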
Injecting Artificial Memory Errors Into a Running Computer Program
NASA Technical Reports Server (NTRS)
Bornstein, Benjamin J.; Granat, Robert A.; Wagstaff, Kiri L.
2008-01-01
Single-event upsets (SEUs) or bitflips are computer memory errors caused by radiation. BITFLIPS (Basic Instrumentation Tool for Fault Localized Injection of Probabilistic SEUs) is a computer program that deliberately injects SEUs into another computer program, while the latter is running, for the purpose of evaluating the fault tolerance of that program. BITFLIPS was written as a plug-in extension of the open-source Valgrind debugging and profiling software. BITFLIPS can inject SEUs into any program that can be run on the Linux operating system, without needing to modify the program's source code. Further, if access to the original program source code is available, BITFLIPS offers fine-grained control over exactly when and which areas of memory (as specified via program variables) will be subjected to SEUs. The rate of injection of SEUs is controlled by specifying either a fault probability or a fault rate based on memory size and radiation exposure time, in units of SEUs per byte per second. BITFLIPS can also log each SEU that it injects and, if program source code is available, report the magnitude of effect of the SEU on a floating-point value or other program variable.
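A minimal illustration of probabilistic SEU injection into floating-point memory, as a toy stand-in for what BITFLIPS does inside Valgrind; the per-element fault probability mirrors the fault-probability mode described above:

```python
import numpy as np

rng = np.random.default_rng(42)

def inject_seus(a, p):
    """Flip one random bit, with probability p per element, in a float64 array."""
    bits = a.view(np.uint64)                  # reinterpret the floats as raw bits
    hit = rng.random(a.shape) < p             # elements that take an upset
    which = rng.integers(0, 64, size=a.shape).astype(np.uint64)
    bits[hit] ^= np.uint64(1) << which[hit]   # XOR flips the chosen bit in place
    return int(hit.sum())

x = np.ones(1000)
n = inject_seus(x, p=0.01)
print(n, "upsets; corrupted values:", x[x != 1.0])
```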
Life prediction and constitutive models for engine hot section anisotropic materials program
NASA Technical Reports Server (NTRS)
Nissley, D. M.; Meyer, T. G.
1992-01-01
This report presents the results from a 35 month period of a program designed to develop generic constitutive and life prediction approaches and models for nickel-based single crystal gas turbine airfoils. The program is composed of a base program and an optional program. The base program addresses the high temperature coated single crystal regime above the airfoil root platform. The optional program investigates the low temperature uncoated single crystal regime below the airfoil root platform including the notched conditions of the airfoil attachment. Both base and option programs involve experimental and analytical efforts. Results from uniaxial constitutive and fatigue life experiments of coated and uncoated PWA 1480 single crystal material form the basis for the analytical modeling effort. Four single crystal primary orientations were used in the experiments: (001), (011), (111), and (213). Specific secondary orientations were also selected for the notched experiments in the optional program. Constitutive models for an overlay coating and PWA 1480 single crystal material were developed based on isothermal hysteresis loop data and verified using thermomechanical (TMF) hysteresis loop data. A fatigue life approach and life models were selected for TMF crack initiation of coated PWA 1480. An initial life model used to correlate smooth and notched fatigue data obtained in the option program shows promise. Computer software incorporating the overlay coating and PWA 1480 constitutive models was developed.
PLATO--AN AUTOMATED TEACHING DEVICE.
ERIC Educational Resources Information Center
BITZER, D.; AND OTHERS
PLATO (PROGRAMED LOGIC FOR AUTOMATIC TEACHING OPERATION) IS A DEVICE FOR TEACHING A NUMBER OF STUDENTS INDIVIDUALLY BY MEANS OF A SINGLE, CENTRAL PURPOSE, DIGITAL COMPUTER. THE GENERAL ORGANIZATION OF EQUIPMENT CONSISTS OF A KEYSET FOR STUDENT RESPONSES, THE COMPUTER, STORAGE DEVICE (ELECTRIC BLACKBOARD), SLIDE SELECTOR (ELECTRICAL BOOK), AND TV…
Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCoy, Michel; Archer, Bill; Matzen, M. Keith
2014-09-16
The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counternuclear terrorism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hohn, Michael; Adams, Paul
2006-09-05
The L3 system is a computational steering environment for image processing and scientific computing. It consists of an interactive graphical language and interface. Its purpose is to help advanced users control their computational software and to assist in the management of data accumulated during numerical experiments. L3 provides a combination of features not found in other environments; these are:
- textual and graphical construction of programs
- persistence of programs and associated data
- direct mapping between the scripts, the parameters, and the produced data
- implicit hierarchical data organization
- full programmability, including conditionals and functions
- incremental execution of programs
The software includes the l3 language and the graphical environment. The language is a single-assignment functional language; the implementation consists of a lexer, parser, interpreter, storage handler, and editing support. The graphical environment is an event-driven nested list viewer/editor providing graphical elements corresponding to the language. These elements are both the representation of a user's program and active interfaces to the values computed by that program.
Eyewitness to history: Landmarks in the development of computerized electrocardiography.
Rautaharju, Pentti M
2016-01-01
The use of digital computers for ECG processing was pioneered in the early 1960s by two immigrants to the US, Hubert Pipberger, who initiated a collaborative VA project to collect an ECG-independent Frank lead data base, and Cesar Caceres at NIH who selected for his ECAN program standard 12-lead ECGs processed as single leads. Ray Bonner in the early 1970s placed his IBM 5880 program in a cart to print ECGs with interpretation, and computer-ECG programs were developed by Telemed, Marquette, HP-Philips and Mortara. The "Common Standards for quantitative Electrocardiography (CSE)" directed by Jos Willems evaluated nine ECG programs and eight cardiologists in clinically-defined categories. The total accuracy by a representative "average" cardiologist (75.5%) was 5.8% higher than that of the average program (69.7, p<0.001). Future comparisons of computer-based and expert reader performance are likely to show evolving results with continuing improvement of computer-ECG algorithms and changing expertise of ECG interpreters. Copyright © 2016 Elsevier Inc. All rights reserved.
Discrete sensitivity derivatives of the Navier-Stokes equations with a parallel Krylov solver
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Taylor, Arthur C., III
1994-01-01
This paper solves an 'incremental' form of the sensitivity equations derived by differentiating the discretized thin-layer Navier Stokes equations with respect to certain design variables of interest. The equations are solved with a parallel, preconditioned Generalized Minimal RESidual (GMRES) solver on a distributed-memory architecture. The 'serial' sensitivity analysis code is parallelized by using the Single Program Multiple Data (SPMD) programming model, domain decomposition techniques, and message-passing tools. Sensitivity derivatives are computed for low and high Reynolds number flows over a NACA 1406 airfoil on a 32-processor Intel Hypercube, and found to be identical to those computed on a single-processor Cray Y-MP. It is estimated that the parallel sensitivity analysis code has to be run on 40-50 processors of the Intel Hypercube in order to match the single-processor processing time of a Cray Y-MP.
Programs for Testing Processor-in-Memory Computing Systems
NASA Technical Reports Server (NTRS)
Katz, Daniel S.
2006-01-01
The Multithreaded Microbenchmarks for Processor-In-Memory (PIM) Compilers, Simulators, and Hardware are computer programs arranged in a series for use in testing the performances of PIM computing systems, including compilers, simulators, and hardware. The programs at the beginning of the series test basic functionality; the programs at subsequent positions in the series test increasingly complex functionality. The programs are intended to be used while designing a PIM system, and can be used to verify that compilers, simulators, and hardware work correctly. The programs can also be used to enable designers of these system components to examine tradeoffs in implementation. Finally, these programs can be run on non-PIM hardware (either single-threaded or multithreaded) using the POSIX pthreads standard to verify that the benchmarks themselves operate correctly. [POSIX (Portable Operating System Interface for UNIX) is a set of standards that define how programs and operating systems interact with each other. pthreads is a library of pre-emptive thread routines that comply with one of the POSIX standards.]
SYSTID - A flexible tool for the analysis of communication systems.
NASA Technical Reports Server (NTRS)
Dawson, C. T.; Tranter, W. H.
1972-01-01
Description of the System Time Domain Simulation (SYSTID) computer-aided analysis program which is specifically structured for communication systems analysis. The SYSTID program is user oriented so that very little knowledge of computer techniques and very little programming ability are required for proper application. The program is designed so that the user can go from a system block diagram to an accurate simulation by simply programming a single English language statement for each block in the system. The mathematical and functional models available in the SYSTID library are presented. An example problem is given which illustrates the ease of modeling communication systems. Examples of the outputs available are presented, and proposed improvements are summarized.
MEKS: A program for computation of inclusive jet cross sections at hadron colliders
NASA Astrophysics Data System (ADS)
Gao, Jun; Liang, Zhihua; Soper, Davison E.; Lai, Hung-Liang; Nadolsky, Pavel M.; Yuan, C.-P.
2013-06-01
EKS is a numerical program that predicts differential cross sections for production of single-inclusive hadronic jets and jet pairs at next-to-leading order (NLO) accuracy in a perturbative QCD calculation. We describe MEKS 1.0, an upgraded EKS program with increased numerical precision, suitable for comparisons to the latest experimental data from the Large Hadron Collider and Tevatron. The program integrates the regularized parton-level matrix elements over the kinematical phase space for production of two and three partons using the VEGAS algorithm. It stores the generated weighted events in finely binned two-dimensional histograms for fast offline analysis. A user interface allows one to customize computation of inclusive jet observables. Results of a benchmark comparison of the MEKS program and the commonly used FastNLO program are also documented.
Program summary
Program title: MEKS 1.0
Catalogue identifier: AEOX_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOX_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland.
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 9234
No. of bytes in distributed program, including test data, etc.: 51997
Distribution format: tar.gz
Programming language: Fortran (main program), C (CUBA library and analysis program).
Computer: All.
Operating system: Any UNIX-like system.
RAM: ~300 MB
Classification: 11.1.
External routines: LHAPDF (https://lhapdf.hepforge.org/)
Nature of problem: Computation of differential cross sections for inclusive production of single hadronic jets and jet pairs at next-to-leading order accuracy in perturbative quantum chromodynamics.
Solution method: Upon subtraction of infrared singularities, the hard-scattering matrix elements are integrated over the available phase space using an optimized VEGAS algorithm. Weighted events are generated and filled into a finely binned two-dimensional histogram, from which the final cross sections with typical experimental binning and cuts are computed by an independent analysis program. Monte Carlo sampling of event weights is tuned automatically to improve efficiency.
Running time: Depends on details of the calculation and the sought numerical accuracy. See benchmark performance in Section 4. The tests provided take approximately 27 min for the jetbin run and a few seconds for jetana.
Computer-aided design of antenna structures and components
NASA Technical Reports Server (NTRS)
Levy, R.
1976-01-01
This paper discusses computer-aided design procedures for antenna reflector structures and related components. The primary design aid is a computer program that establishes cross sectional sizes of the structural members by an optimality criterion. Alternative types of deflection-dependent objectives can be selected for designs subject to constraints on structure weight. The computer program has a special-purpose formulation to design structures of the type frequently used for antenna construction. These structures, in common with many in other areas of application, are represented by analytical models that employ only the three translational degrees of freedom at each node. The special-purpose construction of the program, however, permits coding and data management simplifications that provide advantages in problem size and execution speed. Size and speed are essentially governed by the requirements of structural analysis and are relatively unaffected by the added requirements of design. Computation times to execute several design/analysis cycles are comparable to the times required by general-purpose programs for a single analysis cycle. Examples in the paper illustrate effective design improvement for structures with several thousand degrees of freedom and within reasonable computing times.
Sum and mean. Standard programs for activation analysis.
Lindstrom, R M
1994-01-01
Two computer programs in use for over a decade in the Nuclear Methods Group at NIST illustrate the utility of standard software: programs widely available and widely used, in which (ideally) well-tested public algorithms produce results that are well understood, and thereby capable of comparison, within the community of users. Sum interactively computes the position, net area, and uncertainty of the area of spectral peaks, and can give better results than automatic peak search programs when peaks are very small, very large, or unusually shaped. Mean combines unequal measurements of a single quantity, tests for consistency, and obtains the weighted mean and six measures of its uncertainty.
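The core of Mean is standard inverse-variance combination with a consistency test; a minimal sketch showing two of the program's uncertainty measures (the "internal" and "external" errors), under the usual assumption of independent Gaussian errors:

```python
import numpy as np

def weighted_mean(x, sigma):
    """Inverse-variance weighted mean with a chi-square consistency check."""
    x, sigma = np.asarray(x, float), np.asarray(sigma, float)
    w = 1.0 / sigma**2
    mean = np.sum(w * x) / np.sum(w)
    u_int = np.sqrt(1.0 / np.sum(w))              # internal (propagated) uncertainty
    chi2_nu = np.sum(w * (x - mean) ** 2) / (len(x) - 1)
    u_ext = u_int * np.sqrt(chi2_nu)              # external (scatter-based) uncertainty
    return mean, u_int, u_ext, chi2_nu            # chi2_nu ~ 1 for consistent data

print(weighted_mean([10.1, 9.8, 10.4], [0.2, 0.2, 0.3]))
```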
Mobini, Sirous; Mackintosh, Bundy; Illingworth, Jo; Gega, Lina; Langdon, Peter; Hoppitt, Laura
2014-06-01
This study examines the effects of a single session of Cognitive Bias Modification to induce positive Interpretative bias (CBM-I) using standard or explicit instructions and an analogue of a computer-administered CBT (c-CBT) program on modifying cognitive biases and social anxiety. A sample of 76 volunteers with social anxiety attended a research site. At both pre- and post-test, participants completed two computer-administered tests of interpretative and attentional biases and a self-report measure of social anxiety. Participants in the training conditions completed a single session of either standard or explicit CBM-I positive training and a c-CBT program. Participants in the Control (no training) condition completed a CBM-I neutral task that matched the active CBM-I intervention in format and duration but did not encourage positive disambiguation of socially ambiguous or threatening scenarios. Participants in both CBM-I programs (either standard or explicit instructions) and the c-CBT condition exhibited more positive interpretations of ambiguous social scenarios at post-test and one-week follow-up as compared to the Control condition. Moreover, the results showed that CBM-I and c-CBT, to some extent, changed negative attention biases in a positive direction. Furthermore, the results showed that both CBM-I training conditions and c-CBT reduced social anxiety symptoms at one-week follow-up. This study used a single session of CBM-I training; however, a multi-session intervention might result in more durable positive CBM-I changes. A computerised single session of CBM-I and an analogue of a c-CBT program reduced negative interpretative biases and social anxiety. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
von Arnim, Albrecht G.; Missra, Anamika
2017-01-01
Leading voices in the biological sciences have called for a transformation in graduate education leading to the PhD degree. One area commonly singled out for growth and innovation is cross-training in computational science. In 1998, the University of Tennessee (UT) founded an intercollegiate graduate program called the UT-ORNL Graduate School of Genome Science and Technology in partnership with the nearby Oak Ridge National Laboratory. Here, we report outcome data that attest to the program’s effectiveness in graduating computationally enabled biologists for diverse careers. Among 77 PhD graduates since 2003, the majority came with traditional degrees in the biological sciences, yet two-thirds moved into computational or hybrid (computational–experimental) positions. We describe the curriculum of the program and how it has changed. We also summarize how the program seeks to establish cohesion between computational and experimental biologists. This type of program can respond flexibly and dynamically to unmet training needs. In conclusion, this study from a flagship, state-supported university may serve as a reference point for creating a stable, degree-granting, interdepartmental graduate program in computational biology and allied areas. PMID:29167223
Fast single-pass alignment and variant calling using sequencing data
USDA-ARS?s Scientific Manuscript database
Sequencing research requires efficient computation. Few programs use already known information about DNA variants when aligning sequence data to the reference map. New program findmap.f90 reads the previous variant list before aligning sequence, calling variant alleles, and summing the allele counts...
Implementing Multidisciplinary and Multi-Zonal Applications Using MPI
NASA Technical Reports Server (NTRS)
Fineberg, Samuel A.
1995-01-01
Multidisciplinary and multi-zonal applications are an important class of applications in the area of Computational Aerosciences. In these codes, two or more distinct parallel programs or copies of a single program are utilized to model a single problem. To support such applications, it is common to use a programming model where a program is divided into several single program multiple data stream (SPMD) applications, each of which solves the equations for a single physical discipline or grid zone. These SPMD applications are then bound together to form a single multidisciplinary or multi-zonal program in which the constituent parts communicate via point-to-point message passing routines. Unfortunately, simple message passing models, like Intel's NX library, only allow point-to-point and global communication within a single system-defined partition. This makes implementation of these applications quite difficult, if not impossible. In this report it is shown that the new Message Passing Interface (MPI) standard is a viable portable library for implementing the message passing portion of multidisciplinary applications. Further, with the extension of a portable loader, fully portable multidisciplinary application programs can be developed. Finally, the performance of MPI is compared to that of some native message passing libraries. This comparison shows that MPI can be implemented to deliver performance commensurate with native message libraries.
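A minimal mpi4py sketch of the binding pattern described: one launch split into two SPMD sub-applications, each computing on its own communicator, with leaders exchanging interface data point-to-point. The discipline split and message contents are illustrative assumptions (the report predates mpi4py; MPI's communicator semantics are the same):

```python
from mpi4py import MPI

world = MPI.COMM_WORLD
rank, size = world.Get_rank(), world.Get_size()   # run with an even number of ranks

# Split the single launch into two SPMD applications (disciplines 0 and 1)
color = 0 if rank < size // 2 else 1
disc = world.Split(color=color, key=rank)         # per-discipline communicator

# ... each discipline runs its own solver using `disc` ...
local = disc.allreduce(rank, op=MPI.SUM)

# Leaders (rank 0 within each discipline) exchange interface data
if disc.Get_rank() == 0:
    peer = size // 2 if color == 0 else 0         # world rank of the other leader
    other = world.sendrecv(local, dest=peer, sendtag=7, source=peer, recvtag=7)
    print(f"discipline {color}: ours={local}, theirs={other}")
```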
NASA Technical Reports Server (NTRS)
Coen, Peter G.
1991-01-01
A new computer technique for the analysis of transport aircraft sonic boom signature characteristics was developed. This new technique, based on linear theory methods, combines the previously separate equivalent area and F function development with a signature propagation method using a single geometry description. The new technique was implemented in a stand-alone computer program and was incorporated into an aircraft performance analysis program. Through these implementations, both configuration designers and performance analysts are given new capabilities to rapidly analyze an aircraft's sonic boom characteristics throughout the flight envelope.
for the game. Subsequent duels, flown with single armed escorts, calculated reduction in losses and damage states. For the study, hybrid computer... (6) a duel between a ground weapon, armed escort, and formation of lift aircraft. (Author)
Multiprocessor programming environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, M.B.; Fornaro, R.
Programming tools and techniques have been well developed for traditional uniprocessor computer systems. The focus of this research project is on the development of a programming environment for a high-speed, real-time, heterogeneous multiprocessor system, with special emphasis on languages and compilers. The new tools and techniques will allow a smooth transition for programmers with experience only on single-processor systems.
Torak, L.J.
1993-01-01
A MODular Finite-Element, digital-computer program (MODFE) was developed to simulate steady- or unsteady-state, two-dimensional or axisymmetric ground-water flow. The modular structure of MODFE places the computationally independent tasks that are performed routinely by digital-computer programs simulating ground-water flow into separate subroutines, which are executed from the main program by control statements. Each subroutine consists of complete sets of computations, or modules, which are identified by comment statements, and can be modified by the user without affecting unrelated computations elsewhere in the program. Simulation capabilities can be added or modified by either adding or modifying subroutines that perform specific computational tasks, and the modular-program structure allows the user to create versions of MODFE that contain only the simulation capabilities that pertain to the ground-water problem of interest. MODFE is written in a Fortran programming language that makes it virtually device independent and compatible with desktop personal computers and large mainframes. MODFE uses computer storage and execution time efficiently by taking advantage of symmetry and sparseness within the coefficient matrices of the finite-element equations. Parts of the matrix coefficients are computed and stored as single-subscripted variables, which are assembled into a complete coefficient just prior to solution. Computer storage is reused during simulation to decrease storage requirements. Descriptions of subroutines that execute the computational steps of the modular-program structure are given in tables that cross reference the subroutines with particular versions of MODFE. Programming details of linear and nonlinear hydrologic terms are provided. Structure diagrams for the main programs show the order in which subroutines are executed for each version and illustrate some of the linear and nonlinear versions of MODFE that are possible. Computational aspects of changing stresses and boundary conditions with time and of mass-balance and error terms are given for each hydrologic feature. Program variables are listed and defined according to their occurrence in the main programs and in subroutines. Listings of the main programs and subroutines are given.
Abstract quantum computing machines and quantum computational logics
NASA Astrophysics Data System (ADS)
Chiara, Maria Luisa Dalla; Giuntini, Roberto; Sergioli, Giuseppe; Leporini, Roberto
2016-06-01
Classical and quantum parallelism are deeply different, although it is sometimes claimed that quantum Turing machines are nothing but special examples of classical probabilistic machines. We introduce the concepts of deterministic state machine, classical probabilistic state machine and quantum state machine. On this basis, we discuss the question: To what extent can quantum state machines be simulated by classical probabilistic state machines? Each state machine is devoted to a single task determined by its program. Real computers, however, behave differently, being able to solve different kinds of problems. This capacity can be modeled, in the quantum case, by the mathematical notion of abstract quantum computing machine, whose different programs determine different quantum state machines. The computations of abstract quantum computing machines can be linguistically described by the formulas of a particular form of quantum logic, termed quantum computational logic.
NASA Technical Reports Server (NTRS)
Sackett, L. L.; Edelbaum, T. N.; Malchow, H. L.
1974-01-01
This manual is a guide for using a computer program which calculates time-optimal trajectories for high- and low-thrust geocentric transfers. Either SEP or NEP may be assumed, and a one- or two-impulse, fixed total delta-V, initial high-thrust phase may be included. Also, a single impulse of specified delta-V may be included after the low-thrust phase. The low-thrust phase utilizes equinoctial orbital elements to avoid the classical singularities and Krylov-Bogoliubov averaging to help ensure more rapid computation time. The program is written in FORTRAN 4 in double precision for use on an IBM 360 computer. The manual includes a description of the problem treated, input/output information, examples of runs, and source code listings.
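The equinoctial elements mentioned here replace the eccentricity and angle variables that are undefined for circular or equatorial orbits; a standard textbook conversion from classical elements (not the manual's FORTRAN 4 code):

```python
import math

def classical_to_equinoctial(a, e, i, raan, argp, nu=0.0):
    """Classical elements -> direct equinoctial elements (angles in radians).

    Nonsingular for e = 0 and i = 0; the retrograde case i = pi remains singular.
    """
    h = e * math.sin(argp + raan)
    k = e * math.cos(argp + raan)
    p = math.tan(i / 2.0) * math.sin(raan)
    q = math.tan(i / 2.0) * math.cos(raan)
    L = raan + argp + nu                  # true longitude
    return a, h, k, p, q, L

# Circular equatorial orbit: no undefined angles, no NaNs
print(classical_to_equinoctial(7000.0, 0.0, 0.0, 0.0, 0.0))
```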
TLIFE: a Program for Spur, Helical and Spiral Bevel Transmission Life and Reliability Modeling
NASA Technical Reports Server (NTRS)
Savage, M.; Prasanna, M. G.; Rubadeux, K. L.
1994-01-01
This report describes a computer program, 'TLIFE', which models the service life of a transmission. The program is written in ANSI standard Fortran 77 and has an executable size of about 157 K bytes for use on a personal computer running DOS. It can also be compiled and executed in UNIX. The computer program can analyze any one of eleven unit transmissions either singly or in a series combination of up to twenty-five unit transmissions. Metric or English unit calculations are performed with the same routines using consistent input data and a units flag. Primary outputs are the dynamic capacity of the transmission and the mean lives of the transmission and of the sum of its components. The program uses a modular approach to separate the load analyses from the system life calculations. The program and its input and output data files are described herein. Three examples illustrate its use. A development of the theory behind the analysis in the program is included after the examples.
Hand-held computer operating system program for collection of resident experience data.
Malan, T K; Haffner, W H; Armstrong, A Y; Satin, A J
2000-11-01
To describe a system for recording resident experience involving hand-held computers with the Palm Operating System (3Com, Inc., Santa Clara, CA). Hand-held personal computers (PCs) are popular, easy to use, inexpensive, portable, and can share data with other operating systems. Residents in our program carry individual hand-held database computers to record Residency Review Committee (RRC) reportable patient encounters. Each resident's data are transferred to a single central relational database compatible with Microsoft Access (Microsoft Corporation, Redmond, WA). Patient data entry and subsequent transfer to a central database are accomplished with commercially available software that requires minimal computer expertise to implement and maintain. The central database can then be used for statistical analysis or to create required RRC resident experience reports. As a result, the data collection and transfer process takes less time for residents and program director alike than paper-based or central computer-based systems. The system of collecting resident encounter data using hand-held computers with the Palm Operating System is easy to use, relatively inexpensive, accurate, and secure. The user-friendly system provides prompt, complete, and accurate data, enhancing the education of residents while facilitating the job of the program director.
A computer program for two-particle intrinsic coefficients of fractional parentage
NASA Astrophysics Data System (ADS)
Deveikis, A.
2012-06-01
A Fortran 90 program CESOS for the calculation of the two-particle intrinsic coefficients of fractional parentage for several j-shells with isospin and an arbitrary number of oscillator quanta (CESOs) is presented. The implemented procedure for CESOs calculation consistently follows the principles of antisymmetry and translational invariance. The approach is based on a simple enumeration scheme for antisymmetric many-particle states, efficient algorithms for calculation of the coefficients of fractional parentage for j-shells with isospin, and construction of the subspace of the center-of-mass Hamiltonian eigenvectors corresponding to the minimal eigenvalue equal to 3/2 (in ℏω). The program provides fast calculation of CESOs for a given particle number and produces results possessing small numerical uncertainties. The introduced CESOs may be used for calculation of expectation values of two-particle nuclear shell-model operators within the isospin formalism.
Program summary
Program title: CESOS
Catalogue identifier: AELT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 10 932
No. of bytes in distributed program, including test data, etc.: 61 023
Distribution format: tar.gz
Programming language: Fortran 90
Computer: Any computer with a Fortran 90 compiler
Operating system: Windows XP, Linux
RAM: The memory demand depends on the number of particles A and the excitation energy of the system E. Computation of the A=6 particle system with the total angular momentum J=0 and the total isospin T=1 requires around 4 kB of RAM at E=0, about 3 MB at E=3, and about 172 MB at E=5.
Classification: 17.18
Nature of problem: The code CESOS generates a list of two-particle intrinsic coefficients of fractional parentage for several j-shells with isospin.
Solution method: The method is based on the observation that CESOs may be obtained by diagonalizing the center-of-mass Hamiltonian in the basis set of antisymmetric A-particle oscillator functions with singled-out dependence on the Jacobi coordinates of the two last particles, and choosing the subspace of its eigenvectors corresponding to the minimal eigenvalue equal to 3/2.
Restrictions: One run of the code CESOS generates CESOs for one specified set of (A,E,J,T) values only. The restrictions on the (A,E,J,T) values are completely determined by the restrictions on the computation of the single-shell CFPs and two-particle multishell CFPs (GCFPs) [1]. The full sets of single-shell CFPs may be calculated up to the j=9/2 shell (for any particular shell of the configuration); a shell with j⩾11/2 cannot get full (an implementation constraint). The calculation of GCFPs is limited to A<86 when E=0 (due to memory constraints); small numbers of particles allow significantly higher excitations. Any allowed values of J and T may be chosen for the specified values of A and E. The complete list of allowed values of J and T for the chosen values of A and E may be generated by the GCFP program (CPC Program Library, Catalogue Id. AEBI_v1_0). The actual scale of the CESOs computation problem depends strongly on the magnitude of the A and E values. Although there are no limitations on the A and E values (within the limits of the single-shell and multishell CFP calculations), generation of the corresponding list of CESOs is subject to available computing resources. For example, the computation of CESOs for A=6, (J,T)=(1,0) at E=5 took around 14 hours; the system with A=11, (J,T)=(1/2,3/2) at E=2 requires around 15 hours. These computations were performed on a Pentium 3 GHz PC with 1 GB RAM [2].
Unusual features: It is possible to test the computed CESOs without saving them to a file. This allows the user to learn their number and approximate computation time and to evaluate the accuracy of the calculations.
Additional comments: The program CESOS uses code from the GCFP program for calculation of the two-particle multishell coefficients of fractional parentage.
Running time: It depends on the size of the problem. The A=6 particle system with (J,T)=(0,1) took around 31 seconds on a Pentium 3 GHz PC with 1 GB RAM at E=3 and about 2.6 hours at E=5.
Life prediction and constitutive models for engine hot section anisotropic materials program
NASA Technical Reports Server (NTRS)
Nissley, D. M.; Meyer, T. G.; Walker, K. P.
1992-01-01
This report presents a summary of results from a 7-year program designed to develop generic constitutive and life prediction approaches and models for nickel-based single crystal gas turbine airfoils. The program was composed of a base program and an optional program. The base program addressed the high temperature coated single crystal regime above the airfoil root platform. The optional program investigated the low temperature uncoated single crystal regime below the airfoil root platform, including the notched conditions of the airfoil attachment. Both base and option programs involved experimental and analytical efforts. Results from uniaxial constitutive and fatigue life experiments of coated and uncoated PWA 1480 single crystal material formed the basis for the analytical modeling effort. Four single crystal primary orientations were used in the experiments: ⟨001⟩, ⟨011⟩, ⟨111⟩, and ⟨213⟩. Specific secondary orientations were also selected for the notched experiments in the optional program. Constitutive models for an overlay coating and PWA 1480 single crystal materials were developed based on isothermal hysteresis loop data and verified using thermomechanical fatigue (TMF) hysteresis loop data. A fatigue life approach and life models were developed for TMF crack initiation of coated PWA 1480. A life model was developed for smooth and notched fatigue in the option program. Finally, computer software incorporating the overlay coating and PWA 1480 constitutive and life models was developed.
Building Software Development Capacity to Advance the State of Educational Technology
ERIC Educational Resources Information Center
Luterbach, Kenneth J.
2013-01-01
Educational technologists may advance the state of the field by increasing capacity to develop software tools and instructional applications. Presently, few academic programs in educational technology require even a single computer programming course. Further, the educational technologists who develop software generally work independently or in…
Structural behavior of composites with progressive fracture
NASA Technical Reports Server (NTRS)
Minnetyan, L.; Murthy, P. L. N.; Chamis, C. C.
1989-01-01
The objective of the study is to unify several computational tools developed for the prediction of progressive damage and fracture with efforts for the prediction of the overall response of damaged composite structures. In particular, a computational finite element model for the damaged structure is developed using a computer program as a byproduct of the analysis of progressive damage and fracture. Thus, a single computational investigation can predict progressive fracture and the resulting variation in structural properties of angleplied composites.
Computer-Aided Engineering Tools | Water Power | NREL
energy converters that will provide a full range of simulation capabilities for single devices and arrays. Simulation of water power technologies on high-performance computers enables the study of complex systems and experimentation. Such simulation is critical to accelerating progress in energy programs within the U.S. Department
Combining-Ability Determinations for Incomplete Mating Designs
E.B. Snyder
1975-01-01
It is shown how general combining ability values (GCA's) from cross-, open-, and self-pollinated progeny can be derived in a single analysis. Breeding values are employed to facilitate explaining genetic models of the expected family means and the derivation of the GCA's. A FORTRAN computer program also includes computation of specific combining ability...
NASA Technical Reports Server (NTRS)
Liew, K. H.; Urip, E.; Yang, S. L.; Marek, C. J.
2004-01-01
Droplet interaction with a high-temperature gaseous crossflow is important because of its wide application in systems involving two-phase mixing, such as combustion requiring quick mixing of fuel and air with the reduction of pollutants, and jet mixing in the dilution zone of combustors. The focus of this work is therefore to investigate the dispersion of a two-dimensional atomized and evaporating spray jet into a two-dimensional crossflow. An interactive Microsoft Excel program for tracking a single droplet in crossflow, developed previously, is modified to include droplet evaporation computation. In addition to the high velocity airflow, the injected droplets are also subjected to combustor temperature and pressure, which affect their motion in the flow field. Six ordinary differential equations are then solved by the fourth-order Runge-Kutta method using Microsoft Excel software. Microsoft Visual Basic programming and Microsoft Excel macro code are used to produce the data and plot graphs describing the droplet's motion in the flow field. This program computes and plots the data sequentially without forcing the user to open other types of plotting programs. A user's manual on how to use the program is included.
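The numerical core described here, a set of ODEs advanced by the fourth-order Runge-Kutta method, is easy to sketch outside Excel. Below is a minimal Python illustration with an invented four-state droplet model (position and velocity with Stokes-type drag toward the gas velocity); the actual program carries six equations, including evaporation variables, and every parameter value here is an assumption.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def droplet_rhs(t, y, u_gas=(50.0, 0.0), tau=5e-3):
    """Toy droplet state [x, z, u, w]: position plus velocity relaxing
    toward the gas crossflow velocity on a response time tau."""
    x, z, u, w = y
    ug, wg = u_gas
    return np.array([u, w, (ug - u) / tau, (wg - w) / tau])

y = np.array([0.0, 0.0, 0.0, 10.0])   # injected upward into the crossflow
t, h = 0.0, 1e-4
for _ in range(500):
    y = rk4_step(droplet_rhs, t, y, h)
    t += h
print("droplet position after %.3f s: x=%.4f m, z=%.4f m" % (t, y[0], y[1]))
```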
Nishizawa, Hiroaki; Nishimura, Yoshifumi; Kobayashi, Masato; Irle, Stephan; Nakai, Hiromi
2016-08-05
The linear-scaling divide-and-conquer (DC) quantum chemical methodology is applied to the density-functional tight-binding (DFTB) theory to develop a massively parallel program that achieves on-the-fly molecular reaction dynamics simulations of huge systems from scratch. Functions to perform large scale geometry optimization and molecular dynamics on the DC-DFTB potential energy surface are implemented in the program, called DC-DFTB-K. A novel interpolation-based algorithm is developed for parallelizing the determination of the Fermi level in the DC method. The performance of the DC-DFTB-K program is assessed using a laboratory computer and the K computer. Numerical tests show the high efficiency of the DC-DFTB-K program: a single-point energy gradient calculation of a one-million-atom system is completed within 60 s using 7290 nodes of the K computer. © 2016 Wiley Periodicals, Inc.
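The Fermi-level determination that the paper parallelizes reduces, in serial form, to a one-dimensional root-finding problem: choose the chemical potential so that the summed Fermi-Dirac occupations reproduce the electron count. The sketch below shows that underlying condition with a standard Brent solver and invented orbital energies; it is not the interpolation-based parallel algorithm of DC-DFTB-K.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import expit

def electron_count(mu, eps, beta=50.0):
    """Closed-shell electron count from Fermi-Dirac occupations at chemical
    potential mu (expit evaluates the logistic function without overflow)."""
    return 2.0 * np.sum(expit(-beta * (eps - mu)))

eps = np.sort(np.random.default_rng(0).normal(size=200))   # stand-in orbital energies
n_elec = 120.0
# The count is monotone in mu, so a bracketing root-finder pins the Fermi level
e_f = brentq(lambda m: electron_count(m, eps) - n_elec, eps.min() - 1, eps.max() + 1)
print("Fermi level:", e_f)
```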
HYDES: A generalized hybrid computer program for studying turbojet or turbofan engine dynamics
NASA Technical Reports Server (NTRS)
Szuch, J. R.
1974-01-01
This report describes HYDES, a hybrid computer program capable of simulating one-spool turbojet, two-spool turbojet, or two-spool turbofan engine dynamics. HYDES is also capable of simulating two- or three-stream turbofans with or without mixing of the exhaust streams. The program is intended to reduce the time required for implementing dynamic engine simulations. HYDES was developed for running on the Lewis Research Center's Electronic Associates (EAI) 690 Hybrid Computing System and satisfies the 16384-word core-size and hybrid-interface limits of that machine. The program could be modified for running on other computing systems. The use of HYDES to simulate a single-spool turbojet and a two-spool, two-stream turbofan engine is demonstrated. The form of the required input data is shown and samples of output listings (teletype) and transient plots (x-y plotter) are provided. HYDES is shown to be capable of performing both steady-state design and off-design analyses and transient analyses.
METLIN-PC: An applications-program package for problems of mathematical programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pshenichnyi, B.N.; Sobolenko, L.A.; Sosnovskii, A.A.
1994-05-01
The METLIN-PC applications-program package (APP) was developed at the V.M. Glushkov Institute of Cybernetics of the Academy of Sciences of Ukraine on IBM PC XT and AT computers. The present version of the package was written in Turbo Pascal and Fortran-77. METLIN-PC is chiefly designed for the solution of smooth problems of mathematical programming and is a further development of the METLIN prototype, which was created earlier on a BESM-6 computer. The principal property of the previous package is retained: the applications modules employ a single approach based on the linearization method of B.N. Pshenichnyi, hence the name "METLIN."
Technology transfer of military space microprocessor developments
NASA Astrophysics Data System (ADS)
Gorden, C.; King, D.; Byington, L.; Lanza, D.
1999-01-01
Over the past 13 years the Air Force Research Laboratory (AFRL) has led the development of microprocessors and computers for USAF space and strategic missile applications. As a result of these Air Force development programs, advanced computer technology is available for use by civil and commercial space customers as well. The Generic VHSIC Spaceborne Computer (GVSC) program began in 1985 at AFRL to fulfill a deficiency in the availability of space-qualified data and control processors. GVSC developed a radiation hardened multi-chip version of the 16-bit, Mil-Std 1750A microprocessor. The follow-on to GVSC, the Advanced Spaceborne Computer Module (ASCM) program, was initiated by AFRL to establish two industrial sources for complete, radiation-hardened 16-bit and 32-bit computers and microelectronic components. Development of the Control Processor Module (CPM), the first of two ASCM contract phases, concluded in 1994 with the availability of two sources for space-qualified, 16-bit Mil-Std-1750A computers, cards, multi-chip modules, and integrated circuits. The second phase of the program, the Advanced Technology Insertion Module (ATIM), was completed in December 1997. ATIM developed two single board computers based on 32-bit reduced instruction set computer (RISC) processors. GVSC, CPM, and ATIM technologies are flying or baselined into the majority of today's DoD, NASA, and commercial satellite systems.
ERIC Educational Resources Information Center
Hsu, Jenq-Muh; Chang, Ting-Wen; Yu, Pao-Ta
2012-01-01
The teaching and learning environment in a traditional classroom typically includes a projection screen, a projector, and a computer within a digital interactive table. Instructors may apply multimedia learning materials using various information communication technologies to increase interaction effects. However, a single screen only displays a…
Predicting the Coupling Properties of Axially-Textured Materials.
Fuentes-Cobas, Luis E; Muñoz-Romero, Alejandro; Montero-Cabrera, María E; Fuentes-Montero, Luis; Fuentes-Montero, María E
2013-10-30
A description of methods and computer programs for the prediction of "coupling properties" in axially-textured polycrystals is presented. Starting data are the single-crystal properties, texture, and stereography. The validity of, and proper protocols for, applying the Voigt, Reuss, and Hill approximations to estimate effective values of coupling properties are analyzed. Working algorithms for predicting the mentioned averages are given. Bunge's symmetrized spherical harmonics expansion of orientation distribution functions, inverse pole figures, and (single- and polycrystal) physical properties is applied in all stages of the proposed methodology. The established mathematical route has been systematized in a working computer program. The discussion of piezoelectricity in a representative textured ferro-piezoelectric ceramic illustrates the application of the proposed methodology. Polycrystal coupling properties predicted by the suggested route are fairly close to experimentally measured ones.
Multi-mesh gear dynamics program evaluation and enhancements
NASA Technical Reports Server (NTRS)
Boyd, L. S.; Pike, J.
1985-01-01
A multiple mesh gear dynamics computer program was continually developed and modified during the last four years. The program can handle epicyclic gear systems as well as single mesh systems with internal, buttress, or helical tooth forms. The following modifications were added under the current funding: variable contact friction, planet cage and ring gear rim flexibility options, user-friendly options, dynamic sidebands, a speed survey option, and the combining of the single and multiple mesh options into one general program. The modified program was evaluated by comparing calculated values to published test data and to test data taken on a Hamilton Standard turboprop reduction gearbox. In general, the correlation between the test data and the analytical data is good.
NASA Technical Reports Server (NTRS)
Cline, M. C.
1981-01-01
A computer program, VNAP2, for calculating turbulent (as well as laminar and inviscid), steady, and unsteady flow is presented. It solves the two-dimensional, time-dependent, compressible Navier-Stokes equations. The turbulence is modeled with either an algebraic mixing-length model, a one-equation model, or the Jones-Launder two-equation model. The geometry may be a single or a dual flowing stream. The interior grid points are computed using the unsplit MacCormack scheme. Two options to speed up the calculations for high Reynolds number flows are included. The boundary grid points are computed using a reference plane characteristic scheme with the viscous terms treated as source functions. An explicit artificial viscosity is included for shock computations. The fluid is assumed to be a perfect gas. The flow boundaries may be arbitrary curved solid walls, inflow/outflow boundaries, or free jet envelopes. Typical problems that can be solved concern nozzles, inlets, jet-powered afterbodies, airfoils, and free jet expansions. The accuracy and efficiency of the program are shown by calculations of several inviscid and turbulent flows. The program and its use are described completely, and six sample cases and a code listing are included.
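The unsplit MacCormack scheme mentioned for the interior points is a predictor-corrector method. A minimal one-dimensional analogue (inviscid Burgers' equation on a periodic domain, with invented parameters) illustrates the forward/backward differencing pattern, though VNAP2 itself applies the scheme to the 2-D Navier-Stokes equations:

```python
import numpy as np

def maccormack_burgers(u, dt, dx, steps):
    """MacCormack predictor-corrector for u_t + (u^2/2)_x = 0 on a
    periodic domain: forward-difference predictor, backward-difference
    corrector, then averaging."""
    for _ in range(steps):
        f = 0.5 * u**2
        up = u - dt / dx * (np.roll(f, -1) - f)          # predictor
        fp = 0.5 * up**2
        u = 0.5 * (u + up - dt / dx * (fp - np.roll(fp, 1)))  # corrector + average
    return u

x = np.linspace(0, 2 * np.pi, 200, endpoint=False)
u0 = 1.0 + 0.5 * np.sin(x)                               # smooth initial wave
u = maccormack_burgers(u0, dt=0.01, dx=x[1] - x[0], steps=100)  # CFL ~ 0.5
```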
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkata, Manjunath Gorentla; Aderholdt, William F
Pre-exascale systems are expected to have a significant amount of hierarchical and heterogeneous on-node memory, and this trend in system architecture is expected to continue into the exascale era. Along with hierarchical-heterogeneous memory, such a system typically has a high-performing network and a compute accelerator. This system architecture is effective not only for running traditional High Performance Computing (HPC) applications (Big-Compute), but also for running data-intensive HPC applications and Big-Data applications. As a consequence, there is a growing desire to have a single system serve the needs of both Big-Compute and Big-Data applications. Though the system architecture supports the convergence of Big-Compute and Big-Data, the programming models and software layer have yet to evolve to support either hierarchical-heterogeneous memory systems or the convergence. This work provides a programming abstraction to address this problem. The programming abstraction is implemented as a software library and runs on pre-exascale and exascale systems supporting current and emerging system architectures. Using distributed data structures as a central concept, it provides (1) a simple, usable, and portable abstraction for hierarchical-heterogeneous memory and (2) a unified programming abstraction for Big-Compute and Big-Data applications.
Automatic computation of the travelling wave solutions to nonlinear PDEs
NASA Astrophysics Data System (ADS)
Liang, Songxin; Jeffrey, David J.
2008-05-01
Various extensions of the tanh-function method and their implementations for finding explicit travelling wave solutions to nonlinear partial differential equations (PDEs) have been reported in the literature. However, some solutions are often missed by these packages. In this paper, a new algorithm and its implementation, called TWS, for solving single nonlinear PDEs are presented. TWS is implemented in Maple 10. It turns out that, for PDEs whose balancing numbers are not positive integers, TWS works much better than existing packages. Furthermore, TWS obtains more solutions than existing packages in most cases.
Program summary
Program title: TWS
Catalogue identifier: AEAM_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAM_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1250
No. of bytes in distributed program, including test data, etc.: 78 101
Distribution format: tar.gz
Programming language: Maple 10
Computer: A laptop with 1.6 GHz Pentium CPU
Operating system: Windows XP Professional
RAM: 760 Mbytes
Classification: 5
Nature of problem: Finding the travelling wave solutions to single nonlinear PDEs.
Solution method: Based on the tanh-function method.
Restrictions: The current version of this package can only deal with single autonomous PDEs or ODEs, not systems of PDEs or ODEs. However, the PDEs can have any finite number of independent space variables in addition to time t.
Unusual features: For PDEs whose balancing numbers are not positive integers, TWS works much better than existing packages. Furthermore, TWS obtains more solutions than existing packages in most cases.
Additional comments: It is easy to use.
Running time: Less than 20 seconds for most cases, between 20 and 100 seconds for some cases, over 100 seconds for a few cases.
References:
[1] E.S. Cheb-Terrab, K. von Bulow, Comput. Phys. Comm. 90 (1995) 102.
[2] S.A. Elwakil, S.K. El-Labany, M.A. Zahran, R. Sabry, Phys. Lett. A 299 (2002) 179.
[3] E. Fan, Phys. Lett. A 277 (2000) 212.
[4] W. Malfliet, Amer. J. Phys. 60 (1992) 650.
[5] W. Malfliet, W. Hereman, Phys. Scripta 54 (1996) 563.
[6] E.J. Parkes, B.R. Duffy, Comput. Phys. Comm. 98 (1996) 288.
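The tanh-function method underlying TWS can be illustrated on Burgers' equation: substitute a finite tanh series, collect powers of tanh, and solve the resulting algebraic system for the coefficients. The SymPy sketch below (Python, not TWS's Maple implementation) shows the mechanics for balancing number 1:

```python
import sympy as sp

x, t, c, nu, a0, a1 = sp.symbols('x t c nu a0 a1')
xi = x - c * t
u = a0 + a1 * sp.tanh(xi)          # tanh ansatz; balancing number 1 for Burgers

# Residual of Burgers' equation u_t + u u_x = nu u_xx under the ansatz;
# derivatives of tanh rewrite as polynomials in tanh automatically
res = sp.diff(u, t) + u * sp.diff(u, x) - nu * sp.diff(u, x, 2)

# Collect the residual as a polynomial in tanh(xi); every coefficient must vanish
T = sp.Symbol('T')
eqs = sp.Poly(sp.expand(res).subs(sp.tanh(xi), T), T).coeffs()
print(sp.solve(eqs, [a0, a1], dict=True))
# -> includes {a0: c, a1: -2*nu}: the kink solution u = c - 2 nu tanh(x - c t)
```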
A computer program to determine the possible daily release window for sky target experiments
NASA Technical Reports Server (NTRS)
Michaud, N. H.
1973-01-01
A computer program is presented which is designed to determine the daily release window for sky target experiments. Factors considered in the program include: (1) target illumination by the sun at release time and during the tracking period; (2) look angle elevation above local horizon from each tracking station to the target; (3) solar depression angle from the local horizon of each tracking station during the experimental period after target release; (4) lunar depression angle from the local horizon of each tracking station during the experimental period after target release; and (5) total sky background brightness as seen from each tracking station while viewing the target. Program output is produced in both graphic and data form. Output data can be plotted for a single calendar month or year. The numerical values used to generate the plots are furnished to permit a more detailed review of the computed daily release windows.
NASA Technical Reports Server (NTRS)
Egolf, T. Alan; Anderson, Olof L.; Edwards, David E.; Landgrebe, Anton J.
1988-01-01
A computer program, the Propeller Nacelle Aerodynamic Performance Prediction Analysis (PANPER), was developed for the prediction and analysis of the performance and airflow of propeller-nacelle configurations operating over a forward speed range inclusive of the high speed flight typical of recent propfan designs. A propeller lifting-line wake program was combined with a compressible, viscous centerbody interaction program, originally developed for diffusers, to compute the propeller-nacelle flow field, blade loading distribution, propeller performance, and the nacelle forebody pressure and viscous drag distributions. The computer analysis is applicable to single and coaxial counterrotating propellers. The blade geometries can include spanwise variations in sweep, droop, taper, thickness, and airfoil section type. In the coaxial mode of operation the analysis can treat both equal and unequal blade numbers and rotational speeds on the two propeller disks. The nacelle portion of the analysis can treat both free air and tunnel wall configurations, including wall bleed. The analysis was applied to many different sets of flight conditions using selected aerodynamic modeling options. The influence of different propeller-nacelle-tunnel wall configurations was studied. Comparisons with available test data for both single and coaxial propeller configurations are presented along with a discussion of the results.
User's manual for EZPLOT version 5.5: A FORTRAN program for 2-dimensional graphic display of data
NASA Technical Reports Server (NTRS)
Garbinski, Charles; Redin, Paul C.; Budd, Gerald D.
1988-01-01
EZPLOT is a computer applications program that converts data resident on a file into a plot displayed on the screen of a graphics terminal. This program generates either time history or x-y plots in response to commands entered interactively from a terminal keyboard. Plot parameters consist of a single independent parameter and from one to eight dependent parameters. Various line patterns, symbol shapes, axis scales, text labels, and data modification techniques are available. This user's manual describes EZPLOT as it is implemented on the Ames Research Center, Dryden Research Facility ELXSI computer using DI-3000 graphics software tools.
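As a rough modern analogue of what EZPLOT does, reading one independent parameter and up to eight dependent parameters from a data file and drawing a time history, consider the sketch below; the file name and column layout are invented for the example.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical whitespace-delimited file: first column is time, the rest
# are dependent channels (EZPLOT's own file format is not reproduced here)
data = np.loadtxt("flight.dat")
time, channels = data[:, 0], data[:, 1:9]      # one independent, up to eight dependent
for i in range(channels.shape[1]):
    plt.plot(time, channels[:, i], label=f"param {i + 1}")
plt.xlabel("time (s)")
plt.legend()
plt.show()
```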
A Simple and Resource-efficient Setup for the Computer-aided Drug Design Laboratory.
Moretti, Loris; Sartori, Luca
2016-10-01
Undertaking modelling investigations for Computer-Aided Drug Design (CADD) requires a proper environment. In principle, this could be done on a single computer, but the reality of a drug discovery program requires robustness and high-throughput computing (HTC) to efficiently support the research. Therefore, a more capable alternative is needed, but its implementation has no widespread solution. Here, the realization of such a computing facility is discussed; from general layout to technical details, all aspects are covered. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
ERIC Educational Resources Information Center
Soddell, J. A.; Seviour, R. J.
1985-01-01
Describes an exercise which uses a computer program (written for Commodore 64 microcomputers) that accepts data obtained from identifying bacteria, calculates similarity coefficients, and performs single linkage cluster analysis. Includes a program for simulating bacterial cultures for students who should not handle pathogenic microorganisms. (JN)
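The analysis pipeline the exercise describes, similarity coefficients followed by single linkage cluster analysis, maps directly onto standard SciPy routines. A sketch with invented binary test results follows; for binary data the Hamming distance is one minus the simple matching coefficient.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Rows: bacterial isolates; columns: binary outcomes of identification tests
tests = np.array([[1, 1, 0, 1, 0],
                  [1, 1, 0, 0, 0],
                  [0, 0, 1, 1, 1],
                  [0, 1, 1, 1, 1]])

dist = pdist(tests, metric="hamming")      # fraction of mismatched tests
tree = linkage(dist, method="single")      # single linkage cluster analysis
print(fcluster(tree, t=0.5, criterion="distance"))  # cluster labels at 50% dissimilarity
```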
Usage of Thin-Client/Server Architecture in Computer Aided Education
ERIC Educational Resources Information Center
Cimen, Caghan; Kavurucu, Yusuf; Aydin, Halit
2014-01-01
With the advances of technology, thin-client/server architecture has become popular in multi-user/single network environments. Thin-client is a user terminal in which the user can login to a domain and run programs by connecting to a remote server. Recent developments in network and hardware technologies (cloud computing, virtualization, etc.)…
Instrumentation and control of harmonic oscillators via a single-board microprocessor-FPGA device.
Picone, Rico A R; Davis, Solomon; Devine, Cameron; Garbini, Joseph L; Sidles, John A
2017-04-01
We report the development of an instrumentation and control system instantiated on a microprocessor-field programmable gate array (FPGA) device for a harmonic oscillator comprising a portion of a magnetic resonance force microscope. The specific advantages of the system are that it minimizes computation, increases maintainability, and reduces the technical barrier required to enter the experimental field of magnetic resonance force microscopy. Heterodyne digital control and measurement yields computational advantages. A single microprocessor-FPGA device improves system maintainability by using a single programming language. The system presented requires significantly less technical expertise to instantiate than the instrumentation of previous systems, yet integrity of performance is retained and demonstrated with experimental data.
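Heterodyne digital measurement of a nearly sinusoidal oscillator signal can be sketched as quadrature mixing followed by low-pass filtering. The toy implementation below (invented sampling parameters and a crude moving-average filter, not the authors' FPGA design) recovers the amplitude and phase of a test tone:

```python
import numpy as np

def heterodyne_demod(signal, fs, f_ref, f_cut=100.0):
    """Mix the signal with quadrature references at f_ref, then low-pass
    to extract the slowly varying amplitude and phase."""
    t = np.arange(len(signal)) / fs
    i = signal * np.cos(2 * np.pi * f_ref * t)
    q = -signal * np.sin(2 * np.pi * f_ref * t)
    n = max(1, int(fs / f_cut))            # moving-average low-pass window
    kernel = np.ones(n) / n
    i, q = np.convolve(i, kernel, "same"), np.convolve(q, kernel, "same")
    return 2 * np.hypot(i, q), np.arctan2(q, i)   # amplitude, phase

fs, f0 = 1e5, 5e3
t = np.arange(0, 0.1, 1 / fs)
sig = 1.5 * np.cos(2 * np.pi * f0 * t + 0.3)
amp, ph = heterodyne_demod(sig, fs, f0)
print(amp[len(amp) // 2], ph[len(ph) // 2])   # ~1.5 and ~0.3 away from the edges
```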
MPIRUN: A Portable Loader for Multidisciplinary and Multi-Zonal Applications
NASA Technical Reports Server (NTRS)
Fineberg, Samuel A.; Woodrow, Thomas S. (Technical Monitor)
1994-01-01
Multidisciplinary and multi-zonal applications are an important class of applications in the area of Computational Aerosciences. In these codes, two or more distinct parallel programs or copies of a single program are utilized to model a single problem. To support such applications, it is common to use a programming model where a program is divided into several single program multiple data stream (SPMD) applications, each of which solves the equations for a single physical discipline or grid zone. These SPMD applications are then bound together to form a single multidisciplinary or multi-zonal program in which the constituent parts communicate via point-to-point message passing routines. One method for implementing the message passing portion of these codes is with the new Message Passing Interface (MPI) standard. Unfortunately, this standard only specifies the message passing portion of an application, but does not specify any portable mechanisms for loading an application. MPIRUN was developed to provide a portable means for loading MPI programs, and was specifically targeted at multidisciplinary and multi-zonal applications. Programs using MPIRUN for loading and MPI for message passing are then portable between all machines supported by MPIRUN. MPIRUN is currently implemented for the Intel iPSC/860, TMC CM5, IBM SP-1 and SP-2, Intel Paragon, and workstation clusters. Further, MPIRUN is designed to be simple enough to port easily to any system supporting MPI.
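The programming model described, several SPMD groups bound into one program and coupled by point-to-point messages, can be sketched with MPI communicator splitting. The mpi4py fragment below assigns ranks to two hypothetical zones by rank range (an assumption; MPIRUN derived the grouping from its loader configuration) and exchanges interface data between group leaders:

```python
# Run with at least two ranks, e.g.: mpiexec -n 4 python zones.py
from mpi4py import MPI

world = MPI.COMM_WORLD
zone = 0 if world.rank < world.size // 2 else 1      # two zones by rank range
zone_comm = world.Split(color=zone, key=world.rank)  # per-zone SPMD communicator

# ... each group runs its own SPMD solver on zone_comm ...

# Leaders exchange interface data through the world communicator
if zone_comm.rank == 0:
    partner = world.size // 2 if zone == 0 else 0    # leader rank of the other zone
    received = world.sendrecv({"boundary": zone}, dest=partner, source=partner)
```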
Omega flight-test data reduction sequence. [computer programs for reduction of navigation data
NASA Technical Reports Server (NTRS)
Lilley, R. W.
1974-01-01
Computer programs for Omega data conversion, summary, and preparation for distribution are presented. Program logic and sample data formats are included, along with operational instructions for each program. Flight data (or data collected in flight format in the laboratory) is provided by the Ohio University Omega receiver base in the form of 6-bit binary words representing the phase of an Omega station with respect to the receiver's local clock. All eight Omega stations are measured in each 10-second Omega time frame. In addition, an event-marker bit and a time-slot D synchronizing bit are recorded. Program FDCON is used to remove data from the flight recorder tape and place it on data-processing cards for later use. Program FDSUM provides for computer plotting of selected LOP's, for single-station phase plots, and for printout of basic signal statistics for each Omega channel. Mean phase and standard deviation are printed, along with data from which a phase distribution can be plotted for each Omega station. Program DACOP simply copies the Omega data deck a controlled number of times, for distribution to users.
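FDSUM's per-channel statistics invite one note: 6-bit phase words wrap around (63 back to 0), so a robust mean and standard deviation should be circular. The sketch below treats the words as angles; whether FDSUM itself used circular statistics is not stated in the abstract.

```python
import numpy as np

def phase_stats(words):
    """Circular mean and standard deviation for 6-bit phase words,
    where 0-63 spans one full cycle, so wrap-around at 63 -> 0 is handled."""
    theta = np.asarray(words) * 2 * np.pi / 64.0
    z = np.exp(1j * theta).mean()                          # mean resultant vector
    mean = (np.angle(z) % (2 * np.pi)) * 64 / (2 * np.pi)
    std = np.sqrt(-2 * np.log(abs(z))) * 64 / (2 * np.pi)  # circular std, in counts
    return mean, std

print(phase_stats([62, 63, 0, 1, 2]))   # mean near 0, not the naive 25.6
```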
GASPRNG: GPU accelerated scalable parallel random number generator library
NASA Astrophysics Data System (ADS)
Gao, Shuang; Peterson, Gregory D.
2013-04-01
Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs), along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to use GASPRNG the same way as SPRNG on traditional serial or parallel computers, as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications.
Program summary
Catalogue identifier: AEOI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: UTK license
No. of lines in distributed program, including test data, etc.: 167900
No. of bytes in distributed program, including test data, etc.: 1422058
Distribution format: tar.gz
Programming language: C and CUDA
Computer: Any PC or workstation with NVIDIA GPU (tested on Fermi GTX480, Tesla C1060, Tesla M2070)
Operating system: Linux with CUDA version 4.0 or later. Should also run on MacOS, Windows, or UNIX.
Has the code been vectorized or parallelized?: Yes. Parallelized using MPI directives.
RAM: 512 MB to 732 MB main memory on host CPU (depending on the data type of random numbers); 512 MB GPU global memory
Classification: 4.13, 6.5
Nature of problem: Many computational science applications are able to consume large numbers of random numbers. For example, Monte Carlo simulations are able to consume limitless random numbers for the computation as long as resources for the computing are supported. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generators of independent streams of random numbers using graphical processing units (GPUs).
Solution method: Multiple copies of random number generators in GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generators library to allow a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs.
Running time: The tests provided take a few minutes to run.
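The service GASPRNG provides, many independent and reproducible pseudorandom streams for parallel workers, can be illustrated with NumPy's SeedSequence spawning. This is only an analogy for the independent-stream concept; it is not the GASPRNG or SPRNG API.

```python
import numpy as np

# Spawn reproducible, statistically independent child streams from one seed,
# e.g. one per GPU or MPI rank
root = np.random.SeedSequence(20130401)
streams = [np.random.default_rng(s) for s in root.spawn(4)]

# Each stream feeds an independent Monte Carlo worker (here: estimating pi)
estimates = []
for rng in streams:
    pts = rng.random((100_000, 2))
    estimates.append(4.0 * np.mean(np.hypot(pts[:, 0], pts[:, 1]) <= 1.0))
print(sum(estimates) / len(estimates))   # close to pi
```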
Program For Engineering Electrical Connections
NASA Technical Reports Server (NTRS)
Billitti, Joseph W.
1990-01-01
DFACS is an interactive, multiuser computer-aided-engineering software tool for system-level electrical integration and cabling engineering. The purpose of the program is to provide the engineering community with a centralized data base for entering and accessing data on the functional definition of a system, details of end-circuit pinouts in systems and subsystems, and data on wiring harnesses. The objective is to provide an instantaneous single point of interchange of information, thus avoiding error-prone, time-consuming, and costly shuttling of data along multiple paths. Designed to operate on a DEC VAX minicomputer or microcomputer using Version 5.0/03 of INGRES.
LFSPMC: Linear feature selection program using the probability of misclassification
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
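The essence of the technique, choosing a single linear combination that minimizes the one-dimensional misclassification probability of the projected normal densities, can be sketched numerically. The illustration below uses two invented classes, grid integration of the Bayes error after projection, and a derivative-free optimizer; the original report develops its own computational procedure.

```python
import numpy as np
from scipy.optimize import minimize

# Two classes with known means, covariances, and priors (illustrative numbers)
mu = [np.array([0.0, 0.0]), np.array([2.0, 1.0])]
cov = [np.array([[1.0, 0.3], [0.3, 1.0]]), np.array([[1.5, -0.2], [-0.2, 0.8]])]
prior = [0.5, 0.5]

def pmc(b):
    """1-D probability of misclassification after projecting on b: the Bayes
    error of the transformed normal densities, integrated on a grid."""
    b = b / np.linalg.norm(b)
    m = [b @ mu_i for mu_i in mu]
    s = [np.sqrt(b @ c @ b) for c in cov]
    x = np.linspace(min(m) - 6 * max(s), max(m) + 6 * max(s), 4001)
    dens = [p / (sd * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x - mi) / sd) ** 2)
            for p, mi, sd in zip(prior, m, s)]
    return np.trapz(np.minimum(*dens), x)   # probability mass assigned wrongly

best = minimize(pmc, x0=np.array([1.0, 0.0]), method="Nelder-Mead")
print(best.x / np.linalg.norm(best.x), best.fun)
```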
Heliocentric interplanetary low thrust trajectory optimization program, supplement 1, part 2
NASA Technical Reports Server (NTRS)
Mann, F. I.; Horsewood, J. L.
1978-01-01
The improvements made to the HILTOP electric propulsion trajectory computer program are described. A more realistic propulsion system model was implemented in which the various thrust subsystem efficiencies and the specific impulse are modeled as variable functions of the power available to the propulsion system. The number of operating thrusters is staged, and the beam voltage is selected from a set of five (or fewer) constant voltages, based upon the application of variational calculus. The constant beam voltages may be optimized individually or collectively. The propulsion system logic is activated by a single program input key in such a manner as to preserve the original HILTOP logic. An analysis describing these features, a complete description of program input quantities, and sample cases of computer output illustrating the program capabilities are presented.
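The flavor of the propulsion model, performance quantities varying with available power, staged thrusters, and a small set of candidate beam voltages, can be conveyed with a toy calculation. Every number below is an assumption (a mercury-like propellant typical of 1970s SEP studies, fixed efficiency, fixed per-thruster power), and HILTOP's actual selection logic is variational, not this brute-force scan.

```python
import numpy as np

G0 = 9.80665
Q_OVER_M = 4.81e5   # C/kg for singly ionized mercury (assumed propellant)

def thrust_and_isp(power_w, beam_v, eff=0.7, p_per_thruster=2500.0):
    """Toy model: exhaust velocity set by beam voltage, thrusters staged
    to absorb the available power (not HILTOP's actual model)."""
    ve = np.sqrt(2.0 * Q_OVER_M * beam_v)          # exhaust velocity, m/s
    n = max(1, int(power_w // p_per_thruster))     # staged thruster count
    p_used = min(power_w, n * p_per_thruster)
    return 2.0 * eff * p_used / ve, ve / G0        # thrust (N), Isp (s)

for v in (750, 1100, 1500, 2000, 3000):            # candidate beam voltages
    f, isp = thrust_and_isp(10e3, v)
    print(f"V={v:5d} V  thrust={f*1e3:7.2f} mN  Isp={isp:7.0f} s")
```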
NASA Technical Reports Server (NTRS)
Bergeron, H. P.; Haynie, A. T.; Mcdede, J. B.
1980-01-01
A general aviation single-pilot instrument flight rule (IFR) simulation capability was developed, and problems experienced by single pilots flying in IFR conditions were investigated. The simulation required a three-dimensional spatial navaid environment of a flight navigational area. A computer simulation of all the navigational aids plus 12 selected airports located in the Washington/Norfolk area was developed. All programmed locations in the list were referenced to a Cartesian coordinate system with the origin located at a specified airport's reference point. All navigational aids with their associated frequencies, call letters, locations, and orientations, plus runways and true headings, are included in the data base. The simulation included a TV-displayed out-the-window visual scene of country and suburban terrain and a scaled model runway complex. Any of the programmed runways, with all its associated navaids, can be referenced to a runway on the airport in this visual scene. This allows simulation of a full mission scenario including breakout and landing.
Best-Fit Conic Approximation of Spacecraft Trajectory
NASA Technical Reports Server (NTRS)
Singh, Gurkipal
2005-01-01
A computer program calculates a best conic fit of a given spacecraft trajectory. Spacecraft trajectories are often propagated as conics onboard. The conic-section parameters resulting from the best conic fit are uplinked to computers aboard the spacecraft for use in updating predictions of the spacecraft trajectory for operational purposes. In the initial application for which this program was written, there is a requirement to fit a single conic section (necessitated by onboard memory constraints), accurate to within 200 microradians, to a sequence of positions measured over a 4.7-hour interval. The present program supplants a prior one that could not cover the interval with fewer than four successive conic sections. The present program is based on formulating the best-fit conic problem as a parameter-optimization problem and solving the problem numerically, on the ground, by use of a modified steepest-descent algorithm. For the purpose of this algorithm, optimization is defined as minimization of the maximum directional propagation error across the fit interval. In the specific initial application, the program generates a single 4.7-hour conic, the directional propagation of which is accurate to within 34 microradians, exceeding the mission constraints by a wide margin.
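The minimax formulation, minimizing the maximum error over the fit interval rather than a sum of squares, generalizes beyond orbits. As a small illustration of the same objective type, the sketch below fits an axis-aligned ellipse to noisy points by minimizing the worst-case residual with a derivative-free method; the data and warm start are invented, and the flight program instead propagates conics and uses a modified steepest-descent algorithm.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 60)
pts = np.c_[3 * np.cos(t) + 0.5, 2 * np.sin(t) - 0.2] + rng.normal(0, 0.01, (60, 2))

def max_err(p):
    """Worst-case residual of the ellipse ((x-x0)/a)^2 + ((y-y0)/b)^2 = 1."""
    x0, y0, a, b = p
    r = ((pts[:, 0] - x0) / a) ** 2 + ((pts[:, 1] - y0) / b) ** 2 - 1.0
    return np.max(np.abs(r))

# Nelder-Mead handles the nonsmooth max; start from a rough guess
fit = minimize(max_err, x0=[0.4, -0.1, 2.5, 1.5], method="Nelder-Mead",
               options={"maxiter": 5000})
print(fit.x, fit.fun)   # parameters near [0.5, -0.2, 3, 2]
```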
Mairesse, Olivier; Hofmans, Joeri; Theuns, Peter
2008-05-01
We propose a free, easy-to-use computer program that does not require prior knowledge of computer programming to generate and run experiments using textual or pictorial stimuli. Although the FM Experiment Builder suite was initially programmed for building and conducting FM experiments, it can also be applied to non-FM experiments that necessitate randomized, single, or multifactorial designs. The program is highly configurable, allowing multilingual use and a wide range of different response formats. The outputs of the experiments are Microsoft Excel compatible .xls files that allow easy copy-paste of the results into Weiss's FM CalSTAT program (2006) or any other statistical package. Its Java-based structure is compatible with both Windows and Macintosh operating systems, and its compactness (< 1 MB) makes it easily distributable over the Internet.
Elastic-plastic finite-element analyses of thermally cycled single-edge wedge specimens
NASA Technical Reports Server (NTRS)
Kaufman, A.
1982-01-01
Elastic-plastic stress-strain analyses were performed for single-edge wedge specimens subjected to thermal cycling in fluidized beds. Three cases (NASA TAZ-8A alloy under one cycling condition and 316 stainless steel alloy under two cycling conditions) were analyzed by using the MARC nonlinear finite-element computer program. Elastic solutions from MARC showed good agreement with previously reported solutions that used the NASTRAN and ISO3DQ computer programs. The NASA TAZ-8A case exhibited no plastic strains, and the elastic and elastic-plastic analyses gave identical results. Elastic-plastic analyses of the 316 stainless steel alloy showed plastic strain reversal with a shift of the mean stresses in the compressive direction. The maximum equivalent total strain ranges for these cases were 13 to 22 percent greater than those calculated from elastic analyses.
Thermodynamic Data to 20,000 K For Monatomic Gases
NASA Technical Reports Server (NTRS)
Gordon, Sanford; McBride, Bonnie J.
1999-01-01
This report contains standard-state thermodynamic functions for 50 gaseous atomic elements plus deuterium and electron gas, 51 singly ionized positive ions, and 36 singly ionized negative ions. The data were generated by the NASA Lewis computer program PAC97, a modified version of PAC91 reported in McBride and Gordon. This report is being published primarily to document part of the data currently being used in several NASA Lewis computer programs. The data are presented in tabular and graphical format and are also represented in the form of least-squares coefficients. The tables give the following data as functions of temperature: heat capacity, enthalpy, entropy, Gibbs energy, enthalpy of formation, and equilibrium constant. A brief discussion and a comparison of calculated results are given for several models for calculating ideal thermodynamic data for monatomic gases.
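The least-squares coefficients mentioned are commonly the NASA nine-constant polynomials with two integration constants; assuming that form (the report itself should be consulted for the official fits), a sketch of evaluating the tabulated functions looks as follows, with invented coefficients:

```python
import numpy as np

def cp_h_s(T, a, b):
    """Evaluate Cp/R, H/RT, and S/R from NASA-Glenn-style nine-constant
    least-squares coefficients a[0..6] plus integration constants b[0..1]
    (functional form assumed here)."""
    T = np.asarray(T, dtype=float)
    cp = a[0]/T**2 + a[1]/T + a[2] + a[3]*T + a[4]*T**2 + a[5]*T**3 + a[6]*T**4
    h = (-a[0]/T**2 + a[1]*np.log(T)/T + a[2] + a[3]*T/2 + a[4]*T**2/3
         + a[5]*T**3/4 + a[6]*T**4/5 + b[0]/T)
    s = (-a[0]/(2*T**2) - a[1]/T + a[2]*np.log(T) + a[3]*T + a[4]*T**2/2
         + a[5]*T**3/3 + a[6]*T**4/4 + b[1])
    return cp, h, s

# For a monatomic ideal gas with no electronic excitation, Cp/R = 5/2 exactly;
# real tables depart from 5/2 as electronic levels populate at high temperature
print(cp_h_s(1000.0, [0, 0, 2.5, 0, 0, 0, 0], [0.0, 0.0])[0])   # -> 2.5
```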
Hybrid Circuit Quantum Electrodynamics: Coupling a Single Silicon Spin Qubit to a Photon
2015-01-01
Final report, Princeton University, January 2015 (contract FA8750-12-2-0296; author: Jason R. Petta). Subject terms: quantum computing, quantum hybrid circuits, quantum electrodynamics, coupling a single silicon spin qubit to a photon.
Development of Theoretical and Computational Methods for Single-Source Bathymetric Data
2016-09-15
(Contract N00014-16-1-2035; grant 11893686.) A method is outlined for fusing the information inherent in such source documents, at different scales, into a single picture for the marine... An estimate of algorithm reliability, which reflects the degree of inconsistency of the source documents, is also provided. A conceptual outline of the method, and a
Miller, Mark P.; Knaus, Brian J.; Mullins, Thomas D.; Haig, Susan M.
2013-01-01
SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (SSRs; for example, microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains three analysis modules along with a fourth control module that can be used to automate analyses of large volumes of data. The modules are used to (1) identify the subset of paired-end sequences that pass quality standards, (2) align paired-end reads into a single composite DNA sequence, and (3) identify sequences that possess microsatellites conforming to user specified parameters. Each of the three separate analysis modules also can be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc.). All modules are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, Windows). The program suite relies on a compiled Python extension module to perform paired-end alignments. Instructions for compiling the extension from source code are provided in the documentation. Users who do not have Python installed on their computers or who do not have the ability to compile software also may choose to download packaged executable files. These files include all Python scripts, a copy of the compiled extension module, and a minimal installation of Python in a single binary executable. See program documentation for more information.
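The core task of module 3, finding tandem repeats that satisfy user parameters, can be sketched with a single backreference regular expression. This is an illustration, not SSR_pipeline's actual implementation, and the thresholds and sequence below are invented.

```python
import re

def find_ssrs(seq, motif_min=2, motif_max=6, min_repeats=5):
    """Report perfect tandem repeats (SSRs) with a 2-6 bp unit occurring
    at least min_repeats times in a row."""
    pattern = re.compile(
        r"(([ACGT]{%d,%d}?)\2{%d,})" % (motif_min, motif_max, min_repeats - 1))
    for m in pattern.finditer(seq.upper()):
        # group(2) is the repeat unit; group(1) is the whole repeated run
        yield m.start(), m.group(2), len(m.group(1)) // len(m.group(2))

seq = "ttgcACACACACACACggtagcataTAGATAGATAGATAGATAGAcg"
for start, motif, n in find_ssrs(seq):
    print(f"pos {start}: ({motif})x{n}")
```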
NASA Astrophysics Data System (ADS)
Castro, María Eugenia; Díaz, Javier; Muñoz-Caro, Camelia; Niño, Alfonso
2011-09-01
We present a system of classes, SHMatrix, to deal in a unified way with the computation of eigenvalues and eigenvectors of real symmetric and Hermitian matrices. Two descendant classes, one for the real symmetric and the other for the Hermitian case, override the abstract methods defined in a base class. The use of inheritance and polymorphism allows handling objects of any descendant class through a single reference of the base class. The system of classes is intended to be the core element of more sophisticated methods dealing with large eigenvalue problems, such as those arising in the variational treatment of realistic quantum mechanical problems. The present system of classes allows computing a subset of all the possible eigenvalues and, optionally, the corresponding eigenvectors. Comparison with well-established solutions for analogous eigenvalue problems, such as those included in LAPACK, shows that the present solution is competitive against them.
Program summary
Program title: SHMatrix
Catalogue identifier: AEHZ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHZ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 2616
No. of bytes in distributed program, including test data, etc.: 127 312
Distribution format: tar.gz
Programming language: Standard ANSI C++
Computer: PCs and workstations
Operating system: Linux, Windows
Classification: 4.8
Nature of problem: The treatment of problems involving eigensystems is a central topic in quantum mechanics. Here, the use of the variational approach leads to the computation of eigenvalues and eigenvectors of real symmetric and Hermitian Hamiltonian matrices. Realistic models with several degrees of freedom lead to large (sometimes very large) matrices. Different techniques, such as divide and conquer, can be used to factorize the matrices in order to apply a parallel computing approach. However, it is still interesting to have a core procedure able to tackle the computation of eigenvalues and eigenvectors once the matrix has been factorized into pieces of small enough size. Several available software packages, such as LAPACK, tackle this problem under the traditional imperative programming paradigm. In order to ease the modelling of complex quantum mechanical systems, it is interesting to apply an object-oriented approach to the treatment of the eigenproblem. This approach offers the advantage of a single, uniform treatment for the real symmetric and Hermitian cases.
Solution method: To reach the above goals, we have developed a system of classes: SHMatrix. SHMatrix is composed of an abstract base class and two descendant classes, one for real symmetric matrices and the other for the Hermitian case. The object-oriented characteristics of inheritance and polymorphism allow handling both cases using a single reference of the base class. The basic computing strategy applied in SHMatrix allows computing subsets of eigenvalues and (optionally) eigenvectors. The tests performed show that SHMatrix is competitive, and more efficient for large matrices, than the equivalent routines of the LAPACK package.
Running time: The examples included in the distribution take only a couple of seconds to run.
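The subset-eigenpair service SHMatrix provides for both matrix types has a close analogue in SciPy, where one call handles real symmetric and complex Hermitian input alike, mirroring the single base-class reference idea. A sketch with random test matrices (requires a recent SciPy for subset_by_index):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(7)
n, k = 500, 10

a = rng.normal(size=(n, n))
sym = (a + a.T) / 2                                   # real symmetric test matrix
c = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
herm = (c + c.conj().T) / 2                           # complex Hermitian test matrix

# One code path serves both cases; only the k lowest eigenpairs are computed
for m in (sym, herm):
    vals, vecs = eigh(m, subset_by_index=[0, k - 1])
    print(vals[:3], np.allclose(m @ vecs[:, 0], vals[0] * vecs[:, 0]))
```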
X based interactive computer graphics applications for aerodynamic design and education
NASA Technical Reports Server (NTRS)
Benson, Thomas J.; Higgs, C. Fred, III
1995-01-01
Six computer applications packages have been developed to solve a variety of aerodynamic problems in an interactive environment on a single workstation. The packages perform classical one-dimensional analysis under the control of a graphical user interface and can be used for preliminary design or educational purposes. The programs were originally developed on a Silicon Graphics workstation and used the GL version of the FORMS library as the graphical user interface. These programs have recently been converted to the XFORMS library of X-based graphics widgets and have been tested on SGI, IBM, Sun, HP, and PC-Linux computers. The paper will show results from the new VU-DUCT program as a prime example. VU-DUCT has been developed as an educational package for the study of subsonic open- and closed-loop wind tunnels.
NASA Technical Reports Server (NTRS)
1972-01-01
Current research is reported on precise and accurate descriptions of the earth's surface and gravitational field and on time variations of geophysical parameters. A new computer program was written in connection with the adjustment of the BC-4 worldwide geometric satellite triangulation net. The possibility that an increment to accuracy could be transferred from a super-control net to the basic geodetic (first-order triangulation) was investigated. Coordinates of the NA9 solution were computed and were transformed to the NAD datum, based on GEOS 1 observations. Normal equations from observational data of several different systems and constraint equations were added and a single solution was obtained for the combined systems. Transformation parameters with constraints were determined, and the impact of computers on surveying and mapping is discussed.
Decoding the Regulatory Network for Blood Development from Single-Cell Gene Expression Measurements
Haghverdi, Laleh; Lilly, Andrew J.; Tanaka, Yosuke; Wilkinson, Adam C.; Buettner, Florian; Macaulay, Iain C.; Jawaid, Wajid; Diamanti, Evangelia; Nishikawa, Shin-Ichi; Piterman, Nir; Kouskoff, Valerie; Theis, Fabian J.; Fisher, Jasmin; Göttgens, Berthold
2015-01-01
Here we report the use of diffusion maps and network synthesis from state transition graphs to better understand developmental pathways from single-cell gene expression profiling. We map the progression of mesoderm towards blood in the mouse by single-cell expression analysis of 3,934 cells, capturing cells with blood-forming potential at four sequential developmental stages. By adapting the diffusion map methodology for dimensionality reduction to single-cell data, we reconstruct the developmental journey to blood at single-cell resolution. Using transitions between individual cellular states as input, we develop a single-cell network synthesis toolkit to generate a computationally executable transcriptional regulatory network model that recapitulates blood development. Model predictions were validated by showing that Sox7 inhibits primitive erythropoiesis, and that Sox and Hox factors control early expression of Erg. We therefore demonstrate that single-cell analysis of a developing organ coupled with computational approaches can reveal the transcriptional programs that control organogenesis. PMID:25664528
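The generic diffusion map construction referred to, a Gaussian kernel, Markov normalization, and the leading nontrivial eigenvectors as coordinates, can be sketched briefly. The paper's adaptation to single-cell data adds more than this, and the data below are invented.

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_comp=2):
    """Basic diffusion map: Gaussian kernel on pairwise distances, row
    normalization to a transition matrix, and the top nontrivial
    eigenvectors as low-dimensional coordinates."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / eps)
    P = K / K.sum(axis=1, keepdims=True)       # row-stochastic transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    return vecs.real[:, order[1:n_comp + 1]]   # skip the trivial constant vector

# Two invented "cell populations" in a 10-dimensional expression space
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, (50, 10)) for m in (0, 2)])
coords = diffusion_map(X, eps=4.0)
print(coords.shape)   # (100, 2)
```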
Demonstration of Multi- and Single-Reader Sample Size Program for Diagnostic Studies software.
Hillis, Stephen L; Schartz, Kevin M
2015-02-01
The recently released software Multi- and Single-Reader Sample Size Program for Diagnostic Studies, written by Kevin Schartz and Stephen Hillis, performs sample size computations for diagnostic reader-performance studies. The program computes the sample size needed to detect a specified difference in a reader performance measure between two modalities, when using the analysis methods initially proposed by Dorfman, Berbaum, and Metz (DBM) and Obuchowski and Rockette (OR), and later unified and improved by Hillis and colleagues. A commonly used reader performance measure is the area under the receiver-operating-characteristic curve. The program can be used with typical reader-performance measures, which can be estimated parametrically or nonparametrically. The program has an intuitive, easy-to-use step-by-step interface that walks the user through the entry of the needed information. Features of the software include the following: (1) choice of several study designs; (2) choice of inputs obtained from either OR or DBM analyses; (3) choice of three different inference situations: both readers and cases random, readers fixed and cases random, and readers random and cases fixed; (4) choice of two types of hypotheses: equivalence or noninferiority; (5) choice of two output formats: power for specified case and reader sample sizes, or a listing of case-reader combinations that provide a specified power; (6) choice of single- or multi-reader analyses; and (7) functionality in Windows, Mac OS, and Linux.
Grammar Review: Your Tool for Success. Teacher Materials.
ERIC Educational Resources Information Center
Pittsburgh Univ., Johnstown, PA. Education Div.
Teacher materials are provided for a computer-assisted English grammar curriculum for adult basic education students (1-8 grade level). They accompany a software program (diskette) that the student is able to use by himself/herself with the Apple IIc or Apple IIe computer with single or double drive and a monitor or a television with an R.F.…
NASA Technical Reports Server (NTRS)
Hall, William A.
1990-01-01
Slave microprocessors in a multimicroprocessor computing system contain modified circuit cards programmed via a bus connecting the master processor with the slave microprocessors. Enables interactive, microprocessor-based, single-loop control. Confers ability to load and run programs from the master/slave bus, without need for a microprocessor development station. Tristate buffers latch all data and information on status. Slave central processing unit never connected directly to bus.
WhAEM2000 is a computer program that solves steady state ground-water flow and advective streamlines in homogeneous, single layer aquifers. The program was designed for capture zone delineation in support of protection of the source water area surrounding a public water supply well...
Accelerated Adaptive MGS Phase Retrieval
NASA Technical Reports Server (NTRS)
Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang
2011-01-01
The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation of MGS significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm that allows parallel processing of certain applications that apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited for this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time, performing the matrix calculations on NVIDIA graphics cards. The graphics processing unit (GPU) is hardware that is specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model called CUDA is used to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of the NVIDIA GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these advanced technologies to accelerate the optical phase error characterization. With a single PC that contains four NVIDIA GTX-280 graphics cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.
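As a rough illustration of the GPU offload described above, the host-side C++ sketch below pushes a 2-D FFT, the core operation of MGS-style phase retrieval, through NVIDIA's cuFFT library. It is not the AAMGS source; the grid size and the uniform pupil are placeholder assumptions.

    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>
    #include <cufft.h>

    // Host-side sketch only: one forward 2-D FFT on the GPU via cuFFT.
    int main() {
        const int nx = 512, ny = 512;
        std::vector<cufftComplex> pupil(nx * ny, cufftComplex{1.0f, 0.0f});

        cufftComplex* d_data = nullptr;
        cudaMalloc(reinterpret_cast<void**>(&d_data), sizeof(cufftComplex) * nx * ny);
        cudaMemcpy(d_data, pupil.data(), sizeof(cufftComplex) * nx * ny,
                   cudaMemcpyHostToDevice);

        cufftHandle plan;
        cufftPlan2d(&plan, nx, ny, CUFFT_C2C);               // 2-D complex transform
        cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);   // pupil -> focal plane

        cudaMemcpy(pupil.data(), d_data, sizeof(cufftComplex) * nx * ny,
                   cudaMemcpyDeviceToHost);
        std::printf("DC term magnitude: %g\n", (double)pupil[0].x);  // = nx*ny here

        cufftDestroy(plan);
        cudaFree(d_data);
    }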
Toy Control Program evaluation.
Stewart, H A; Ormond, C; Seeger, B R
1991-08-01
The Toy Control Program for the Apple IIe microcomputer is a software and hardware package developed for the training of single-switch scanning skills. The specially designed scanning programs provide on screen visual feedback and activate a battery-powered toy to reinforce performance. This study examined whether the training of preschool subjects in single-switch scanning skills with the Toy Control Program would result in increased task completion scores and increased levels of attention to task, as compared with conditions of toy activation only and microcomputer programs with screen reinforcement only. The results showed that the subjects paid significantly more attention to the toys as reinforcers (p less than .01). No significant difference was found for the performance results of the three conditions. These findings support the use of a program like the Toy Control Program, which integrates the instructional capabilities of a computer with the reinforcement potential of a toy and the creativity of a therapist.
Macintosh/LabVIEW based control and data acquisition system for a single photon counting fluorometer
NASA Astrophysics Data System (ADS)
Stryjewski, Wieslaw J.
1991-08-01
A flexible software system has been developed for controlling fluorescence decay measurements using the virtual instrument approach offered by LabVIEW. The time-correlated single photon counting instrument operates under computer control in both manual and automatic mode. Implementation time was short and the equipment is now easier to use, reducing the training time required for new investigators. It is not difficult to customize the front panel or adapt the program to a different instrument. We found LabVIEW much more convenient to use for this application than traditional, textual computer languages.
Spur, helical, and spiral bevel transmission life modeling
NASA Technical Reports Server (NTRS)
Savage, Michael; Rubadeux, Kelly L.; Coe, Harold H.; Coy, John J.
1994-01-01
A computer program, TLIFE, which estimates the life, dynamic capacity, and reliability of aircraft transmissions, is presented. The program enables comparisons of transmission service life at the design stage for optimization. A variety of transmissions may be analyzed, including spur, helical, and spiral bevel reductions as well as series combinations of these reductions. The basic spur and helical reductions include single mesh, compound, and parallel path, plus reverted star and planetary gear trains. A variety of straddle and overhung bearing configurations on the gear shafts are possible, as is the use of a ring gear for the output. The spiral bevel reductions include single and dual input drives with arbitrary shaft angles. The program is written in FORTRAN 77 and has been executed both in the personal computer DOS environment and on UNIX workstations. The analysis may be performed in either the SI metric or the English inch system of units. The reliability and life analysis is based on the two-parameter Weibull distribution lives of the component gears and bearings. The program output file describes the overall transmission and each constituent transmission, its components, and their locations, capacities, and loads. Primary output is the dynamic capacity and 90-percent reliability and mean lives of the unit transmissions and the overall system, which can be used to estimate service overhaul frequency requirements. Two examples are presented to illustrate the information available for single element and series transmissions.
Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Jin, Haoqiang; anMey, Dieter; Hatay, Ferhat F.
2003-01-01
With the advent of parallel hardware and software technologies users are faced with the challenge to choose a programming paradigm best suited for the underlying computer architecture. With the current trend in parallel computer architectures towards clusters of shared memory symmetric multi-processors (SMP), parallel programming techniques have evolved to support parallelism beyond a single level. Which programming paradigm is the best will depend on the nature of the given problem, the hardware architecture, and the available software. In this study we will compare different programming paradigms for the parallelization of a selected benchmark application on a cluster of SMP nodes. We compare the timings of different implementations of the same CFD benchmark application employing the same numerical algorithm on a cluster of Sun Fire SMP nodes. The rest of the paper is structured as follows: In section 2 we briefly discuss the programming models under consideration. We describe our compute platform in section 3. The different implementations of our benchmark code are described in section 4 and the performance results are presented in section 5. We conclude our study in section 6.
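A minimal hybrid sketch in C++ illustrates the two-level pattern the study compares: MPI ranks split the iteration space across SMP nodes while an OpenMP loop exploits the shared memory within each node. The loop body and constants are stand-ins, not the benchmark's CFD kernel.

    #include <cstdio>
    #include <mpi.h>
    #include <omp.h>

    // Hybrid sketch: MPI across nodes, OpenMP threads within each SMP node.
    int main(int argc, char** argv) {
        int provided, rank, size;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const long n = 100000000;            // global iteration count (arbitrary)
        const long chunk = n / size;         // contiguous block per MPI process
        double local = 0.0;
        #pragma omp parallel for reduction(+ : local)
        for (long i = rank * chunk; i < (rank + 1) * chunk; ++i)
            local += 1.0 / (1.0 + (double)i);   // stand-in for the real work

        double global = 0.0;
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) std::printf("sum = %f\n", global);
        MPI_Finalize();
    }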
Advanced Simulation and Computing Fiscal Year 14 Implementation Plan, Rev. 0.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meisner, Robert; McCoy, Michel; Archer, Bill
2013-09-11
The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Moreover, ASC’s business model is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive capability in the simulation tools.
von Arnim, Albrecht G; Missra, Anamika
2017-01-01
Leading voices in the biological sciences have called for a transformation in graduate education leading to the PhD degree. One area commonly singled out for growth and innovation is cross-training in computational science. In 1998, the University of Tennessee (UT) founded an intercollegiate graduate program called the UT-ORNL Graduate School of Genome Science and Technology in partnership with the nearby Oak Ridge National Laboratory. Here, we report outcome data that attest to the program's effectiveness in graduating computationally enabled biologists for diverse careers. Among 77 PhD graduates since 2003, the majority came with traditional degrees in the biological sciences, yet two-thirds moved into computational or hybrid (computational-experimental) positions. We describe the curriculum of the program and how it has changed. We also summarize how the program seeks to establish cohesion between computational and experimental biologists. This type of program can respond flexibly and dynamically to unmet training needs. In conclusion, this study from a flagship, state-supported university may serve as a reference point for creating a stable, degree-granting, interdepartmental graduate program in computational biology and allied areas. © 2017 A. G. von Arnim and A. Missra. CBE—Life Sciences Education © 2017 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
NBS computerized carpool matching system: users' guide. Final technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilsinn, J.F.; Landau, S.
1974-12-01
The report includes flowcharts, input/output formats, and program listings for the programs, plus details of the manual process for coordinate coding. The matching program produces, for each person desiring it, a list of others residing within a pre-specified distance of him, and is thus applicable to a single work destination having primarily one work schedule. The system is currently operational on the National Bureau of Standards' UNIVAC 1108 computer and was run in March of 1974, producing lists for about 950 employees in less than four minutes computer time. Subsequent maintenance of the system will be carried out by the NBS Management and Organization Division. (GRA)
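The matching step described above amounts to a radius query around each participant's coordinates. A toy C++ sketch of that single-destination matching follows; the names, coordinates, and radius are invented, and the original of course ran as a batch program on the UNIVAC 1108.

    #include <cmath>
    #include <iostream>
    #include <string>
    #include <vector>

    // Toy single-destination carpool matching: list everyone within a radius.
    struct Person { std::string name; double x, y; };  // coded home coordinates

    int main() {
        std::vector<Person> staff = {{"Avery", 0.0, 0.0}, {"Blake", 1.2, 0.9},
                                     {"Casey", 9.0, 7.5}};
        const double radius = 2.0;   // pre-specified matching distance
        for (const auto& p : staff) {
            std::cout << p.name << ':';
            for (const auto& q : staff)
                if (&p != &q && std::hypot(p.x - q.x, p.y - q.y) <= radius)
                    std::cout << ' ' << q.name;
            std::cout << '\n';
        }
    }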
Computing Operating Characteristics Of Bearing/Shaft Systems
NASA Technical Reports Server (NTRS)
Moore, James D.
1996-01-01
SHABERTH computer program predicts operating characteristics of bearings in multibearing load-support system. Lubricated and nonlubricated bearings modeled. Calculates loads, torques, temperatures, and fatigue lives of ball and/or roller bearings on single shaft. Provides for analysis of reaction of system to termination of supply of lubricant to bearings and other lubricated mechanical elements. Valuable in design and analysis of shaft/bearing systems. Two versions of SHABERTH available. Cray version (LEW-14860), "Computing Thermal Performances Of Shafts and Bearings". IBM PC version (MFS-28818), written for IBM PC-series and compatible computers running MS-DOS.
Developing software to use parallel processing effectively. Final report, June-December 1987
DOE Office of Scientific and Technical Information (OSTI.GOV)
Center, J.
1988-10-01
This report describes the difficulties involved in writing efficient parallel programs and describes the hardware and software support currently available for generating software that utilizes parallel processing effectively. Historically, the processing rate of single-processor computers has increased by one order of magnitude every five years. However, this pace is slowing since electronic circuitry is coming up against physical barriers. Unfortunately, the complexity of engineering and research problems continues to require ever more processing power (far in excess of the maximum estimated 3 Gflops achievable by single-processor computers). For this reason, parallel-processing architectures are receiving considerable interest, since they offer high performance more cheaply than a single-processor supercomputer, such as the Cray.
Secure entanglement distillation for double-server blind quantum computation.
Morimae, Tomoyuki; Fujii, Keisuke
2013-07-12
Blind quantum computation is a new secure quantum computing protocol where a client, who does not have enough quantum technologies at her disposal, can delegate her quantum computation to a server, who has a fully fledged quantum computer, in such a way that the server cannot learn anything about the client's input, output, and program. If the client interacts with only a single server, the client has to have some minimum quantum power, such as the ability to emit randomly rotated single-qubit states or the ability to measure states. If the client interacts with two servers who share Bell pairs but cannot communicate with each other, the client can be completely classical. For such a double-server scheme, the two servers have to share clean Bell pairs, and therefore entanglement distillation is necessary in a realistic noisy environment. In this Letter, we show that it is possible to perform entanglement distillation in the double-server scheme without degrading the security of blind quantum computing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasenkamp, Daren; Sim, Alexander; Wehner, Michael
Extensive computing power has been used to tackle issues such as climate change, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many of the data analysis programs currently run on only a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, while we provide the coordination and data transfer services to ensure the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of the cloud computing systems and has revealed a number of weaknesses in the current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup that is comparable to running the same analysis task using MPI. However, compared to MPI-based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure: as long as a single VM is running it can make progress, whereas the whole MPI analysis job fails as soon as one node fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.
CUDAEASY - a GPU accelerated cosmological lattice program
NASA Astrophysics Data System (ADS)
Sainio, J.
2010-05-01
This paper presents, to the author's knowledge, the first graphics processing unit (GPU) accelerated program that solves the evolution of interacting scalar fields in an expanding universe. We present the implementation in NVIDIA's Compute Unified Device Architecture (CUDA) and compare the performance to other similar programs in chaotic inflation models. We report speedups between one and two orders of magnitude depending on the hardware and software used, while achieving small errors in single precision. Simulations that used to take roughly one day to compute can now be done in hours, and this difference is expected to increase in the future. The program has been written in the spirit of LATTICEEASY, and users of the aforementioned program should find it relatively easy to start using CUDAEASY in lattice simulations. The program is available at http://www.physics.utu.fi/theory/particlecosmology/cudaeasy/ under the GNU General Public License.
Multispan Elevated Guideway Design for Passenger Transport Vehicles : Volume 1. Text.
DOT National Transportation Integrated Search
1975-04-01
Analysis techniques, a design procedure and design data are described for passenger vehicle, simply supported, single span and multiple span elevated guideway structures. Analyses and computer programs are developed to determine guideway deflections,...
The performance of low-cost commercial cloud computing as an alternative in computational chemistry.
Thackston, Russell; Fortenberry, Ryan C
2015-05-05
The growth of commercial cloud computing (CCC) as a viable means of computational infrastructure is largely unexplored for the purposes of quantum chemistry. In this work, the PSI4 suite of computational chemistry programs is installed on five different types of Amazon Web Services CCC platforms. The performance for a set of electronically excited state single-point energies is compared between these CCC platforms and typical, "in-house" physical machines. Further considerations are made for the number of cores or virtual CPUs (vCPUs, for the CCC platforms), but no considerations are made for full parallelization of the program (even though parallelization of the BLAS library is implemented), complete high-performance computing cluster utilization, or steal time. Even with this most pessimistic view of the computations, CCC resources are shown to be more cost effective for significant numbers of typical quantum chemistry computations. Large numbers of large computations are still best utilized by more traditional means, but smaller-scale research may be more effectively undertaken through CCC services. © 2015 Wiley Periodicals, Inc.
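The underlying cost comparison is simple arithmetic once rates are fixed. A back-of-the-envelope C++ sketch with invented numbers (the paper's measured rates and timings differ) shows the shape of the calculation; the balance shifts with instance pricing and workstation utilization.

    #include <cstdio>

    // Invented numbers: amortized in-house cost versus on-demand cloud cost.
    int main() {
        const double jobs = 500.0;           // computations in the campaign
        const double hoursPerJob = 2.0;      // wall time per computation
        const double cloudRate = 0.10;       // assumed $ per instance-hour
        const double machineCost = 6000.0;   // workstation purchase price
        const double lifetimeHours = 3.0 * 365 * 24 * 0.6;  // 3 years at 60% duty
        const double cloud = jobs * hoursPerJob * cloudRate;
        const double inHouse = jobs * hoursPerJob * (machineCost / lifetimeHours);
        std::printf("cloud: $%.0f   in-house: $%.0f\n", cloud, inHouse);
    }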
NASA Technical Reports Server (NTRS)
Hajela, P.; Chen, J. L.
1986-01-01
The present paper describes an approach for the optimum sizing of single and joined wing structures that is based on representing the built-up finite element model of the structure by an equivalent beam model. The low order beam model is computationally more efficient in an environment that requires repetitive analysis of several trial designs. The design procedure is implemented in a computer program that requires geometry and loading data typically available from an aerodynamic synthesis program, to create the finite element model of the lifting surface and an equivalent beam model. A fully stressed design procedure is used to obtain rapid estimates of the optimum structural weight for the beam model for a given geometry, and a qualitative description of the material distribution over the wing structure. The synthesis procedure is demonstrated for representative single wing and joined wing structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey
Previous approaches for scheduling a league with round-robin and divisional tournaments involved decomposing the problem into easier subproblems. This approach, used to schedule the top Swedish handball league Elitserien, reduces the problem complexity but can result in suboptimal schedules. This paper presents an integrated constraint programming model that performs the scheduling in a single step. Particular attention is given to identifying implied and symmetry-breaking constraints that reduce the computational complexity significantly. The experimental evaluation of the integrated approach takes considerably less computational effort than the previous approach.
Laboratory for Computer Science Progress Report 19, 1 July 1981-30 June 1982.
1984-05-01
Contents include: Multiprocessor Architectures; TRIX Operating System; VLSI Tools; Systematic Program Development. The research explores distributed operating systems and the architecture of powerful single-user computers that are interconnected by communication networks. In particular, we expect to experiment with languages, operating systems, and applications that establish the feasibility of distributed ...
Using the Parallel Computing Toolbox with MATLAB on the Peregrine System
    ...parallel pool took %g seconds.\n', toc)

    % "single program multiple data"
    spmd
        fprintf('Worker %d says Hello World!\n', labindex)
    end

    delete(gcp);  % close the parallel pool
    exit

To run the script on a compute node, create the file helloWorld.sub:

    #!/bin/bash
    #PBS -l walltime=05:00
    #PBS -l nodes=1
    #PBS -N ...
POLO2: a user's guide to multiple Probit Or LOgit analysis
Robert M. Russell; N. E. Savin; Jacqueline L. Robertson
1981-01-01
This guide provides instructions for the use of POLO2, a computer program for multivariate probit or logit analysis of quantal response data. As many as 3000 test subjects may be included in a single analysis. Including the constant term, up to nine explanatory variables may be used. Examples illustrating input, output, and uses of the program's special features...
Urban Crowns: crown analysis software to assist in quantifying urban tree benefits
Matthew F. Winn; Sang-Mook Lee Bradley; Philip A. Araman
2010-01-01
UrbanCrowns is a Microsoft® Windows®-based computer program developed by the U.S. Forest Service Southern Research Station. The software assists urban forestry professionals, arborists, and community volunteers in assessing and monitoring the crown characteristics of urban trees (both deciduous and coniferous) using a single side-view digital photograph. Program output...
Topics in the optimization of millimeter-wave mixers
NASA Technical Reports Server (NTRS)
Siegel, P. H.; Kerr, A. R.; Hwang, W.
1984-01-01
A user oriented computer program for the analysis of single-ended Schottky diode mixers is described. The program is used to compute the performance of a 140 to 220 GHz mixer and excellent agreement with measurements at 150 and 180 GHz is obtained. A sensitivity analysis indicates the importance of various diode and mount characteristics on the mixer performance. A computer program for the analysis of varactor diode multipliers is described. The diode operates in either the reverse biased varactor mode or with substantial forward current flow where the conversion mechanism is predominantly resistive. A description and analysis of a new H-plane rectangular waveguide transformer is reported. The transformer is made quickly and easily in split-block waveguide using a standard slitting saw. It is particularly suited for use in the millimeter-wave band, replacing conventional electroformed stepped transformers. A theoretical analysis of the transformer is given and good agreement is obtained with measurements made at X-band.
Hukerikar, Saurabh; Teranishi, Keita; Diniz, Pedro C.; ...
2017-02-11
In the presence of accelerated fault rates, which are projected to be the norm on future exascale systems, it will become increasingly difficult for high-performance computing (HPC) applications to accomplish useful computation. Due to the fault-oblivious nature of current HPC programming paradigms and execution environments, HPC applications are insufficiently equipped to deal with errors. We believe that HPC applications should be enabled with capabilities to actively search for and correct errors in their computations. The redundant multithreading (RMT) approach offers lightweight replicated execution streams of program instructions within the context of a single application process. However, the use of complete redundancy incurs significant overhead to the application performance.
A user-oriented and computerized model for estimating vehicle ride quality
NASA Technical Reports Server (NTRS)
Leatherwood, J. D.; Barker, L. M.
1984-01-01
A simplified empirical model and computer program for estimating passenger ride comfort within air and surface transportation systems are described. The model is based on subjective ratings from more than 3000 persons who were exposed to controlled combinations of noise and vibration in the passenger ride quality apparatus. This model has the capability of transforming individual elements of a vehicle's noise and vibration environment into subjective discomfort units and then combining the subjective units to produce a single discomfort index typifying passenger acceptance of the environment. The computational procedures required to obtain discomfort estimates are discussed, and a user oriented ride comfort computer program is described. Examples illustrating application of the simplified model to helicopter and automobile ride environments are presented.
Concentrator optical characterization using computer mathematical modelling and point source testing
NASA Technical Reports Server (NTRS)
Dennison, E. W.; John, S. L.; Trentelman, G. F.
1984-01-01
The optical characteristics of a paraboloidal solar concentrator are analyzed using the intercept factor curve (a format for image data) to describe the results of a mathematical model and to represent reduced data from experimental testing. This procedure makes it possible not only to test an assembled concentrator, but also to evaluate single optical panels or to conduct non-solar tests of an assembled concentrator. The use of three-dimensional ray tracing computer programs to calculate the mathematical model is described. These ray tracing programs can include any type of optical configuration from simple paraboloids to array of spherical facets and can be adapted to microcomputers or larger computers, which can graphically display real-time comparison of calculated and measured data.
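A three-dimensional ray trace of the kind these programs perform can be sketched compactly. The stand-alone C++ snippet below (an illustration, not the authors' code) reflects axial rays off an ideal paraboloid z = (x^2 + y^2)/(4f) and confirms that each reflected ray crosses the optical axis at the focus.

    #include <cmath>
    #include <cstdio>

    struct Vec { double x, y, z; };
    static Vec sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec scale(Vec a, double s) { return {a.x * s, a.y * s, a.z * s}; }

    int main() {
        const double f = 1.5;                          // focal length
        for (int k = 1; k <= 5; ++k) {
            const double r = 0.2 * k;                  // radial hit position
            Vec hit = {r, 0.0, r * r / (4.0 * f)};     // point on the paraboloid
            Vec n = {-hit.x / (2.0 * f), -hit.y / (2.0 * f), 1.0};
            n = scale(n, 1.0 / std::sqrt(dot(n, n)));  // unit surface normal
            Vec d = {0.0, 0.0, -1.0};                  // incoming axial ray
            Vec refl = sub(d, scale(n, 2.0 * dot(d, n)));
            const double t = -hit.x / refl.x;          // step to the optical axis
            std::printf("r=%.1f -> axis crossing z=%.6f (focus z=%.6f)\n",
                        r, hit.z + t * refl.z, f);
        }
    }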
ENGINEL: A single rotor turbojet engine cycle match performance program
NASA Technical Reports Server (NTRS)
Lovell, W. A.
1977-01-01
ENGINEL is a computer program which was developed to generate the design and off-design performance of a single rotor turbojet engine, with or without afterburning, using a cycle match procedure. It is capable of producing engine performance over a wide range of altitudes and Mach numbers. The flexibility of operating with a variable geometry turbine, for improved off-design fuel consumption, or with a fixed geometry turbine, as in conventional turbojets, has been incorporated. In addition, the option of generating engine performance with JP4, liquid hydrogen or methane as fuel is provided.
NASA Astrophysics Data System (ADS)
Mohlman, H. T.
1983-04-01
The Air Force community noise prediction model (NOISEMAP) is used to describe the aircraft noise exposure around airbases and thereby aid airbase planners to minimize exposure and prevent community encroachment which could limit mission effectiveness of the installation. This report documents two computer programs (OMEGA 10 and OMEGA 11) which were developed to prepare aircraft flight and ground runup noise data for input to NOISEMAP. OMEGA 10 is for flight operations and OMEGA 11 is for aircraft ground runups. All routines in each program are documented at a level useful to a programmer working with the code or a reader interested in a general overview of what happens within a specific subroutine. Both programs input normalized, reference aircraft noise data; i.e., data at a standard reference distance from the aircraft, for several fixed engine power settings, a reference airspeed and standard day meteorological conditions. Both programs operate on these normalized, reference data in accordance with user-defined, non-reference conditions to derive single-event noise data for 22 distances (200 to 25,000 feet) in a variety of physical and psycho-acoustic metrics. These outputs are in formats ready for input to NOISEMAP.
Turbine endwall single cylinder program
NASA Technical Reports Server (NTRS)
Langston, L. S.
1982-01-01
Detailed measurement of the flow field in front of a large-scale single cylinder mounted in a wind tunnel is discussed. A better understanding is sought of the three dimensional separation occurring in front of the cylinder on the endwall, and of the vortex system that is formed. A data base with which to check analytical and numerical computer models of three dimensional flows is also anticipated.
Multi-dimensional computer simulation of MHD combustor hydrodynamics
NASA Astrophysics Data System (ADS)
Berry, G. F.; Chang, S. L.; Lottes, S. A.; Rimkus, W. A.
1991-04-01
Argonne National Laboratory is investigating the nonreacting jet gas mixing patterns in an MHD second stage combustor by using a 2-D multiphase hydrodynamics computer program and a 3-D single phase hydrodynamics computer program. The computer simulations are intended to enhance the understanding of flow and mixing patterns in the combustor, which in turn may lead to improvement of the downstream MHD channel performance. A 2-D steady state computer model, based on mass and momentum conservation laws for multiple gas species, is used to simulate the hydrodynamics of the combustor in which a jet of oxidizer is injected into an unconfined cross stream gas flow. A 3-D code is used to examine the effects of the side walls and the distributed jet flows on the non-reacting jet gas mixing patterns. The code solves the conservation equations of mass, momentum, and energy, and a transport equation of a turbulence parameter and allows permeable surfaces to be specified for any computational cell.
Performance and economics of residential solar space heating
NASA Astrophysics Data System (ADS)
Zehr, F. J.; Vineyard, T. A.; Barnes, R. W.; Oneal, D. L.
1982-11-01
The performance and economics of residential solar space heating were studied for various locations in the contiguous United States. Common types of active and passive solar heating systems were analyzed with respect to an average-size, single-family house designed to meet or exceed the thermal requirements of the Department of Housing and Urban Development Minimum Property Standards (HUD-MPS). The solar systems were evaluated in seventeen cities to provide a broad range of climatic conditions. Active systems evaluated consist of air and liquid flat plate collectors with single and double glazing; passive systems include Trombe wall, water wall, direct gain, and sunspace systems. The active system solar heating performance was computed using the University of Wisconsin's F-CHART computer program. The Los Alamos Scientific Laboratory's Solar Load Ratio (SLR) method was employed to compute solar heating performance for the passive systems. Heating costs were computed with gas, oil, and electricity as backups and as conventional heating system fuels.
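For the active systems, F-CHART's monthly solar fraction for liquid systems is commonly published as f = 1.029Y - 0.065X - 0.245Y^2 + 0.0018X^2 + 0.0215Y^3, where X is the collector loss parameter and Y the absorbed-energy parameter. The C++ sketch below encodes that correlation as usually quoted; verify the coefficients against the F-CHART documentation before relying on them.

    #include <algorithm>
    #include <cstdio>

    // Liquid-system f-chart correlation as commonly published (verify before use).
    double fChartLiquid(double X, double Y) {
        const double f = 1.029 * Y - 0.065 * X - 0.245 * Y * Y
                       + 0.0018 * X * X + 0.0215 * Y * Y * Y;
        return std::min(1.0, std::max(0.0, f));  // clamp to a physical fraction
    }

    int main() {
        // Hypothetical January parameters for a double-glazed liquid system.
        std::printf("monthly solar fraction: %.2f\n", fChartLiquid(3.5, 1.2));
    }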
Probabilistic analysis algorithm for UA slope software program.
DOT National Transportation Integrated Search
2013-12-01
A reliability-based computational algorithm for using a single row of equally spaced drilled shafts to stabilize an unstable slope has been developed in this research. The Monte-Carlo simulation (MCS) technique was used in the previously develop...
[Development of a training program for Japanese dyslexic children and its short-term efficacy].
Wakamiya, Eiji; Takeshita, Takashi; Nakanishi, Makoto; Mizuta, Mekumi; Kurimoto, Naoko; Okumura, Tomohito; Tamai, Hiroshi; Koeda, Tatsuya; Inagaki, Masumi
2013-07-01
The purpose of this study is to develop a computer-based reading training program for Japanese dyslexic children and to examine its short-term efficacy on their reading and writing abilities. Fifteen dyslexic children underwent two sets of training programs: one for single-hiragana and non-word reading, and the other for the reading of real words, in which each hiragana was followed by the correctly read sound. Subjects were required to use a given program for five minutes a day for three weeks, switching to the other program after a three-week interval. Four kinds of reading test and one writing test were done at the beginning and end of each program period. The average reading speeds increased, and the single-hiragana reading error average was lower after the training. Hiragana-writing errors also decreased, even though no writing procedure was involved in the programs. The results indicate the usefulness of these training programs as an early intervention for reading and writing in Japanese dyslexic children.
NASA Technical Reports Server (NTRS)
Elrad, Tzilla (Editor); Filman, Robert E. (Editor); Bader, Atef (Editor)
2001-01-01
Computer science has experienced an evolution in programming languages and systems from the crude assembly and machine codes of the earliest computers through concepts such as formula translation, procedural programming, structured programming, functional programming, logic programming, and programming with abstract data types. Each of these steps in programming technology has advanced our ability to achieve clear separation of concerns at the source code level. Currently, the dominant programming paradigm is object-oriented programming - the idea that one builds a software system by decomposing a problem into objects and then writing the code of those objects. Such objects abstract together behavior and data into a single conceptual and physical entity. Object-orientation is reflected in the entire spectrum of current software development methodologies and tools - we have OO methodologies, analysis and design tools, and OO programming languages. Writing complex applications such as graphical user interfaces, operating systems, and distributed applications while maintaining comprehensible source code has been made possible with OOP. Success at developing simpler systems leads to aspirations for greater complexity. Object orientation is a clever idea, but has certain limitations. We are now seeing that many requirements do not decompose neatly into behavior centered on a single locus. Object technology has difficulty localizing concerns involving global constraints and pandemic behaviors, appropriately segregating concerns, and applying domain-specific knowledge. Post-object programming (POP) mechanisms that look to increase the expressiveness of the OO paradigm are a fertile arena for current research. Examples of POP technologies include domain-specific languages, generative programming, generic programming, constraint languages, reflection and metaprogramming, feature-oriented development, views/viewpoints, and asynchronous message brokering. (Czarnecki and Eisenecker's book includes a good survey of many of these technologies.)
A Model of Human Cognitive Behavior in Writing Code for Computer Programs. Volume 1
1975-05-01
In nearly all programming languages, each line of code actually involves a great many decisions - basic statement types, variable and expression choices, labels, etc. - and any heuristic which evaluates code on the basis of a single decision is not likely to have sufficient power. Only the use of plans ... recalculated in the following line because it was needed again. The second reason is that there are some decisions about the structure of a program ...
The importance of employing computational resources for the automation of drug discovery.
Rosales-Hernández, Martha Cecilia; Correa-Basurto, José
2015-03-01
The application of computational tools to drug discovery helps researchers to design and evaluate new drugs swiftly and with reduced economic resources. To discover new potential drugs, computational chemistry incorporates automation for obtaining biological data such as absorption, distribution, metabolism, excretion and toxicity (ADMET), as well as drug mechanisms of action. This editorial looks at examples of these computational tools, including docking, molecular dynamics simulation, virtual screening, quantum chemistry, quantitative structure-activity relationship, principal component analysis and drug screening workflow systems. The authors then provide their perspectives on the importance of these techniques for drug discovery. Computational tools help researchers to design and discover new drugs for the treatment of several human diseases without side effects, thus allowing for the evaluation of millions of compounds with a reduced cost in both time and economic resources. The problem is that operating each program is difficult; one is required to use several programs and understand each of the properties being tested. In the future, it is possible that a single computer and software program will be capable of evaluating the complete properties (mechanisms of action and ADMET properties) of ligands. It is also possible that after submitting one target, this software will be capable of suggesting potential compounds along with ways to synthesize them, and presenting biological models for testing.
NASA Astrophysics Data System (ADS)
Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.
2003-12-01
Modern laptop computers, and personal computers, can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this doesn't mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based, computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or written for an operating environment, that is unsupported on the local computer, (b) software libraries or utilities required to execute a program are not available on the user's computer, (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer, (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program, and (e) the computational program may require current information that is not available to all client computers. Until recently, distributed, server-based, computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and they executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform- and language-independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC Community, we have implemented several computational and utility programs using a Web Service architecture. We have hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
An evaluation of four single element airfoil analytic methods
NASA Technical Reports Server (NTRS)
Freuler, R. J.; Gregorek, G. M.
1979-01-01
A comparison of four computer codes for the analysis of two-dimensional single element airfoil sections is presented for three classes of section geometries. Two of the computer codes utilize vortex singularities methods to obtain the potential flow solution. The other two codes solve the full inviscid potential flow equation using finite differencing techniques, allowing results to be obtained for transonic flow about an airfoil including weak shocks. Each program incorporates boundary layer routines for computing the boundary layer displacement thickness and boundary layer effects on aerodynamic coefficients. Computational results are given for a symmetrical section represented by an NACA 0012 profile, a conventional section illustrated by an NACA 65A413 profile, and a supercritical type section for general aviation applications typified by a NASA LS(1)-0413 section. The four codes are compared and contrasted in the areas of method of approach, range of applicability, agreement among each other and with experiment, individual advantages and disadvantages, computer run times and memory requirements, and operational idiosyncrasies.
NASA Technical Reports Server (NTRS)
Hall, William A. (Inventor)
1993-01-01
A bus programmable slave module card for use in a computer control system is disclosed which comprises a master computer and one or more slave computer modules interfacing by means of a bus. Each slave module includes its own microprocessor, memory, and control program for acting as a single loop controller. The slave card includes a plurality of memory means (S1, S2...) corresponding to a like plurality of memory devices (C1, C2...) in the master computer, for each slave memory means its own communication lines connectable through the bus with memory communication lines of an associated memory device in the master computer, and a one-way electronic door which is switchable to either a closed condition or a one-way open condition. With the door closed, communication lines between master computer memory (C1, C2...) and slave memory (S1, S2...) are blocked. In the one-way open condition, the memory communication lines of each slave memory means (S1, S2...) connect with the memory communication lines of its associated memory device (C1, C2...) in the master computer, and the memory devices (C1, C2...) of the master computer and slave card are electrically parallel such that information seen by the master's memory is also seen by the slave's memory. The slave card is also connectable to a switch for electronically removing the slave microprocessor from the system. With the master computer and the slave card in programming mode relationship, and the slave microprocessor electronically removed from the system, loading a program into the memory devices (C1, C2...) of the master accomplishes a parallel loading into the memory devices (S1, S2...) of the slave.
Life and dynamic capacity modeling for aircraft transmissions
NASA Technical Reports Server (NTRS)
Savage, Michael
1991-01-01
A computer program to simulate the dynamic capacity and life of parallel shaft aircraft transmissions is presented. Five basic configurations can be analyzed: single mesh, compound, parallel, reverted, and single plane reductions. In execution, the program prompts the user for the data file prefix name, takes input from an ASCII file, and writes its output to a second ASCII file with the same prefix name. The input data file includes the transmission configuration, the input shaft torque and speed, and descriptions of the transmission geometry and the component gears and bearings. The program output file describes the transmission, its components, their capabilities, locations, and loads. It also lists the dynamic capability, ninety percent reliability, and mean life of each component and the transmission as a system. Here, the program, its input and output files, and the theory behind the operation of the program are described.
Software for Acquiring Image Data for PIV
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Cheung, H. M.; Kressler, Brian
2003-01-01
PIV Acquisition (PIVACQ) is a computer program for acquisition of data for particle-image velocimetry (PIV). In the PIV system for which PIVACQ was developed, small particles entrained in a flow are illuminated with a sheet of light from a pulsed laser. The illuminated region is monitored by a charge-coupled-device camera that operates in conjunction with a data-acquisition system that includes a frame grabber and a counter-timer board, both installed in a single computer. The camera operates in "frame-straddle" mode, in which a pair of images can be obtained closely spaced in time (on the order of microseconds). The frame grabber acquires image data from the camera and stores the data in the computer memory. The counter/timer board triggers the camera and synchronizes the pulsing of the laser with acquisition of data from the camera. PIVACQ coordinates all of these functions and provides a graphical user interface, through which the user can control the PIV data-acquisition system. PIVACQ enables the user to acquire a sequence of single-exposure images, display the images, process the images, and then save the images to the computer hard drive. PIVACQ works in conjunction with the PIVPROC program, which processes the images of particles into the velocity field in the illuminated plane.
NASA Technical Reports Server (NTRS)
Lehtinen, B.; Geyser, L. C.
1984-01-01
AESOP is a computer program for use in designing feedback controls and state estimators for linear multivariable systems. AESOP is meant to be used in an interactive manner. Each design task that the program performs is assigned a "function" number. The user accesses these functions either (1) by inputting a list of desired function numbers or (2) by inputting a single function number. In the latter case the choice of the function will in general depend on the results obtained by the previously executed function. The most important of the AESOP functions are those that design linear quadratic regulators and Kalman filters. The user interacts with the program when using these design functions by inputting design weighting parameters and by viewing graphic displays of designed system responses. Supporting functions are provided that obtain system transient and frequency responses, transfer functions, and covariance matrices. The program can also compute open-loop system information such as stability (eigenvalues), eigenvectors, controllability, and observability. The program is written in ANSI-66 FORTRAN for use on an IBM 3033 using TSS 370. Descriptions of all subroutines and results of two test cases are included in the appendixes.
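The flavor of the regulator-design functions can be suggested with a scalar, discrete-time analogue; this is a sketch, not AESOP's algorithm, which handles multivariable systems. It iterates the Riccati recursion for a plant x' = a*x + b*u under cost sum(q*x^2 + r*u^2) and then forms the feedback gain, the scalar counterpart of the design weighting step described above.

    #include <cmath>
    #include <cstdio>

    // Scalar discrete-time LQR sketch: Riccati fixed point, then the gain K.
    int main() {
        const double a = 1.1, b = 0.5;   // hypothetical unstable plant
        const double q = 1.0, r = 0.1;   // design weights the user would tune
        double p = q;
        for (int i = 0; i < 1000; ++i)   // iterate the Riccati recursion
            p = q + a * a * p - std::pow(a * b * p, 2) / (r + b * b * p);
        const double K = a * b * p / (r + b * b * p);
        std::printf("P = %.4f, K = %.4f, closed-loop pole a - b*K = %.4f\n",
                    p, K, a - b * K);
    }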
Trelease, R B
1996-01-01
Advances in computer visualization and user interface technologies have enabled development of "virtual reality" programs that allow users to perceive and to interact with objects in artificial three-dimensional environments. Such technologies were used to create an image database and program for studying the human skull, a specimen that has become increasingly expensive and scarce. Stereoscopic image pairs of a museum-quality skull were digitized from multiple views. For each view, the stereo pairs were interlaced into a single, field-sequential stereoscopic picture using an image processing program. The resulting interlaced image files are organized in an interactive multimedia program. At run-time, gray-scale 3-D images are displayed on a large-screen computer monitor and observed through liquid-crystal shutter goggles. Users can then control the program and change views with a mouse and cursor to point-and-click on screen-level control words ("buttons"). For each view of the skull, an ID control button can be used to overlay pointers and captions for important structures. Pointing and clicking on "hidden buttons" overlying certain structures triggers digitized audio spoken word descriptions or mini lectures.
Comparative analysis of economic models in selected solar energy computer programs
NASA Astrophysics Data System (ADS)
Powell, J. W.; Barnes, K. A.
1982-01-01
The economic evaluation models in five computer programs widely used for analyzing solar energy systems (F-CHART 3.0, F-CHART 4.0, SOLCOST, BLAST, and DOE-2) are compared. Differences in analysis techniques and assumptions among the programs are assessed from the point of view of consistency with the Federal requirements for life cycle costing (10 CFR Part 436), effect on predicted economic performance and optimal system size, ease of use, and general applicability to diverse system types and building types. The FEDSOL program, developed by the National Bureau of Standards specifically to meet the Federal life cycle cost requirements, serves as a basis for the comparison. Results of the study are illustrated in test cases of two different types of Federally owned buildings: a single-family residence and a low-rise office building.
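At its core, the life-cycle-cost comparison these models perform is a present-value sum of first cost and escalating fuel costs. A generic C++ sketch with invented numbers follows; it is not FEDSOL's exact method, which implements 10 CFR Part 436 in detail.

    #include <cmath>
    #include <cstdio>

    // Present value of first cost plus N years of escalating fuel purchases.
    double lifeCycleCost(double firstCost, double annualFuel,
                         double escalation, double discount, int years) {
        double pv = firstCost;
        for (int t = 1; t <= years; ++t)
            pv += annualFuel * std::pow(1.0 + escalation, t)
                             / std::pow(1.0 + discount, t);
        return pv;
    }

    int main() {
        // Invented numbers: solar trades a higher first cost for less fuel.
        const double solar = lifeCycleCost(8000.0, 250.0, 0.07, 0.10, 20);
        const double conventional = lifeCycleCost(1500.0, 900.0, 0.07, 0.10, 20);
        std::printf("solar LCC: $%.0f   conventional LCC: $%.0f\n",
                    solar, conventional);
    }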
GASP- General Aviation Synthesis Program. Volume 1: Main program. Part 1: Theoretical development
NASA Technical Reports Server (NTRS)
Hague, D.
1978-01-01
The General Aviation Synthesis Program (GASP) performs tasks generally associated with aircraft preliminary design and allows an analyst the capability of performing parametric studies in a rapid manner. GASP emphasizes small fixed-wing aircraft employing propulsion systems varying from a single piston engine with fixed-pitch propeller through twin turboprop/turbofan-powered business or transport type aircraft. The program, which may be operated from a computer terminal in either the batch or interactive graphic mode, comprises modules representing the various technical disciplines integrated into a computational flow which ensures that the interacting effects of design variables are continuously accounted for in the aircraft sizing procedure. The model is a useful tool for comparing configurations, assessing aircraft performance and economics, performing tradeoff and sensitivity studies, and assessing the impact of advanced technologies on aircraft performance and economics.
Cranswick, Lachlan Michael David
2008-01-01
The history of crystallographic computing and use of crystallographic software is one which traces the escape from the drudgery of manual human calculations to a world where the user delegates most of the travail to electronic computers. In practice, this involves practising crystallographers communicating their thoughts to the crystallographic program authors, in the hope that new procedures will be implemented within their software. Against this background, the development of small-molecule single-crystal and powder diffraction software is traced. Starting with the analogue machines and the use of Hollerith tabulators of the late 1930's, it is shown that computing developments have been science led, with new technologies being harnessed to solve pressing crystallographic problems. The development of software is also traced, with a final caution that few of the computations now performed daily are really understood by the program users. Unless a sufficient body of people continues to dismantle and re-build programs, the knowledge encoded in the old programs will become as inaccessible as the knowledge of how to build the Great Pyramid at Giza.
Design of three-phased SPWM based on AT89C52
NASA Astrophysics Data System (ADS)
Wu, Xiaorui
2018-05-01
Using the AT89C52 and the area-equivalence principle, a three-phase SPWM algorithm based on an 8-bit single-chip microcomputer is obtained. Through computer programming, the three-phase SPWM wave generated by the single-chip microcomputer is applied to the circuit of a static reactive power generator. The result shows that this method is feasible and can reduce the cost of the SVG.
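The abstract does not reproduce the algorithm itself; the following is a minimal Python sketch (for illustration only, not the paper's AT89C52 code) of regular-sampled three-phase SPWM duty cycles derived from the area-equivalence idea. All parameter names and values are assumptions.

```python
import math

def spwm_duties(m, f_out, f_carrier):
    """Regular-sampled three-phase SPWM: one duty cycle per carrier period.

    m         -- modulation index (0..1), assumed
    f_out     -- desired output frequency in Hz, assumed
    f_carrier -- switching/carrier frequency in Hz, assumed
    """
    pulses = int(f_carrier / f_out)        # carrier periods per output cycle
    duties = []
    for k in range(pulses):
        theta = 2 * math.pi * k / pulses   # reference angle sampled each pulse
        # Area equivalence: each pulse's on-time matches the average of the
        # sinusoidal reference over the carrier period (approximated here by
        # a single sample), giving duty = (1 + m*sin(theta)) / 2.
        a = 0.5 * (1 + m * math.sin(theta))
        b = 0.5 * (1 + m * math.sin(theta - 2 * math.pi / 3))
        c = 0.5 * (1 + m * math.sin(theta + 2 * math.pi / 3))
        duties.append((a, b, c))
    return duties

# Example: 50 Hz output, 3 kHz carrier, 80% modulation depth (hypothetical)
table = spwm_duties(m=0.8, f_out=50, f_carrier=3000)
print(len(table), table[0])
```

On a microcontroller, such a table would typically be precomputed and scaled to timer-compare values rather than evaluated at run time.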
Two stage gear tooth dynamics program
NASA Technical Reports Server (NTRS)
Boyd, Linda S.
1989-01-01
The epicyclic gear dynamics program was expanded to add the option of evaluating the tooth pair dynamics for two epicyclic gear stages with peripheral components. This was a practical extension to the program as multiple gear stages are often used for speed reduction, space, weight, and/or auxiliary units. The option was developed for either stage to be a basic planetary, star, single external-external mesh, or single external-internal mesh. The two stage system allows for modeling of the peripherals with an input mass and shaft, an output mass and shaft, and a connecting shaft. Execution of the initial test case indicated an instability in the solution with the tooth pair loads growing to excessive magnitudes. A procedure to trace the instability is recommended as well as a method of reducing the program's computation time by reducing the number of boundary condition iterations.
NASA Space Engineering Research Center for VLSI systems design
NASA Technical Reports Server (NTRS)
1991-01-01
This annual review reports the center's activities and findings on very large scale integration (VLSI) systems design for 1990, including project status, financial support, publications, the NASA Space Engineering Research Center (SERC) Symposium on VLSI Design, research results, and outreach programs. Processor chips completed or under development are listed. Research results summarized include a design technique to harden complementary metal oxide semiconductors (CMOS) memory circuits against single event upset (SEU); improved circuit design procedures; and advances in computer aided design (CAD), communications, computer architectures, and reliability design. Also described is a high school teacher program that exposes teachers to the fundamentals of digital logic design.
A computer program for two-particle generalized coefficients of fractional parentage
NASA Astrophysics Data System (ADS)
Deveikis, A.; Juodagalvis, A.
2008-10-01
We present a FORTRAN90 program GCFP for the calculation of the generalized coefficients of fractional parentage (generalized CFPs or GCFP). The approach is based on the observation that the multi-shell CFPs can be expressed in terms of single-shell CFPs, while the latter can be readily calculated employing a simple enumeration scheme of antisymmetric A-particle states and an efficient method of construction of the idempotent matrix eigenvectors. The program provides fast calculation of GCFPs for a given particle number and produces results possessing numerical uncertainties below the desired tolerance. A single j-shell is defined by four quantum numbers, (e,l,j,t). A supplemental C++ program parGCFP allows calculations to be done in batches and/or in parallel. Program summary: Program title: GCFP, parGCFP Catalogue identifier: AEBI_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 17 199 No. of bytes in distributed program, including test data, etc.: 88 658 Distribution format: tar.gz Programming language: FORTRAN 77/90 (GCFP), C++ (parGCFP) Computer: Any computer with suitable compilers. The program GCFP requires a FORTRAN 77/90 compiler. The auxiliary program parGCFP requires a GNU-C++ compatible compiler, while its parallel version additionally requires MPI-1 standard libraries Operating system: Linux (Ubuntu, Scientific) (all programs), also checked on Windows XP (GCFP, serial version of parGCFP) RAM: The memory demand depends on the computation and output mode. If this mode is not 4, the program GCFP demands the following amounts of memory on a computer with a Linux operating system. It requires around 2 MB of RAM for the A=12 system at E⩽2. Computation of the A=50 particle system requires around 60 MB of RAM at E=0 and ~70 MB at E=2 (note, however, that the calculation of this system will take a very long time). If the computation and output mode is set to 4, the memory demands by GCFP are significantly larger. Calculation of GCFPs of the A=12 system at E=1 requires 145 MB. The program parGCFP requires an additional 2.5 and 4.5 MB of memory for the serial and parallel versions, respectively. Classification: 17.18 Nature of problem: The program GCFP generates a list of two-particle coefficients of fractional parentage for several j-shells with isospin. Solution method: The method is based on the observation that multishell coefficients of fractional parentage can be expressed in terms of single-shell CFPs [1]. The latter are calculated using the algorithm [2,3] for a spectral decomposition of an antisymmetrization operator matrix Y. The coefficients of fractional parentage are those eigenvectors of the antisymmetrization operator matrix Y that correspond to unit eigenvalues. A computer code for these coefficients is available [4]. The program GCFP offers computation of two-particle multishell coefficients of fractional parentage. The program parGCFP allows a batch calculation using one input file. Sets of GCFPs are independent and can be calculated in parallel. Restrictions: A<86 when E=0 (due to the memory constraints); small numbers of particles allow significantly higher excitations, though the shell with j⩾11/2 cannot get full (it is an implementation constraint).
Unusual features: Using the program GCFP it is possible to determine allowed particle configurations without the GCFP computation. The GCFPs can be calculated either for all particle configurations at once or for a specified particle configuration. The values of GCFPs can be printed out with a complete specification in either one file or with the parent and daughter configurations printed in separate files. The latter output mode requires additional time and RAM. It is possible to restrict the (J,T) values of the considered particle configurations. (Here J is the total angular momentum and T is the total isospin of the system.) The program parGCFP produces several result files, the number of which equals the number of particle configurations. To work correctly, the program GCFP needs to be compiled to read parameters from the standard input (the default setting). Running time: It depends on the size of the problem. The minimum time is required if the computation and output mode (CompMode) is not 4, but the resulting file is larger. A system with A=12 particles at E=0 (all 9411 GCFPs) took around 1 sec on a Pentium4 2.8 GHz processor with 1 MB L2 cache. The program required about 14 min to calculate all 1.3×10 GCFPs of E=1. The time for all 5.5×10 GCFPs of E=2 was about 53 hours. For this number of particles, the calculation time of both E=0 and E=1 with CompMode = 1 and 4 is nearly the same, when no other processes are running. The case of E=2 could not be calculated with CompMode = 4, because the RAM was insufficient. In general, the latter CompMode requires a longer computation time, although the resulting files are smaller in size. The program parGCFP adds virtually no time overhead. Its parallel version speeds up the calculation; however, the results need to be collected from several files created for each configuration. References: [1] J. Levinsonas, Works of Lithuanian SSR Academy of Sciences 4 (1957) 17. [2] A. Deveikis, A. Bončkus, R. Kalinauskas, Lithuanian Phys. J. 41 (2001) 3. [3] A. Deveikis, R.K. Kalinauskas, B.R. Barrett, Ann. Phys. 296 (2002) 287. [4] A. Deveikis, Comput. Phys. Comm. 173 (2005) 186. (CPC Catalogue ID. ADWI_v1_0)
An autonomous molecular computer for logical control of gene expression.
Benenson, Yaakov; Gil, Binyamin; Ben-Dor, Uri; Adar, Rivka; Shapiro, Ehud
2004-05-27
Early biomolecular computer research focused on laboratory-scale, human-operated computers for complex computational problems. Recently, simple molecular-scale autonomous programmable computers were demonstrated allowing both input and output information to be in molecular form. Such computers, using biological molecules as input data and biologically active molecules as outputs, could produce a system for 'logical' control of biological processes. Here we describe an autonomous biomolecular computer that, at least in vitro, logically analyses the levels of messenger RNA species, and in response produces a molecule capable of affecting levels of gene expression. The computer operates at a concentration of close to a trillion computers per microlitre and consists of three programmable modules: a computation module, that is, a stochastic molecular automaton; an input module, by which specific mRNA levels or point mutations regulate software molecule concentrations, and hence automaton transition probabilities; and an output module, capable of controlled release of a short single-stranded DNA molecule. This approach might be applied in vivo to biochemical sensing, genetic engineering and even medical diagnosis and treatment. As a proof of principle we programmed the computer to identify and analyse mRNA of disease-related genes associated with models of small-cell lung cancer and prostate cancer, and to produce a single-stranded DNA molecule modelled after an anticancer drug.
NASA Technical Reports Server (NTRS)
Vlahos, William
2005-01-01
eDirectory is a computer program that makes it possible to view entries in the Jet Propulsion Laboratory (JPL) telephone directory by use of PalmPilot™ (or equivalent) personal digital assistants. When one uses eDirectory, a single click causes the downloading of a current copy of the directory (which is updated nightly) from a server. The downloaded directory data can be sorted and searched. The program can append a "JPL" category and save directory information in a file that can be imported into the Palm Desktop™ software.
Parallelization of Program to Optimize Simulated Trajectories (POST3D)
NASA Technical Reports Server (NTRS)
Hammond, Dana P.; Korte, John J. (Technical Monitor)
2001-01-01
This paper describes the parallelization of the Program to Optimize Simulated Trajectories (POST3D). POST3D uses a gradient-based optimization algorithm that reaches an optimum design point by moving from one design point to the next. The gradient calculations required to complete the optimization process dominate the computational time and have been parallelized using a Single Program Multiple Data (SPMD) approach on a distributed-memory NUMA (non-uniform memory access) architecture. The Origin2000 was used for the tests presented.
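As a rough illustration of the parallelization strategy described (not POST3D's actual implementation), the sketch below distributes independent finite-difference gradient components across worker processes. The objective function and every name here are hypothetical stand-ins.

```python
from multiprocessing import Pool
import numpy as np

def objective(x):
    # Stand-in for an expensive trajectory simulation (assumption).
    return float(np.sum(x ** 2) + np.prod(np.cos(x)))

def partial_derivative(args):
    x, i, h = args
    xp = x.copy()
    xp[i] += h                              # forward-difference perturbation
    return (objective(xp) - objective(x)) / h

def gradient(x, h=1e-6, workers=4):
    # Each gradient component needs one independent perturbed evaluation,
    # so the evaluations can run on separate processors, SPMD-style.
    with Pool(workers) as pool:
        return np.array(pool.map(partial_derivative,
                                 [(x, i, h) for i in range(len(x))]))

if __name__ == "__main__":
    print(gradient(np.ones(6)))
```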
2012-12-01
[Report documentation-page and glossary residue; recoverable details: DARPA-funded work on domain-specific languages (contract FA8750-10-1-0191, program element 61101E) by Armando Fox. Surviving abstract fragment: applications demand performance, but usually must rely on efficiency programmers who are experts in explicit parallel programming to achieve it.]
Space shuttle propulsion estimation development verification, volume 1
NASA Technical Reports Server (NTRS)
Rogers, Robert M.
1989-01-01
The results of the Propulsion Estimation Development Verification are summarized. A computer program developed under a previous contract (NAS8-35324) was modified to include improved models for the Solid Rocket Booster (SRB) internal ballistics, the Space Shuttle Main Engine (SSME) power coefficient model, the vehicle dynamics using quaternions, and an improved Kalman filter algorithm based on the U-D factorized algorithm. As additional output, the estimated propulsion performance for each device is computed with the associated 1-sigma bounds. The outputs of the estimation program are provided in graphical plots. An additional effort was expended to examine the use of the estimation approach to evaluate single engine test data. In addition to the propulsion estimation program PFILTER, a program was developed to produce a best estimate of trajectory (BET). The program LFILTER also uses the U-D factorized form of the Kalman filter, as in the propulsion estimation program PFILTER. The necessary definitions and equations explaining the Kalman filtering approach for the PFILTER program, the models used for this application for dynamics and measurements, program description, and program operation are presented.
Tools and techniques for computational reproducibility.
Piccolo, Stephen R; Frampton, Michael B
2016-07-11
When reporting research findings, scientists document the steps they followed so that others can verify and build upon the research. When those steps have been described in sufficient detail that others can retrace the steps and obtain similar results, the research is said to be reproducible. Computers play a vital role in many research disciplines and present both opportunities and challenges for reproducibility. Computers can be programmed to execute analysis tasks, and those programs can be repeated and shared with others. The deterministic nature of most computer programs means that the same analysis tasks, applied to the same data, will often produce the same outputs. However, in practice, computational findings often cannot be reproduced because of complexities in how software is packaged, installed, and executed, and because of limitations associated with how scientists document analysis steps. Many tools and techniques are available to help overcome these challenges; here we describe seven such strategies. With a broad scientific audience in mind, we describe the strengths and limitations of each approach, as well as the circumstances under which each might be applied. No single strategy is sufficient for every scenario; thus we emphasize that it is often useful to combine approaches.
Airborne Intelligent Display (AID) Phase I Software Description,
1983-10-24
[Table-of-contents and figure-list residue from the report; recoverable design-approach fragment: the stated objectives were met by distributing the processing load among multiple Z80 single-board computers (SBCs).]
Parallel computation and the Basis system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, G.R.
1992-12-16
A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communication costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.
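A minimal sketch of the master-and-slaves, domain-decomposition pattern the abstract describes, using Python's multiprocessing in place of Basis/PVM. The smoothing kernel is purely illustrative, and the halo exchange a real solver would need at subdomain boundaries is deliberately omitted.

```python
from multiprocessing import Pool
import numpy as np

def relax_subdomain(block):
    # Slave: one Jacobi-style smoothing pass on its own subdomain.
    # (Boundary/halo exchange between subdomains is omitted for brevity.)
    out = block.copy()
    out[1:-1] = 0.5 * (block[:-2] + block[2:])
    return out

if __name__ == "__main__":
    field = np.linspace(0.0, 1.0, 1_000_000)
    # Master: partition the domain, farm the pieces out, reassemble.
    blocks = np.array_split(field, 8)
    with Pool(8) as workers:
        field = np.concatenate(workers.map(relax_subdomain, blocks))
    print(field[:5])
```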
Parallel computation and the Basis system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, G.R.
1993-05-01
A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communications costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.
HeNCE: A Heterogeneous Network Computing Environment
Beguelin, Adam; Dongarra, Jack J.; Geist, George Al; ...
1994-01-01
Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower-level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.
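The following sketch illustrates the core idea of executing such a dataflow graph: a node is submitted as soon as all of its predecessors complete, so independent nodes run in parallel. This is a generic Python illustration, not HeNCE's graphical language or PVM machinery, and the example graph is invented.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical dataflow graph: node -> (function, predecessor nodes)
graph = {
    "load": (lambda: 41, []),
    "inc":  (lambda x: x + 1, ["load"]),
    "dbl":  (lambda x: 2 * x, ["load"]),
    "sum":  (lambda a, b: a + b, ["inc", "dbl"]),
}

def run_dag(graph):
    done, futures = {}, {}
    pending = dict(graph)
    with ThreadPoolExecutor() as pool:
        while pending:
            # Submit every node whose predecessors have all completed;
            # "inc" and "dbl" are submitted together and overlap.
            ready = [n for n, (_, deps) in pending.items()
                     if all(d in done for d in deps)]
            for n in ready:
                fn, deps = pending.pop(n)
                futures[n] = pool.submit(fn, *[done[d] for d in deps])
            for n in list(futures):
                done[n] = futures.pop(n).result()
    return done

print(run_dag(graph)["sum"])   # (41 + 1) + (2 * 41) = 124
```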
Besnier, Francois; Glover, Kevin A.
2013-01-01
This software package provides an R-based framework to make use of multi-core computers when running analyses in the population genetics program STRUCTURE. It is especially aimed at those users of STRUCTURE who deal with numerous and repeated data analyses and who could take advantage of an efficient script to automatically distribute STRUCTURE jobs among multiple processors. It also provides additional functions to divide analyses among combinations of populations within a single data set without the need to manually produce multiple projects, as is currently the case in STRUCTURE. The package consists of two main functions: MPI_structure() and parallel_structure() as well as an example data file. We compared the performance in computing time for this example data on two computer architectures and showed that the use of the present functions can result in several-fold improvements in terms of computation time. ParallelStructure is freely available at https://r-forge.r-project.org/projects/parallstructure/. PMID:23923012
SPSS and SAS programming for the testing of mediation models.
Dudley, William N; Benuzillo, Jose G; Carrico, Mineh S
2004-01-01
Mediation modeling can explain the nature of the relation among three or more variables. In addition, it can be used to show how a variable mediates the relation between levels of intervention and outcome. The Sobel test, developed in 1990, provides a statistical method for determining the influence of a mediator on an intervention or outcome. Although interactive Web-based and stand-alone methods exist for computing the Sobel test, SPSS and SAS programs that automatically run the required regression analyses and computations increase the accessibility of mediation modeling to nursing researchers. The aim of this article is to illustrate the utility of the Sobel test and to make this programming available to the Nursing Research audience in both SAS and SPSS. The history, logic, and technical aspects of mediation testing are introduced. The syntax files sobel.sps and sobel.sas, created to automate the computation of the regression analysis and test statistic, are available from the corresponding author. The reported programming allows the user to complete mediation testing with the user's own data in a single-step fashion. A technical manual included with the programming provides instruction on program use and interpretation of the output. Mediation modeling is a useful tool for describing the relation between three or more variables. Programming and manuals for using this model are made available.
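For readers without SAS or SPSS, a hedged Python sketch of the same core computation follows: fit the two regressions and form the Sobel z statistic, z = ab / sqrt(b^2*sa^2 + a^2*sb^2), for the indirect effect. The synthetic data and all names are illustrative; this is not the authors' sobel.sps/sobel.sas code.

```python
import numpy as np
from scipy import stats

def sobel_test(x, m, y):
    # Path a: regress the mediator on the predictor.
    a_fit = stats.linregress(x, m)
    a, sa = a_fit.slope, a_fit.stderr
    # Path b: regress the outcome on mediator and predictor; keep the
    # mediator slope and its standard error.
    X = np.column_stack([np.ones_like(x), m, x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    cov = np.linalg.inv(X.T @ X) * (resid @ resid / dof)
    b, sb = beta[1], np.sqrt(cov[1, 1])
    # Sobel standard error of the indirect effect a*b.
    z = (a * b) / np.sqrt(b**2 * sa**2 + a**2 * sb**2)
    return z, 2 * stats.norm.sf(abs(z))   # z statistic, two-sided p-value

rng = np.random.default_rng(1)
x = rng.normal(size=200)
m = 0.6 * x + rng.normal(size=200)        # synthetic mediation chain
y = 0.5 * m + rng.normal(size=200)
print(sobel_test(x, m, y))
```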
Computer analysis of digital well logs
Scott, James H.
1984-01-01
A comprehensive system of computer programs has been developed by the U.S. Geological Survey for analyzing digital well logs. The programs are operational on a minicomputer in a research well-logging truck, making it possible to analyze and replot the logs while at the field site. The minicomputer also serves as a controller of digitizers, counters, and recorders during acquisition of well logs. The analytical programs are coordinated with the data acquisition programs in a flexible system that allows the operator to make changes quickly and easily in program variables such as calibration coefficients, measurement units, and plotting scales. The programs are designed to analyze the following well-logging measurements: natural gamma-ray, neutron-neutron, dual-detector density with caliper, magnetic susceptibility, single-point resistance, self potential, resistivity (normal and Wenner configurations), induced polarization, temperature, sonic delta-t, and sonic amplitude. The computer programs are designed to make basic corrections for depth displacements, tool response characteristics, hole diameter, and borehole fluid effects (when applicable). Corrected well-log measurements are output to magnetic tape or plotter with measurement units transformed to petrophysical and chemical units of interest, such as grade of uranium mineralization in percent eU3O8, neutron porosity index in percent, and sonic velocity in kilometers per second.
The study of microstrip antenna arrays and related problems
NASA Technical Reports Server (NTRS)
Lo, Y. T.
1986-01-01
In February, an initial computer program to be used in analyzing the four-element array module was completed. This program performs the analysis of modules composed of four rectangular patches which are corporately fed by a microstrip line network terminated in four identical load impedances. Currently, a rigorous full-wave analysis of various types of microstrip line feed structures and patches is being performed. These analyses include the microstrip line feed between layers with different electrical parameters. A method of moments was implemented for the case of a single dielectric layer and microstrip-line-fed rectangular patches in which the primary source is assumed to be a magnetic current ribbon across the line some distance from the patch. Measured values are compared with those computed by the program.
Multi-dimensional Rankings, Program Termination, and Complexity Bounds of Flowchart Programs
NASA Astrophysics Data System (ADS)
Alias, Christophe; Darte, Alain; Feautrier, Paul; Gonnord, Laure
Proving the termination of a flowchart program can be done by exhibiting a ranking function, i.e., a function from the program states to a well-founded set, which strictly decreases at each program step. A standard method to automatically generate such a function is to compute invariants for each program point and to search for a ranking in a restricted class of functions that can be handled with linear programming techniques. Previous algorithms based on affine rankings either are applicable only to simple loops (i.e., single-node flowcharts) and rely on enumeration, or are not complete in the sense that they are not guaranteed to find a ranking in the class of functions they consider, if one exists. Our first contribution is to propose an efficient algorithm to compute ranking functions: It can handle flowcharts of arbitrary structure, the class of candidate rankings it explores is larger, and our method, although greedy, is provably complete. Our second contribution is to show how to use the ranking functions we generate to get upper bounds for the computational complexity (number of transitions) of the source program. This estimate is a polynomial, which means that we can handle programs with more than linear complexity. We applied the method on a collection of test cases from the literature. We also show the links and differences with previous techniques based on the insertion of counters.
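A toy illustration of the certificate at the heart of this approach: a candidate (affine) ranking function is valid if it is nonnegative on reachable states and strictly decreases on every transition. The sketch below checks this by brute force over a bounded state space, standing in for the paper's linear-programming machinery; the example loop is invented.

```python
# Flowchart loop: while x > 0 and y > 0: if x > y: x -= 1 else: y -= 1
def transitions(x, y):
    if x > 0 and y > 0:
        yield (x - 1, y) if x > y else (x, y - 1)

def is_ranking(r, bound=50):
    # r must be nonnegative on states and drop strictly at every step.
    for x in range(bound):
        for y in range(bound):
            for nx, ny in transitions(x, y):
                if r(x, y) < 0 or r(nx, ny) >= r(x, y):
                    return False
    return True

print(is_ranking(lambda x, y: x + y))   # True: certifies termination
print(is_ranking(lambda x, y: x))       # False: unchanged when y -= 1
```

The maximum of a valid ranking over the initial states also bounds the number of transitions the program can take, which is the route from rankings to the complexity bounds described above.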
1992-12-28
analysis. Marvin Minsky, carefully applying mathematical techniques, developed rigorous theorems regarding network operation. His research led to the ... electrical circuits but was later converted to computer simulation, which is still commonly used today. Early success by Marvin Minsky, Frank ... publication of the book Perceptrons (Minsky and Papert 1969), in which he and Seymour Papert proved that the single-layer networks then in use were ...
Automated Diversity in Computer Systems
2005-09-01
traces that started with trace heads, namely backwards-taken branches. These branches are indicative of loops within the program, and Dynamo assumes that ... would be the ones the program would normally take. Therefore when a trace head became hot (was visited enough times), only a single code trace would ... all encountered trace heads. When an interesting instruction is being emulated, the tracing code checks to see if it has been encountered before
A Programming Framework for Scientific Applications on CPU-GPU Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owens, John
2013-03-24
At a high level, my research interests center around designing, programming, and evaluating computer systems that use new approaches to solve interesting problems. The rapid change of technology allows a variety of different architectural approaches to computationally difficult problems, and a constantly shifting set of constraints and trends makes the solutions to these problems both challenging and interesting. One of the most important recent trends in computing has been a move to commodity parallel architectures. This sea change is motivated by the industry's inability to continue to profitably increase performance on a single processor and instead to move to multiple parallel processors. In the period of review, my most significant work has been leading a research group looking at the use of the graphics processing unit (GPU) as a general-purpose processor. GPUs can potentially deliver performance superior to their CPU counterparts on a broad range of problems, but effectively mapping complex applications to a parallel programming model with an emerging programming environment is a significant and important research problem.
Flexible language constructs for large parallel programs
NASA Technical Reports Server (NTRS)
Rosing, Matthew; Schnabel, Robert
1993-01-01
The goal of the research described is to develop flexible language constructs for writing large data-parallel numerical programs for distributed memory (MIMD) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include SIMD (Single Instruction Multiple Data), SPMD (Single Program Multiple Data), and sequential programs annotated with data distribution statements. The two primary models for communication include implicit communication based on shared memory and explicit communication based on messages. None of these models by itself seems sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. An overview of a new language that combines many of these programming models in a clean manner is given. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. An overview of the language and a discussion of some of the critical implementation details are given.
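As a concrete instance of the SPMD-plus-explicit-message-passing model discussed above (illustrative only, using mpi4py rather than the language proposed in the paper):

```python
# Run with: mpiexec -n 4 python spmd_demo.py   (filename is hypothetical)
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# The same program runs on every process, each on a different data slice
# (the SPMD model).
n = 1_000_000
chunk = np.arange(rank, n, size, dtype=np.float64)   # strided decomposition
local = float(np.sum(chunk ** 2))

# Explicit communication: combine the partial sums on rank 0.
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("sum of squares:", total)
```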
Extended Lagrangian Density Functional Tight-Binding Molecular Dynamics for Molecules and Solids.
Aradi, Bálint; Niklasson, Anders M N; Frauenheim, Thomas
2015-07-14
A computationally fast quantum mechanical molecular dynamics scheme using an extended Lagrangian density functional tight-binding formulation has been developed and implemented in the DFTB+ electronic structure program package for simulations of solids and molecular systems. The scheme combines the computational speed of self-consistent density functional tight-binding theory with the efficiency and long-term accuracy of extended Lagrangian Born-Oppenheimer molecular dynamics. For systems without self-consistent charge instabilities, only a single diagonalization or construction of the single-particle density matrix is required in each time step. The molecular dynamics simulation scheme can be applied to a broad range of problems in materials science, chemistry, and biology.
Guo, Weixing; Langevin, C.D.
2002-01-01
This report documents a computer program (SEAWAT) that simulates variable-density, transient, ground-water flow in three dimensions. The source code for SEAWAT was developed by combining MODFLOW and MT3DMS into a single program that solves the coupled flow and solute-transport equations. The SEAWAT code follows a modular structure, and thus, new capabilities can be added with only minor modifications to the main program. SEAWAT reads and writes standard MODFLOW and MT3DMS data sets, although some extra input may be required for some SEAWAT simulations. This means that many of the existing pre- and post-processors can be used to create input data sets and analyze simulation results. Users familiar with MODFLOW and MT3DMS should have little difficulty applying SEAWAT to problems of variable-density ground-water flow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacon, Charles; Bell, Greg; Canon, Shane
The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.
Algorithms and Parametric Studies for Assessing Effects of Two-Point Contact
DOT National Transportation Integrated Search
1984-02-01
This report describes analyses conducted to assess the effects of two-point wheel-rail contact on a single wheel on the prediction of wheel-rail forces, and for including these effects in a computer program for predicting curving behavior of rail veh...
DOT National Transportation Integrated Search
1995-09-05
The Run-Off-Road Collision Avoidance Using IVHS Countermeasures program aims to address the single-vehicle crash problem through application of technology to prevent and/or reduce the severity of these crashes. This report documents the RORSIM comput...
Teaching Clinical Neurology with the PLATO IV Computer System
ERIC Educational Resources Information Center
Parker, Alan; Trynda, Richard
1975-01-01
A "Neurox" program entitled "Canine Neurological Diagnosis" developed at the University of Illinois College of Veterinary Medicine enables a student to obtain the results of 78 possible neurological tests or associated questions on a single case. A lesson and possible adaptations are described. (LBH)
SU(2) lattice gauge theory simulations on Fermi GPUs
NASA Astrophysics Data System (ADS)
Cardoso, Nuno; Bicudo, Pedro
2011-05-01
In this work we explore the performance of CUDA in quenched lattice SU(2) simulations. CUDA, the NVIDIA Compute Unified Device Architecture, is a hardware and software architecture developed by NVIDIA for computing on the GPU. We present an analysis and performance comparison between the GPU and CPU in single and double precision. Analyses with multiple GPUs and two different architectures (G200 and Fermi architectures) are also presented. In order to obtain a high performance, the code must be optimized for the GPU architecture, i.e., an implementation that exploits the memory hierarchy of the CUDA programming model. We produce codes for the Monte Carlo generation of SU(2) lattice gauge configurations, for the mean plaquette, for the Polyakov loop at finite T and for the Wilson loop. We also present results for the potential using many configurations (50,000) without smearing and almost 2000 configurations with APE smearing. With two Fermi GPUs we have achieved an excellent performance of 200× the speed of one CPU, in single precision, around 110 Gflops/s. We also find that, using the Fermi architecture, double precision computations for the static quark-antiquark potential are not much slower (less than 2× slower) than single precision computations.
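To make the "mean plaquette" observable concrete, here is a small NumPy sketch (not the authors' CUDA code) that builds random SU(2) link variables from unit quaternions and averages the plaquette trace; for such a "hot" random start the mean is near zero. The lattice size is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
L, D = 4, 4   # lattice extent and number of dimensions (assumed)

def random_su2(shape):
    # SU(2) element from a unit quaternion: U = a0*I + i*(a . sigma).
    a = rng.normal(size=shape + (4,))
    a /= np.linalg.norm(a, axis=-1, keepdims=True)
    u = np.empty(shape + (2, 2), dtype=complex)
    u[..., 0, 0] = a[..., 0] + 1j * a[..., 3]
    u[..., 0, 1] = a[..., 2] + 1j * a[..., 1]
    u[..., 1, 0] = -a[..., 2] + 1j * a[..., 1]
    u[..., 1, 1] = a[..., 0] - 1j * a[..., 3]
    return u

def dagger(u):
    return np.conj(np.swapaxes(u, -1, -2))

U = random_su2((L,) * D + (D,))   # link variables U_mu(x), hot start

def plaquette_mean(U):
    tot, count = 0.0, 0
    for mu in range(D):
        for nu in range(mu + 1, D):
            Umu, Unu = U[..., mu, :, :], U[..., nu, :, :]
            # P = U_mu(x) U_nu(x+mu) U_mu(x+nu)^dag U_nu(x)^dag
            P = (Umu @ np.roll(Unu, -1, axis=mu)
                 @ dagger(np.roll(Umu, -1, axis=nu)) @ dagger(Unu))
            tot += np.trace(P, axis1=-2, axis2=-1).real.mean() / 2
            count += 1
    return tot / count

print(plaquette_mean(U))   # approximately 0 for random links
```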
Incinerator ash dissolution model for the system: Plutonium, nitric acid and hydrofluoric acid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, E V
1988-06-01
This research accomplished two goals. The first was to develop a computer program to simulate a cascade dissolver system. This program would be used to predict the bulk rate of dissolution of incinerator ash. The other goal was to verify the model in a single-stage dissolver system using Dy2O3. PuO2 (and all of the species in the incinerator ash) was assumed to exist as spherical particles. A model was used to calculate the bulk rate of plutonium oxide dissolution using fluoride as a catalyst. Once the bulk rate of PuO2 dissolution and the dissolution rate of all soluble species were calculated, mass and energy balances were written. A computer program simulating the cascade dissolver system was then developed. Tests were conducted on a single-stage dissolver. A simulated incinerator ash mixture was made and added to the dissolver. CaF2 was added to the mixture as a catalyst. A 9M HNO3 solution was pumped into the dissolver system. Samples of the dissolver effluent were analyzed for dissolved plutonium and F concentrations. The computer program proved satisfactory in predicting the F concentrations in the dissolver effluent. The experimental sparge air flow rate was predicted to within 5.5%. The experimental percentage of solids dissolved (51.34%) compared favorably to the percentage of incinerator ash dissolved (47%) in previous work. No general conclusions on model verification could be reached. 56 refs., 11 figs., 24 tabs.
NASA Technical Reports Server (NTRS)
Burns, K. Lee; Altino, Karen
2008-01-01
The Marshall Space Flight Center Natural Environments Branch has a long history of expertise in the modeling and computation of statistical launch availabilities with respect to weather conditions. Their existing data analysis product, the Atmospheric Parametric Risk Assessment (APRA) tool, computes launch availability given an input set of vehicle hardware and/or operational weather constraints by calculating the climatological probability of exceeding the specified constraint limits. APRA has been used extensively to provide the Space Shuttle program the ability to estimate impacts that various proposed design modifications would have on overall launch availability. The model accounts for both seasonal and diurnal variability at a single geographic location and provides output probabilities for a single arbitrary launch attempt. Recently, the Shuttle program has shown interest in having additional capabilities added to the APRA model, including analysis of humidity parameters, inclusion of landing site weather to produce landing availability, and concurrent analysis of multiple sites, to assist in operational landing site selection. In addition, the Constellation program has also expressed interest in the APRA tool, and has requested several additional capabilities to address some Constellation-specific issues, both in the specification and verification of design requirements and in the development of operations concepts. The combined scope of the requested capability enhancements suggests an evolution of the model beyond a simple revision process. Development has begun for a new data analysis tool that will satisfy the requests of both programs. This new tool, Probabilities of Atmospheric Conditions and Environmental Risk (PACER), will provide greater flexibility and significantly enhanced functionality compared to the currently existing tool.
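The underlying climatological computation is straightforward to sketch: availability is the fraction of historical weather observations that satisfy every constraint limit simultaneously. The Python sketch below uses invented constraint names and synthetic climatology; it is not the APRA/PACER code.

```python
import numpy as np

def launch_availability(obs, constraints):
    """Fraction of climatology samples satisfying every constraint.

    obs         -- dict of parameter name -> array of climatology samples
    constraints -- dict of parameter name -> (low, high) allowable range
    """
    ok = np.ones(len(next(iter(obs.values()))), dtype=bool)
    for name, (lo, hi) in constraints.items():
        ok &= (obs[name] >= lo) & (obs[name] <= hi)
    return ok.mean()

rng = np.random.default_rng(7)
climo = {"wind_kts": rng.gamma(4.0, 3.0, 10000),      # synthetic climatology
         "temp_f":   rng.normal(75, 10, 10000)}
limits = {"wind_kts": (0, 20), "temp_f": (40, 99)}    # hypothetical limits
print(f"availability: {launch_availability(climo, limits):.1%}")
```

Seasonal and diurnal variability, as described in the abstract, would amount to computing this fraction separately per month and hour-of-day bin.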
BlueSNP: R package for highly scalable genome-wide association studies using Hadoop clusters.
Huang, Hailiang; Tata, Sandeep; Prill, Robert J
2013-01-01
Computational workloads for genome-wide association studies (GWAS) are growing in scale and complexity, outpacing the capabilities of single-threaded software designed for personal computers. The BlueSNP R package implements GWAS statistical tests in the R programming language and executes the calculations across computer clusters configured with Apache Hadoop, a de facto standard framework for distributed data processing using the MapReduce formalism. BlueSNP makes computationally intensive analyses, such as estimating empirical p-values via data permutation, and searching for expression quantitative trait loci over thousands of genes, feasible for large genotype-phenotype datasets. http://github.com/ibm-bioinformatics/bluesnp
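The structural point, that single-SNP tests are independent and therefore shard cleanly across workers, can be illustrated without Hadoop. The sketch below (generic Python, not BlueSNP's R interface) maps a per-SNP regression over a synthetic genotype matrix; at cluster scale the same map step would be expressed as MapReduce jobs.

```python
import numpy as np
from multiprocessing import Pool
from scipy import stats

rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(5000, 1000))  # SNPs x individuals, synthetic
phenotype = rng.normal(size=1000)                  # synthetic quantitative trait

def snp_test(g):
    # Additive single-SNP model: regress phenotype on genotype dosage (0/1/2).
    return stats.linregress(g, phenotype).pvalue

if __name__ == "__main__":
    # Each SNP's test touches only its own genotype row, so the work
    # partitions trivially across processes (or cluster nodes).
    with Pool() as pool:
        pvals = pool.map(snp_test, genotypes)
    print(min(pvals))
```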
Instrumentino: An Open-Source Software for Scientific Instruments.
Koenka, Israel Joel; Sáiz, Jorge; Hauser, Peter C
2015-01-01
Scientists often need to build dedicated computer-controlled experimental systems. For this purpose, it is becoming common to employ open-source microcontroller platforms, such as the Arduino. These boards and associated integrated software development environments provide affordable yet powerful solutions for the implementation of hardware control of transducers and acquisition of signals from detectors and sensors. It is, however, a challenge to write programs that allow interactive use of such arrangements from a personal computer. This task is particularly complex if some of the included hardware components are connected directly to the computer and not via the microcontroller. A graphical user interface framework, Instrumentino, was therefore developed to allow the creation of control programs for complex systems with minimal programming effort. By writing a single code file, a powerful custom user interface is generated, which enables the automatic running of elaborate operation sequences and observation of acquired experimental data in real time. The framework, which is written in Python, allows extension by users, and is made available as an open source project.
An object-oriented approach to data display and storage: 3 years experience, 25,000 cases.
Sainsbury, D A
1993-11-01
Object-oriented programming techniques were used to develop computer-based data display and storage systems. These have been operating in the 8 anaesthetising areas of the Adelaide Children's Hospital for 3 years. The analogue and serial outputs from an array of patient monitors are connected to IBM-compatible PC-XT computers. The information is displayed on a colour screen as wave-form and trend graphs and in digital format in 'real time'. The trend data is printed simultaneously on a dot matrix printer. This data is also stored for 24 hours on 'hard' disk. The major benefit has been the provision of a single visual focus for all monitored variables. The automatic logging of data has been invaluable in the analysis of critical incidents. The systems were made possible by recent, rapid improvements in computer hardware and software. This paper traces the development of the program and demonstrates the advantages of object-oriented programming techniques.
Improved Boundary Layer Module (BLM) for the Solid Performance Program (SPP)
NASA Astrophysics Data System (ADS)
Coats, D. E.; Cebeci, T.
1982-03-01
The requirements for a replacement to the Bartz boundary layer code, the standard method of computing the performance loss due to viscous effects by the solid performance program, were discussed by the propulsion community along with four nationally recognized boundary layer experts. A consensus was reached regarding the preferred features for the analysis of the replacement code. The major points that were agreed upon are: (1) finite difference methods are preferred over integral methods; (2) a single-equation eddy-viscosity model was considered adequate for the purpose of computing performance loss; (3) a variable grid capability in both coordinate directions would be required; (4) a proven finite difference algorithm which is not stability restricted should be used, that is, an implicit numerical scheme would be required; and (5) the replacement code should be able to compute both turbulent and laminar flows. The program should treat mass addition at the wall as well as be able to calculate a stagnation point starting line.
Li, Xiang; Samei, Ehsan; Segars, W. Paul; Sturgeon, Gregory M.; Colsher, James G.; Toncheva, Greta; Yoshizumi, Terry T.; Frush, Donald P.
2011-01-01
Purpose: Radiation-dose awareness and optimization in CT can greatly benefit from a dose-reporting system that provides dose and risk estimates specific to each patient and each CT examination. As the first step toward patient-specific dose and risk estimation, this article aimed to develop a method for accurately assessing radiation dose from CT examinations. Methods: A Monte Carlo program was developed to model a CT system (LightSpeed VCT, GE Healthcare). The geometry of the system, the energy spectra of the x-ray source, the three-dimensional geometry of the bowtie filters, and the trajectories of source motions during axial and helical scans were explicitly modeled. To validate the accuracy of the program, a cylindrical phantom was built to enable dose measurements at seven different radial distances from its central axis. Simulated radial dose distributions in the cylindrical phantom were validated against ion chamber measurements for single axial scans at all combinations of tube potential and bowtie filter settings. The accuracy of the program was further validated using two anthropomorphic phantoms (a pediatric one-year-old phantom and an adult female phantom). Computer models of the two phantoms were created based on their CT data and were voxelized for input into the Monte Carlo program. Simulated dose at various organ locations was compared against measurements made with thermoluminescent dosimetry chips for both single axial and helical scans. Results: For the cylindrical phantom, simulations differed from measurements by −4.8% to 2.2%. For the two anthropomorphic phantoms, the discrepancies between simulations and measurements ranged between (−8.1%, 8.1%) and (−17.2%, 13.0%) for the single axial scans and the helical scans, respectively. Conclusions: The authors developed an accurate Monte Carlo program for assessing radiation dose from CT examinations. When combined with computer models of actual patients, the program can provide accurate dose estimates for specific patients. PMID:21361208
4P: fast computing of population genetics statistics from large DNA polymorphism panels
Benazzo, Andrea; Panziera, Alex; Bertorelle, Giorgio
2015-01-01
Massive DNA sequencing has significantly increased the amount of data available for population genetics and molecular ecology studies. However, the parallel computation of simple statistics within and between populations from large panels of polymorphic sites is not yet available, making the exploratory analyses of a set or subset of data a very laborious task. Here, we present 4P (parallel processing of polymorphism panels), a stand-alone software program for the rapid computation of genetic variation statistics (including the joint frequency spectrum) from millions of DNA variants in multiple individuals and multiple populations. It handles a standard input file format commonly used to store DNA variation from empirical or simulation experiments. The computational performance of 4P was evaluated using large SNP (single nucleotide polymorphism) datasets from human genomes or obtained by simulations. 4P was faster or much faster than other comparable programs, and the impact of parallel computing using multicore computers or servers was evident. 4P is a useful tool for biologists who need a simple and rapid computer program to run exploratory population genetics analyses in large panels of genomic data. It is also particularly suitable to analyze multiple data sets produced in simulation studies. Unix, Windows, and Mac OS versions are provided, as well as the source code for easier pipeline implementations. PMID:25628874
SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws
NASA Technical Reports Server (NTRS)
Cooke, Daniel; Rushton, Nelson
2013-01-01
With the introduction of new parallel architectures like the cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language that is closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever. In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single process) C or C++, and an order of magnitude less costly than development of comparable parallel code. Moreover, SequenceL not only automatically parallelizes the code, but since it is based on CSP-NT, it is provably race free, thus eliminating the largest quality challenge the parallelized software developer faces.
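A rough Python analogue of the normalize-transpose idea, for illustration only (SequenceL's actual semantics and automatic parallelization go well beyond this): a function defined on scalars is applied over arbitrarily nested sequences, with scalar arguments broadcast to match sequence operands.

```python
def nt(f):
    """Apply scalar function f over arbitrarily nested sequences; a rough
    analogue of SequenceL's normalize-transpose, illustration only."""
    def wrapped(*args):
        if any(isinstance(a, (list, tuple)) for a in args):
            n = max(len(a) for a in args if isinstance(a, (list, tuple)))
            # Normalize: broadcast scalars to the sequence length;
            # transpose: recurse elementwise across the operands.
            rows = [a if isinstance(a, (list, tuple)) else [a] * n
                    for a in args]
            return [wrapped(*col) for col in zip(*rows)]
        return f(*args)
    return wrapped

add = nt(lambda x, y: x + y)
print(add(1, 2))                        # 3
print(add([1, 2, 3], 10))               # [11, 12, 13]
print(add([[1, 2], [3, 4]], [10, 20]))  # [[11, 12], [23, 24]]
```

Because the elementwise applications are independent, an implementation is free to evaluate them in parallel, which is the property SequenceL exploits.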
SAWdoubler: A program for counting self-avoiding walks
NASA Astrophysics Data System (ADS)
Schram, Raoul D.; Barkema, Gerard T.; Bisseling, Rob H.
2013-03-01
This article presents SAWdoubler, a package for counting the total number Z_N of self-avoiding walks (SAWs) on a regular lattice by the length-doubling method, the basic concept of which we have published previously. We discuss an algorithm for the creation of all SAWs of length N, efficient storage of these SAWs in a tree data structure, and an algorithm for the computation of correction terms to the count Z_2N for SAWs of double length, removing all combinations of two intersecting single-length SAWs. We present an efficient numbering of the lattice sites that enables exploitation of symmetry and leads to a smaller tree data structure; this numbering is by increasing Euclidean distance from the origin of the lattice. Furthermore, we show how the computation can be parallelised by distributing the iterations of the main loop of the algorithm over the cores of a multicore architecture. Experimental results on the 3D cubic lattice demonstrate that Z_28 can be computed on a dual-core PC in only 1 h and 40 min, with a speedup of 1.56 compared to the single-core computation and a gain of a factor of 26 from using symmetry. We present results for memory use and show how the computation is made to fit in 4 GB RAM. It is easy to extend the SAWdoubler software to other lattices; it is publicly available under the GNU LGPL license. Catalogue identifier: AEOB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Lesser General Public Licence No. of lines in distributed program, including test data, etc.: 2101 No. of bytes in distributed program, including test data, etc.: 19816 Distribution format: tar.gz Programming language: C. Computer: Any computer with a UNIX-like operating system and a C compiler. For large problems, use is made of specific 128-bit integer arithmetic provided by the gcc compiler. Operating system: Any UNIX-like system; developed under Linux and Mac OS 10. Has the code been vectorised or parallelised?: Yes. A parallel version of the code is available in the "Extras" directory of the distribution file. RAM: Problem dependent (2 GB for counting SAWs of length 28 on the 3D cubic lattice) Classification: 16.11. Nature of problem: Computing the number of self-avoiding walks of a given length on a given lattice. Solution method: Length-doubling. Restrictions: The length of the walk must be even. Lattice is 3D simple cubic. Additional comments: The lattice can be replaced by other lattices, such as BCC, FCC, or a 2D square lattice. Running time: Problem dependent (2.5 h using one processor core for length 28 on the 3D cubic lattice).
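For orientation, Z_N itself is easy to define by naive depth-first enumeration, which also shows why the length-doubling method matters: the count grows roughly as 4.68^N on the cubic lattice, so direct enumeration stalls long before N = 28. A small Python sketch (not the SAWdoubler algorithm):

```python
MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def count_saws(n, pos=(0, 0, 0), visited=None):
    # Depth-first enumeration of self-avoiding walks of length n on Z^3.
    if visited is None:
        visited = {pos}
    if n == 0:
        return 1
    total = 0
    for dx, dy, dz in MOVES:
        nxt = (pos[0] + dx, pos[1] + dy, pos[2] + dz)
        if nxt not in visited:        # self-avoidance check
            visited.add(nxt)
            total += count_saws(n - 1, nxt, visited)
            visited.remove(nxt)
    return total

# Z_1..Z_6 on the cubic lattice: 6, 30, 150, 726, 3534, 16926
print([count_saws(n) for n in range(1, 7)])
```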
Ausems, Marlein; Mesters, Ilse; van Breukelen, Gerard; De Vries, Hein
2002-06-01
Smoking prevention programs usually run during school hours. In our study, an out-of-school program was developed consisting of a computer-tailored intervention aimed at the age group before school transition (11- to 12-year-old elementary schoolchildren). The aim of this study is to evaluate the additional effect of out-of-school smoking prevention. One hundred fifty-six participating schools were randomly allocated to one of four research conditions: (a) the in-school condition, an existing seven-lesson program; (b) the out-of-school condition, three computer-tailored letters sent to the students' homes; (c) the in-school and out-of-school condition, a combined approach; (d) the control condition. Pretest and 6 months follow-up data on smoking initiation and continuation, and data on psychosocial variables were collected from 3,349 students. Control and out-of-school conditions differed regarding posttest smoking initiation (18.1 and 10.4%) and regarding posttest smoking continuation (23.5 and 13.1%). Multilevel logistic regression analyses showed positive effects regarding the out-of-school program. Significant effects were not found regarding the in-school program, nor did the combined approach show stronger effects than the single-method approaches. The findings of this study suggest that smoking prevention trials for elementary schoolchildren can be effective when using out-of-school computer-tailored interventions. Copyright 2002 Elsevier Science (USA).
Instrumentation, performance visualization, and debugging tools for multiprocessors
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Fineman, Charles E.; Hontalas, Philip J.
1991-01-01
The need for computing power has forced a migration from serial computation on a single processor to parallel processing on multiprocessor architectures. However, without effective means to monitor (and visualize) program execution, debugging and tuning parallel programs become intractably difficult as program complexity increases with the number of processors. Research on performance evaluation tools for multiprocessors is being carried out at ARC. Besides investigating new techniques for instrumenting, monitoring, and presenting the state of parallel program execution in a coherent and user-friendly manner, prototypes of software tools are being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Our current tool set, the Ames Instrumentation Systems (AIMS), incorporates features from various software systems developed in academia and industry. The execution of FORTRAN programs on the Intel iPSC/860 can be automatically instrumented and monitored. Performance data collected in this manner can be displayed graphically on workstations supporting X-Windows. We have successfully compared various parallel algorithms for computational fluid dynamics (CFD) applications in collaboration with scientists from the Numerical Aerodynamic Simulation Systems Division. By performing these comparisons, we show that performance monitors and debuggers such as AIMS are practical and can illuminate the complex dynamics that occur within parallel programs.
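The basic instrumentation idea, recording timestamped entry/exit events for later visualization, can be shown in a few lines. This is a generic Python illustration, not the AIMS toolkit.

```python
import functools
import time

trace_log = []   # (event, function, timestamp) records for later display

def instrument(fn):
    """Wrap a function so its entry and exit are timestamped into trace_log."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        trace_log.append(("enter", fn.__name__, time.perf_counter()))
        try:
            return fn(*args, **kwargs)
        finally:
            trace_log.append(("exit", fn.__name__, time.perf_counter()))
    return wrapper

@instrument
def solve(n):
    return sum(i * i for i in range(n))

solve(100_000)
for event, name, t in trace_log:
    print(f"{t:.6f} {event:>5} {name}")
```

A real multiprocessor tool additionally tags each record with a processor ID and message events, which is what allows the inter-processor dynamics to be visualized.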
User's manual for the Graphical Constituent Loading Analysis System (GCLAS)
Koltun, G.F.; Eberle, Michael; Gray, J.R.; Glysson, G.D.
2006-01-01
This manual describes the Graphical Constituent Loading Analysis System (GCLAS), an interactive cross-platform program for computing the mass (load) and average concentration of a constituent that is transported in stream water over a period of time. GCLAS computes loads as a function of an equal-interval streamflow time series and an equal- or unequal-interval time series of constituent concentrations. The constituent-concentration time series may be composed of measured concentrations or a combination of measured and estimated concentrations. GCLAS is not intended for use in situations where concentration data (or an appropriate surrogate) are collected infrequently or where an appreciable number of the concentration values are censored. It is assumed that the constituent-concentration time series used by GCLAS adequately represents the true time-varying concentration. Commonly, measured constituent concentrations are collected at a frequency that is less than ideal (from a load-computation standpoint), so estimated concentrations must be inserted in the time series to better approximate the expected chemograph. GCLAS provides tools to facilitate estimation and entry of instantaneous concentrations for that purpose. Water-quality samples collected for load computation frequently are collected in a single vertical or at a single point in a stream cross section. Several factors, some of which may vary as a function of time and (or) streamflow, can affect whether the sample concentrations are representative of the mean concentration in the cross section. GCLAS provides tools to aid the analyst in assessing whether concentrations in samples collected in a single vertical or at a single point in a stream cross section exhibit systematic bias with respect to the mean concentrations. In cases where bias is evident, the analyst can construct coefficient relations in GCLAS to reduce or eliminate the observed bias. GCLAS can export load and concentration data in formats suitable for entry into the U.S. Geological Survey's National Water Information System. GCLAS can also import and export data in formats that are compatible with various commonly used spreadsheet and statistics programs.
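The core load computation that GCLAS automates can be sketched directly: sum instantaneous discharge times concentration over the record, with a unit-conversion constant. The sketch below uses the common USGS convention that 0.0027 converts ft³/s times mg/L to tons/day; the data values are hypothetical.

```python
import numpy as np

def total_load_tons(q_cfs, c_mgl, dt_days=1.0):
    """Constituent mass transported over the record.

    q_cfs   -- equal-interval streamflow series, cubic feet per second
    c_mgl   -- concentration series at the same times, milligrams per liter
    dt_days -- time-step length in days
    0.0027 converts (ft^3/s * mg/L) to tons/day.
    """
    q, c = np.asarray(q_cfs, float), np.asarray(c_mgl, float)
    daily_loads = 0.0027 * q * c           # instantaneous load, tons/day
    return float(np.sum(daily_loads * dt_days))

# Hypothetical 5-day record
print(total_load_tons([120, 150, 400, 380, 200], [12, 15, 55, 40, 20]))
```

The estimation tools the manual describes exist precisely because the concentration series c_mgl is usually much sparser than the streamflow series and must be filled in before this sum is meaningful.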
Toroidal transformer design program with application to inverter circuitry
NASA Technical Reports Server (NTRS)
Dayton, J. A., Jr.
1972-01-01
Estimates of temperature, weight, efficiency, regulation, and final dimensions are included in the output of the computer program for the design of transformers for use in the basic parallel inverter. The program, written in FORTRAN IV, selects a tape wound toroidal magnetic core and, taking temperature, materials, core geometry, skin depth, and ohmic losses into account, chooses the appropriate wire sizes and number of turns for the center tapped primary and single secondary coils. Using the program, 2- and 4-kilovolt-ampere transformers are designed for frequencies from 200 to 3200 Hz and the efficiency of a basic transistor inverter is estimated.
NUMERICAL ANALYSES FOR TREATING DIFFUSION IN SINGLE-, TWO-, AND THREE-PHASE BINARY ALLOY SYSTEMS
NASA Technical Reports Server (NTRS)
Tenney, D. R.
1994-01-01
This package consists of a series of three computer programs for treating one-dimensional transient diffusion problems in single and multiple phase binary alloy systems. An accurate understanding of the diffusion process is important in the development and production of binary alloys. Previous solutions of the diffusion equations were highly restricted in their scope and application. The finite-difference solutions developed for this package are applicable for planar, cylindrical, and spherical geometries with any diffusion-zone size and any continuous variation of the diffusion coefficient with concentration. Special techniques were included to account for differences in molal volumes, initiation and growth of an intermediate phase, disappearance of a phase, and the presence of an initial composition profile in the specimen. In each analysis, an effort was made to achieve good accuracy while minimizing computation time. The solutions to the diffusion equations for single-, two-, and three-phase binary alloy systems are numerically calculated by the three programs NAD1, NAD2, and NAD3. NAD1 treats the diffusion between pure metals which belong to a single-phase system. Diffusion in this system is described by a one-dimensional Fick's second law and will result in a continuous composition variation. For computational purposes, Fick's second law is expressed as an explicit second-order finite difference equation. Finite difference calculations are made by choosing the grid spacing small enough to give convergent solutions of acceptable accuracy. NAD2 treats diffusion between pure metals which form a two-phase system. Diffusion in the two-phase system is described by two partial differential equations (a Fick's second law for each phase) and an interface-flux-balance equation which describes the location of the interface. Actual interface motion is obtained by a mass conservation procedure. To account for changes in the thicknesses of the two phases as diffusion progresses, a variable grid technique developed by Murray and Landis is employed. These equations are expressed in finite difference form and solved numerically. Program NAD3 treats diffusion between pure metals which form a two-phase system with an intermediate third phase. Diffusion in the three-phase system is described by three partial differential expressions of Fick's second law and two interface-flux-balance equations. As with the two-phase case, a variable grid finite difference is used to numerically solve the diffusion equations. Computation time is minimized without sacrificing solution accuracy by treating the three-phase problem as a two-phase problem when the thickness of the intermediate phase is less than a preset value. Comparisons between these programs and other solutions have shown excellent agreement. The programs are written in FORTRAN IV for batch execution on the CDC 6600 with a central memory requirement of approximately 51K (octal) 60 bit words.
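To make the single-phase case concrete, the sketch below advances Fick's second law with the explicit (forward-time, centered-space) finite-difference step that the NAD1 description implies; the grid, diffusion coefficient, and boundary treatment are illustrative assumptions, not values from the original FORTRAN programs.

import numpy as np

def ftcs_diffusion(c, D, dx, dt, steps):
    """Explicit FTCS update for Fick's second law, dc/dt = D d2c/dx2 (planar case)."""
    r = D * dt / dx ** 2
    assert r <= 0.5, "explicit scheme diverges for D*dt/dx^2 > 1/2"
    c = c.copy()
    for _ in range(steps):
        # update interior points; endpoints held fixed (Dirichlet boundaries)
        c[1:-1] += r * (c[2:] - 2.0 * c[1:-1] + c[:-2])
    return c

# illustrative diffusion couple: pure A against pure B
c0 = np.zeros(101)
c0[:51] = 1.0
profile = ftcs_diffusion(c0, D=1.0e-13, dx=1.0e-6, dt=2.0, steps=2000)

The same loop structure extends to the two- and three-phase programs once the interface-flux-balance bookkeeping described above is added.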
Padhi, Radhakant; Unnikrishnan, Nishant; Wang, Xiaohua; Balakrishnan, S N
2006-12-01
Even though dynamic programming offers an optimal control solution in a state feedback form, the method is overwhelmed by computational and storage requirements. Approximate dynamic programming implemented with an Adaptive Critic (AC) neural network structure has evolved as a powerful alternative technique that obviates the need for excessive computations and storage requirements in solving optimal control problems. In this paper, an improvement to the AC architecture, called the "Single Network Adaptive Critic (SNAC)", is presented. This approach is applicable to a wide class of nonlinear systems where the optimal control (stationary) equation can be explicitly expressed in terms of the state and costate variables. The selection of this terminology is guided by the fact that it eliminates the use of one neural network (namely the action network) that is part of a typical dual network AC setup. As a consequence, the SNAC architecture offers three potential advantages: a simpler architecture, a lower computational load, and elimination of the approximation error associated with the eliminated network. In order to demonstrate these benefits and the control synthesis technique using SNAC, two problems have been solved with the AC and SNAC approaches and their computational performances are compared. One of these problems is a real-life Micro-Electro-Mechanical-system (MEMS) problem, which demonstrates that the SNAC technique is applicable to complex engineering systems.
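To see the SNAC successive-approximation idea in miniature, the sketch below specializes it to a discrete-time linear-quadratic problem, where the critic collapses to a matrix mapping the state x_k to the costate lambda_{k+1}, the stationarity condition gives u_k = -R^-1 B^T lambda_{k+1}, and the costate equation supplies the training target. The plant, weights, and batch size are illustrative assumptions; the paper's examples use genuine neural networks and nonlinear dynamics.

import numpy as np

# illustrative discrete-time LQ plant (assumed for this sketch, not from the paper)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

W = np.zeros((2, 2))                 # linear "critic": lambda_{k+1} = W @ x_k
rng = np.random.default_rng(0)

for _ in range(500):
    X = rng.normal(size=(2, 64))             # batch of sampled states x_k
    lam1 = W @ X                             # critic output, lambda_{k+1}
    U = -np.linalg.solve(R, B.T @ lam1)      # stationarity: u_k = -R^-1 B^T lambda_{k+1}
    X1 = A @ X + B @ U                       # propagate to x_{k+1}
    target = Q @ X1 + A.T @ (W @ X1)         # costate equation evaluated at x_{k+1}
    W = target @ np.linalg.pinv(X)           # least-squares refit of the critic

print("implied feedback gain:", np.linalg.solve(R, B.T @ W))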
Method and device for measuring single-shot transient signals
Yin, Yan
2004-05-18
Methods, apparatus, and systems, including computer program products, implementing and using techniques for measuring multi-channel single-shot transient signals. A signal acquisition unit receives one or more single-shot pulses from a multi-channel source. An optical-fiber recirculating loop reproduces the one or more received single-shot optical pulses to form a first multi-channel pulse train for circulation in the recirculating loop, and a second multi-channel pulse train for display on a display device. The optical-fiber recirculating loop also optically amplifies the first circulating pulse train to compensate for signal losses and performs optical multi-channel noise filtration.
Large space structure damping design
NASA Technical Reports Server (NTRS)
Pilkey, W. D.; Haviland, J. K.
1983-01-01
Several FORTRAN subroutines and programs were developed which compute complex eigenvalues of a damped system using different approaches, and which rescale mode shapes to unit generalized mass and make rigid bodies orthogonal to each other. An analytical proof of a Minimum Constrained Frequency Criterion (MCFC) for a single damper is presented. A method to minimize the effect of control spill-over for large space structures is proposed. The characteristic equation of an undamped system with a generalized control law is derived using reanalysis theory. This equation can be implemented in computer programs for efficient eigenvalue analysis or control quasi-synthesis. Methods to control vibrations in large space structures are reviewed and analyzed. The resulting prototype, using an electromagnetic actuator, is described.
NASA Technical Reports Server (NTRS)
Mccarty, R. D.
1980-01-01
The thermodynamic and transport properties of selected cryogens have been programmed into a series of computer routines. Input variables are any two of P, rho or T in the single phase regions and either P or T for the saturated liquid or vapor state. The output is pressure, density, temperature, entropy, enthalpy for all of the fluids and in most cases specific heat capacity and speed of sound. Viscosity and thermal conductivity are also given for most of the fluids. The programs are designed for access by remote terminal; however, they have been written in a modular form to allow the user to select either specific fluids or specific properties for particular needs. The routines cover hydrogen, helium, neon, nitrogen, oxygen, argon, and methane, and include properties for gaseous and liquid states, usually from the triple point to an upper limit of pressure and temperature that varies from fluid to fluid.
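The two-of-three input convention can be captured by a thin dispatch layer over an equation of state. The sketch below uses the ideal-gas law as a stand-in for the fluid-specific correlations of the original routines, so the function name and the default gas constant are illustrative assumptions.

def state_point(P=None, rho=None, T=None, R=296.8):
    """Complete a (P, rho, T) state from any two values using P = rho*R*T.

    R defaults to the specific gas constant of nitrogen in J/(kg K); the
    original routines instead evaluate fluid-specific equations of state.
    """
    given = sum(v is not None for v in (P, rho, T))
    if given != 2:
        raise ValueError("exactly two of P, rho, T must be supplied")
    if P is None:
        P = rho * R * T
    elif rho is None:
        rho = P / (R * T)
    else:
        T = P / (rho * R)
    return P, rho, T

print(state_point(P=101325.0, T=300.0))   # completes the density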
An autonomous molecular computer for logical control of gene expression
Benenson, Yaakov; Gil, Binyamin; Ben-Dor, Uri; Adar, Rivka; Shapiro, Ehud
2013-01-01
Early biomolecular computer research focused on laboratory-scale, human-operated computers for complex computational problems [1-7]. Recently, simple molecular-scale autonomous programmable computers were demonstrated [8-15], allowing both input and output information to be in molecular form. Such computers, using biological molecules as input data and biologically active molecules as outputs, could produce a system for 'logical' control of biological processes. Here we describe an autonomous biomolecular computer that, at least in vitro, logically analyses the levels of messenger RNA species, and in response produces a molecule capable of affecting levels of gene expression. The computer operates at a concentration of close to a trillion computers per microlitre and consists of three programmable modules: a computation module, that is, a stochastic molecular automaton [12-17]; an input module, by which specific mRNA levels or point mutations regulate software molecule concentrations, and hence automaton transition probabilities; and an output module, capable of controlled release of a short single-stranded DNA molecule. This approach might be applied in vivo to biochemical sensing, genetic engineering and even medical diagnosis and treatment. As a proof of principle we programmed the computer to identify and analyse mRNA of disease-related genes [18-22] associated with models of small-cell lung cancer and prostate cancer, and to produce a single-stranded DNA molecule modelled after an anticancer drug. PMID:15116117
Cognitive training in Parkinson disease: cognition-specific vs nonspecific computer training.
Zimmermann, Ronan; Gschwandtner, Ute; Benz, Nina; Hatz, Florian; Schindler, Christian; Taub, Ethan; Fuhr, Peter
2014-04-08
In this study, we compared a cognition-specific computer-based cognitive training program with a motion-controlled computer sports game that is not cognition-specific for their ability to enhance cognitive performance in various cognitive domains in patients with Parkinson disease (PD). Patients with PD were trained with either a computer program designed to enhance cognition (CogniPlus, 19 patients) or a computer sports game with motion-capturing controllers (Nintendo Wii, 20 patients). The effect of training in 5 cognitive domains was measured by neuropsychological testing at baseline and after training. Group differences over all variables were assessed with multivariate analysis of variance, and group differences in single variables were assessed with 95% confidence intervals of mean difference. The groups were similar regarding age, sex, and educational level. Patients with PD who were trained with Wii for 4 weeks performed better in attention (95% confidence interval: -1.49 to -0.11) than patients trained with CogniPlus. In our study, patients with PD derived at least the same degree of cognitive benefit from non-cognition-specific training involving movement as from cognition-specific computerized training. For patients with PD, game consoles may be a less expensive and more entertaining alternative to computer programs specifically designed for cognitive training. This study provides Class III evidence that, in patients with PD, cognition-specific computer-based training is not superior to a motion-controlled computer game in improving cognitive performance.
A theoretical study of mixing downstream of transverse injection into a supersonic boundary layer
NASA Technical Reports Server (NTRS)
Baker, A. J.; Zelazny, S. W.
1972-01-01
A theoretical and analytical study was made of mixing downstream of transverse hydrogen injection, from single and multiple orifices, into a Mach 4 air boundary layer over a flat plate. Numerical solutions to the governing three-dimensional, elliptic boundary layer equations were obtained using a general purpose computer program founded upon a finite element solution algorithm. A prototype three-dimensional turbulent transport model was developed using mixing length theory in the wall region and the mass defect concept in the outer region. Excellent agreement between the computed flow field and experimental data for a jet/freestream dynamic pressure ratio of unity was obtained in the centerplane region of the single-jet configuration. Poorer agreement off centerplane suggests an inadequacy of the extrapolated two-dimensional turbulence model. Considerable improvement in off-centerplane computational agreement occurred for a multi-jet configuration, using the same turbulent transport model.
Coalescent: an open-science framework for importance sampling in coalescent theory.
Tewari, Susanta; Spouge, John L
2015-01-01
Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time, following the infinite-sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only effective sample size. Here, we evaluate proposals in the coalescent literature, to discover that the order of efficiency among the three importance sampling schemes changes when one considers running time as well as effective sample size. We also describe a computational technique called "just-in-time delegation" available to improve the trade-off between running time and precision by constructing improved importance sampling schemes from existing ones. Thus, our systems approach is a potential solution to the "2^8 programs problem" highlighted by Felsenstein, because it provides the flexibility to include or exclude various features of similar coalescent models or importance sampling schemes.
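The computation such a framework organizes, estimating an expectation under a target distribution by weighting samples drawn from a proposal, reduces to the generic importance-sampling estimator sketched below, together with the effective-sample-size diagnostic the abstract refers to. The toy target and proposal densities are illustrative assumptions standing in for coalescent genealogy distributions.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def is_estimate(f, target_pdf, proposal_pdf, proposal_sampler, n):
    """Estimate E_target[f(X)] by sampling X from the proposal q."""
    x = proposal_sampler(n)
    w = target_pdf(x) / proposal_pdf(x)      # importance weights p/q
    est = np.mean(w * f(x))
    ess = w.sum() ** 2 / np.sum(w ** 2)      # effective sample size diagnostic
    return est, ess

# toy example: target N(0,1), proposal N(0,2^2)
est, ess = is_estimate(
    f=lambda x: x ** 2,
    target_pdf=lambda x: norm.pdf(x, 0.0, 1.0),
    proposal_pdf=lambda x: norm.pdf(x, 0.0, 2.0),
    proposal_sampler=lambda n: rng.normal(0.0, 2.0, size=n),
    n=100_000,
)
print(est, ess)   # est approaches 1.0; ESS quantifies proposal efficiency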
NASA Technical Reports Server (NTRS)
Teske, M. E.
1984-01-01
This is a user manual for the computer code "AGDISP" (AGricultural DISPersal) which has been developed to predict the deposition of material released from fixed and rotary wing aircraft in a single-pass, computationally efficient manner. The formulation of the code is novel in that the mean particle trajectory and the variance about the mean resulting from turbulent fluid fluctuations are simultaneously predicted. The code presently includes the capability of assessing the influence of neutral atmospheric conditions, inviscid wake vortices, particle evaporation, plant canopy and terrain on the deposition pattern.
Monte Carlo event generators in atomic collisions: A new tool to tackle the few-body dynamics
NASA Astrophysics Data System (ADS)
Ciappina, M. F.; Kirchner, T.; Schulz, M.
2010-04-01
We present a set of routines to produce theoretical event files, for both single and double ionization of atoms by ion impact, based on a Monte Carlo event generator (MCEG) scheme. Such event files are the theoretical counterpart of the data obtained from a kinematically complete experiment; i.e. they contain the momentum components of all collision fragments for a large number of ionization events. Among the advantages of working with theoretical event files is the possibility to incorporate the conditions present in a real experiment, such as the uncertainties in the measured quantities. Additionally, by manipulating them it is possible to generate any type of cross sections, especially those that are usually too complicated to compute with conventional methods due to a lack of symmetry. Consequently, the numerical effort of such calculations is dramatically reduced. We show examples for both single and double ionization, with special emphasis on a new data analysis tool, called four-body Dalitz plots, developed very recently. Program summary. Program title: MCEG Catalogue identifier: AEFV_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFV_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2695 No. of bytes in distributed program, including test data, etc.: 18 501 Distribution format: tar.gz Programming language: FORTRAN 77 with parallelization directives using scripting Computer: Single machines using Linux and Linux servers/clusters (with cores with any clock speed, cache memory and bits in a word) Operating system: Linux (any version and flavor) and FORTRAN 77 compilers Has the code been vectorised or parallelized?: Yes RAM: 64-128 kBytes (the codes are very CPU intensive) Classification: 2.6 Nature of problem: The code deals with single and double ionization of atoms by ion impact. Conventional theoretical approaches aim at a direct calculation of the corresponding cross sections. This has the important shortcoming that it is difficult to account for the experimental conditions when comparing results to measured data. In contrast, the present code generates theoretical event files of the same type as are obtained in a real experiment. From these event files any type of cross sections can be easily extracted. The theoretical schemes are based on distorted wave formalisms for both processes of interest. Solution method: The codes employ a Monte Carlo event generator based on theoretical formalisms to generate event files for both single and double ionization. One of the main advantages of having access to theoretical event files is the possibility of adding the conditions present in real experiments (parameter uncertainties, environmental conditions, etc.) and to incorporate additional physics in the resulting event files (e.g. elastic scattering or other interactions absent in the underlying calculations). Additional comments: The computational time can be dramatically reduced if a large number of processors is used. Since the code has no communication between processes, it is possible to achieve an efficiency of 100% (in practice this is reduced by queue waiting time). Running time: Times vary according to the process (single or double ionization) to be simulated, the number of processors, and the type of theoretical model; typical running times range from several hours up to a few weeks.
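A minimal event generator in the same spirit can be built with rejection sampling: draw candidate kinematic variables uniformly, accept each with probability proportional to a theoretical differential cross section, and write the accepted events to a file for later histogramming. The cross-section shape below is an arbitrary stand-in, not the distorted-wave models used by MCEG.

import numpy as np

rng = np.random.default_rng(42)

def generate_events(dcs, bounds, n_events, dcs_max):
    """Monte Carlo event generator via rejection sampling.

    dcs     : callable giving the (unnormalized) differential cross section
    bounds  : (low, high) arrays bounding the kinematic variables
    dcs_max : upper bound on dcs over the sampled region
    """
    low, high = map(np.asarray, bounds)
    events = []
    while len(events) < n_events:
        x = low + (high - low) * rng.random(low.size)   # candidate event
        if rng.random() * dcs_max <= dcs(x):            # accept/reject
            events.append(x)
    return np.array(events)

# toy 2-variable "cross section": energy in [0, 10], angle in [0, pi]
dcs = lambda x: np.exp(-x[0]) * (1.0 + np.cos(x[1]) ** 2)
events = generate_events(dcs, ([0.0, 0.0], [10.0, np.pi]), 1000, dcs_max=2.0)
np.savetxt("events.dat", events)   # event file: one row per ionization event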
User's Guide for a Modular Flutter Analysis Software System (Fast Version 1.0)
NASA Technical Reports Server (NTRS)
Desmarais, R. N.; Bennett, R. M.
1978-01-01
The use and operation of a group of computer programs to perform a flutter analysis of a single planar wing are described. This system of programs is called FAST for Flutter Analysis System, and consists of five programs. Each program performs certain portions of a flutter analysis and can be run sequentially as a job step or individually. FAST uses natural vibration modes as input data and performs a conventional V-g type of solution. The unsteady aerodynamics programs in FAST are based on the subsonic kernel function lifting-surface theory although other aerodynamic programs can be used. Application of the programs is illustrated by a sample case of a complete flutter calculation that exercises each program.
NASA Technical Reports Server (NTRS)
Lee, C. H.
1978-01-01
The CELFE computer program and user's manual, together with the execution of the CELFE/NASTRAN system, are described. The execution procedure and the transfer of data between the CELFE and NASTRAN programs are controlled through the use of DATA files in the Univac 1100 system. Five data files are used to control the runstream and data transfer, and three files are used to hold the programs. These files are contained on a single tape. Changes in NASTRAN routines required by the present analysis are also discussed in this report. All the program listings, except the last two files (where the absolute and relocatable elements are stored), are included in the appendixes.
Corona performance of a compact 230-kV line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chartier, V.L.; Blair, D.E.; Easley, M.D.
Permitting requirements and the acquisition of new rights-of-way for transmission facilities have in recent years become increasingly difficult for most utilities, including Puget Sound Power and Light Company. In order to maintain a high degree of reliability of service while being responsive to public concerns regarding the siting of high voltage (HV) transmission facilities, Puget Power has found it necessary to rely more heavily upon the use of compact lines in franchise corridors. Compaction does, however, precipitate increased levels of audible noise (AN) and radio and TV interference (RI and TVI) due to corona on the conductors and insulator assemblies. Puget Power relies upon the Bonneville Power Administration (BPA) Corona and Field Effects computer program to calculate AN and RI for new lines. Since there was some question of the program's ability to accurately represent quiet 230-kV compact designs, a joint project was undertaken with BPA to verify the program's algorithms. Long-term measurements made on an operating Puget Power 230-kV compact line confirmed the accuracy of BPA's AN model; however, the RI measurements were much lower than predicted by the BPA computer and other programs. This paper also describes how the BPA computer program can be used to calculate the voltage needed to expose insulator assemblies to the correct electric field in single test setups in HV laboratories.
Extended Lagrangian Density Functional Tight-Binding Molecular Dynamics for Molecules and Solids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aradi, Bálint; Niklasson, Anders M. N.; Frauenheim, Thomas
2015-06-26
A computationally fast quantum mechanical molecular dynamics scheme using an extended Lagrangian density functional tight-binding formulation has been developed and implemented in the DFTB+ electronic structure program package for simulations of solids and molecular systems. The scheme combines the computational speed of self-consistent density functional tight-binding theory with the efficiency and long-term accuracy of extended Lagrangian Born-Oppenheimer molecular dynamics. Furthermore, for systems without self-consistent charge instabilities, only a single diagonalization or construction of the single-particle density matrix is required in each time step. The molecular dynamics simulation scheme can also be applied to a broad range of problems in materials science, chemistry, and biology.
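The key step, propagating an auxiliary electronic degree of freedom alongside the nuclei so that no self-consistency loop is needed, can be illustrated with a scalar toy model: the auxiliary variable n shadows the ground-state response g(x) through a Verlet-like update. The trajectory, response function, and coupling constant below are illustrative assumptions, not the DFTB+ equations.

import numpy as np

dt, kappa, steps = 0.01, 1.8, 4000    # kappa = dt^2 * omega^2; stable for 0 < kappa < 2
g = np.tanh                           # toy ground-state response vs. nuclear coordinate
x = lambda k: np.cos(0.5 * k * dt)    # prescribed (toy) nuclear motion

n_prev, n = g(x(0)), g(x(1))          # start the auxiliary variable on the ground state
for k in range(2, steps):
    # extended Lagrangian step: one cheap evaluation of g per step, no SCF iteration
    n_prev, n = n, 2.0 * n - n_prev + kappa * (g(x(k)) - n)

print("tracking error at end:", abs(n - g(x(steps - 1))))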
Simulation of n-qubit quantum systems. I. Quantum registers and quantum gates
NASA Astrophysics Data System (ADS)
Radtke, T.; Fritzsche, S.
2005-12-01
During recent years, quantum computations and the study of n-qubit quantum systems have attracted a lot of interest, both in theory and experiment. Apart from the promise of performing quantum computations, however, these investigations also revealed a great deal of difficulties which still need to be solved in practice. In quantum computing, unitary and non-unitary quantum operations act on a given set of qubits to form (entangled) states, in which the information is encoded by the overall system often referred to as quantum registers. To facilitate the simulation of such n-qubit quantum systems, we present the FEYNMAN program to provide all necessary tools in order to define and to deal with quantum registers and quantum operations. Although the present version of the program is restricted to unitary transformations, it equally supports, whenever possible, the representation of the quantum registers both in terms of their state vectors and density matrices. In addition to the composition of two or more quantum registers, moreover, the program also supports their decomposition into various parts by applying the partial trace operation and the concept of the reduced density matrix. Using an interactive design within the framework of MAPLE, therefore, we expect the FEYNMAN program to be helpful not only for teaching the basic elements of quantum computing but also for studying their physical realization in the future. Program summary. Title of program: FEYNMAN Catalogue number: ADWE Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWE Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: None Computers for which the program is designed: All computers with a license of the computer algebra system MAPLE [Maple is a registered trademark of Waterloo Maple Inc.] Operating systems or monitors under which the program has been tested: Linux, MS Windows XP Programming language used: MAPLE 9.5 (but should be compatible with 9.0 and 8.0, too) Memory and time required to execute with typical data: Storage and time requirements critically depend on the number of qubits, n, in the quantum registers due to the exponential increase of the associated Hilbert space. In particular, complex algebraic operations may require large amounts of memory even for small qubit numbers. However, most of the standard commands (see Section 4 for simple examples) react promptly for up to five qubits on a normal single-processor machine (at least 1 GHz with 512 MB memory) and use less than 10 MB memory. No. of lines in distributed program, including test data, etc.: 8864 No. of bytes in distributed program, including test data, etc.: 493 182 Distribution format: tar.gz Nature of the physical problem: During the last decade, quantum computing has been found to provide a revolutionary new form of computation, as demonstrated, for example, by the algorithms of Shor [P.W. Shor, SIAM J. Comput. 26 (1997) 1484] and Grover [L.K. Grover, Phys. Rev. Lett. 79 (1997) 325].
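The basic register operation such a package provides, applying a single-qubit gate to qubit k of an n-qubit state vector, has a compact dense-matrix form: embed the 2x2 gate with Kronecker products of identities. The sketch below is a generic NumPy rendering of that standard construction, not the MAPLE code itself, and its memory cost grows exponentially with n exactly as the summary warns.

import numpy as np

def apply_gate(state, gate, k, n):
    """Apply a 2x2 gate to qubit k (0-indexed, leftmost) of an n-qubit state vector."""
    op = np.kron(np.kron(np.eye(2 ** k), gate), np.eye(2 ** (n - k - 1)))
    return op @ state

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

# |000> with H on qubit 0 becomes (|000> + |100>)/sqrt(2)
psi = np.zeros(8)
psi[0] = 1.0
print(apply_gate(psi, H, k=0, n=3))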
Computer-Aided Engineering Of Cabling
NASA Technical Reports Server (NTRS)
Billitti, Joseph W.
1989-01-01
Program generates data sheets, drawings, and other information on electrical connections. DFACS program, centered around single data base, has built-in menus providing easy input of, and access to, data for all personnel involved in system, subsystem, and cabling. Enables parallel design of circuit-data sheets and drawings of harnesses. Also recombines raw information to generate automatically various project documents and drawings, including index of circuit-data sheets, list of electrical-interface circuits, lists of assemblies and equipment, cabling trees, and drawings of cabling electrical interfaces and harnesses. Purpose of program to provide engineering community with centralized data base for putting in, and gaining access to, functional definition of system as specified in terms of details of pin connections of end circuits of subsystems and instruments and data on harnessing. Primary objective to provide instantaneous single point of interchange of information, thus avoiding
XAFS Data Interchange: A single spectrum XAFS data file format.
Ravel, B; Newville, M
2016-05-01
We propose a standard data format for the interchange of XAFS data. The XAFS Data Interchange (XDI) standard is meant to encapsulate a single spectrum of XAFS along with relevant metadata. XDI is a text-based format with a simple syntax which clearly delineates metadata from the data table in a way that is easily interpreted both by a computer and by a human. The metadata header is inspired by the format of an electronic mail header, representing metadata names and values as an associative array. The data table is represented as columns of numbers. This format can be imported as is into most existing XAFS data analysis, spreadsheet, or data visualization programs. Along with a specification and a dictionary of metadata types, we provide an application-programming interface written in C and bindings for programming dynamic languages.
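Because the format pairs an email-style header with a column-oriented data table, a reader fits in a few lines. The sketch below assumes a simple rendering of the format, header lines beginning with '#' and holding 'Name: value' pairs followed by whitespace-separated numeric columns; consult the XDI specification for the exact grammar.

import numpy as np

def read_xdi(path):
    """Parse an XDI-style file: '#'-prefixed 'Name: value' header plus data columns."""
    meta, rows = {}, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            if line.startswith("#"):
                body = line.lstrip("#").strip()
                if ":" in body:
                    name, _, value = body.partition(":")
                    meta[name.strip()] = value.strip()   # associative-array header
            else:
                rows.append([float(tok) for tok in line.split()])
    return meta, np.array(rows)

# meta behaves like the e-mail-style header; the array holds the data table
# meta, data = read_xdi("spectrum.xdi")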
Symplectic molecular dynamics simulations on specially designed parallel computers.
Borstnik, Urban; Janezic, Dusanka
2005-01-01
We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which analytically treats high-frequency vibrational motion and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches means that fewer time steps are needed and that each step takes less time, enabling fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.
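The splitting idea behind such a method, propagate the fast (analytically solvable) part exactly and the slow part numerically in a kick-drift-kick pattern, is illustrated below on a single harmonic degree of freedom with a weak anharmonic perturbation; the potential and parameters are illustrative assumptions, not the MD force field.

import numpy as np

def split_step(q, p, dt, omega, slow_force):
    """One symplectic split step: half kick (slow) + exact harmonic flow + half kick."""
    p += 0.5 * dt * slow_force(q)                 # half kick from slow forces
    c, s = np.cos(omega * dt), np.sin(omega * dt)
    q, p = q * c + (p / omega) * s, p * c - q * omega * s   # exact fast (harmonic) flow
    p += 0.5 * dt * slow_force(q)                 # half kick
    return q, p

omega = 10.0                                      # fast vibrational frequency
slow = lambda q: -0.1 * q ** 3                    # weak slow (anharmonic) force
q, p = 1.0, 0.0
for _ in range(10000):
    q, p = split_step(q, p, dt=0.05, omega=omega, slow_force=slow)
print(q, p)   # symplectic splitting keeps the energy drift bounded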
Computational/experimental studies of isolated, single component droplet combustion
NASA Technical Reports Server (NTRS)
Dryer, Frederick L.
1993-01-01
Isolated droplet combustion processes have been the subject of extensive experimental and theoretical investigations for nearly 40 years. The gross features of droplet burning are qualitatively embodied by simple theories and are relatively well understood. However, there remain significant aspects of droplet burning, particularly its dynamics, for which additional basic knowledge is needed for thorough interpretations and quantitative explanations of transient phenomena. Spherically-symmetric droplet combustion, which can only be approximated under conditions of both low Reynolds and Grashof numbers, represents the simplest geometrical configuration in which to study the coupled chemical/transport processes inherent within non-premixed flames. The research summarized here concerns recent results on isolated, single component, droplet combustion under microgravity conditions, a program pursued jointly with F.A. Williams of the University of California, San Diego. The overall program involves developing and applying experimental methods to study the burning of isolated, single component droplets, in various atmospheres, primarily at atmospheric pressure and below, in both drop towers and aboard space-based platforms such as the Space Shuttle or Space Station. Both computational methods and asymptotic methods, the latter pursued mainly at UCSD, are used in developing the experimental test matrix, in analyzing results, and for extending theoretical understanding. Methanol, and the normal alkanes, n-heptane, and n-decane, have been selected as test fuels to study time-dependent droplet burning phenomena. The following sections summarize the Princeton efforts on this program, describe work in progress, and briefly delineate future research directions.
Analysis of a combined refrigerator-generator space power system
NASA Technical Reports Server (NTRS)
Klann, J. L.
1973-01-01
Single-shaft and two-shaft rotating machinery arrangements using neon are described for application in a combined refrigerator-generator power system for space missions. The arrangements consist of combined assemblies of a power turbine, alternator, compressor, and cryo-turbine with a single-stage radial-flow design. A computer program was prepared to study the thermodynamics of the dual system in the evaluation of its cryocooling/electric capacity and appropriate weight. A preliminary analysis showed that a two-shaft arrangement of the power- and refrigeration-loop rotating machinery provided better output capacities than a single-shaft arrangement, without prohibitive operating compromises.
Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories
NASA Technical Reports Server (NTRS)
Ng, Hok Kwan; Sridhar, Banavar
2016-01-01
This study examines three possible approaches to improving the speed in generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing the same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared with a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs ranging from 80 to 10,240 units are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers for potential computational enhancement through parallel processing on the computer clusters. This study also re-implements the trajectory optimization algorithm for further reduction of computational time through algorithm modifications and integrates that with FACET to facilitate the use of the new features which calculate time-optimal routes between worldwide airport pairs in a wind field for use with existing FACET applications. The implementations of trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing their computational efficiencies and based on the potential application of optimized trajectories. The paper shows that in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.
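Because each airport pair's wind-optimal route can be computed independently, the workload parallelizes naturally. The sketch below distributes a per-pair solver across cores with Python's standard library; the solver body is reduced to a placeholder because the actual optimal-control solution (and the FACET integration) is far more involved.

from concurrent.futures import ProcessPoolExecutor

def wind_optimal_route(pair):
    """Placeholder for the per-pair trajectory optimization (assumed interface)."""
    origin, dest = pair
    # ... solve the optimal-control problem in the wind field here ...
    return origin, dest, 0.0   # e.g. (origin, dest, flight time)

if __name__ == "__main__":
    pairs = [("KSFO", "RJAA"), ("KJFK", "EGLL"), ("KLAX", "YSSY")]
    # embarrassingly parallel: one task per airport pair
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(wind_optimal_route, pairs))
    print(results)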
A computer aided engineering tool for ECLS systems
NASA Technical Reports Server (NTRS)
Bangham, Michal E.; Reuter, James L.
1987-01-01
The Computer-Aided Systems Engineering and Analysis tool used by NASA for environmental control and life support system design studies is capable of simulating atmospheric revitalization systems, water recovery and management systems, and single-phase active thermal control systems. The designer/analyst interface used is graphics-based, and allows the designer to build a model by constructing a schematic of the system under consideration. Data management functions are performed, and the program is translated into a format that is compatible with the solution routines.
Slimeware: engineering devices with slime mold.
Adamatzky, Andrew
2013-01-01
The plasmodium of the acellular slime mold Physarum polycephalum is a gigantic single cell visible to the unaided eye. The cell shows a rich spectrum of behavioral patterns in response to environmental conditions. In a series of simple experiments we demonstrate how to make computing, sensing, and actuating devices from the slime mold. We show how to program living slime mold machines by configurations of repelling and attracting gradients and demonstrate the workability of the living machines on tasks of computational geometry, logic, and arithmetic.
Some queuing network models of computer systems
NASA Technical Reports Server (NTRS)
Herndon, E. S.
1980-01-01
Queuing network models of a computer system operating with a single workload type are presented. Program algorithms are adapted for use on the Texas Instruments SR-52 programmable calculator. By slightly altering the algorithm to process the G and H matrices row by row instead of column by column, six devices and an unlimited job/terminal population could be handled on the SR-52. Techniques are also introduced for handling a simple load dependent server and for studying interactive systems with fixed multiprogramming limits.
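The flavor of such queuing-network calculations can be seen in Buzen's convolution algorithm, which builds the normalization constant G of a closed product-form network through a recurrence of exactly the kind the calculator adaptation reorders row by row. The sketch below handles load-independent servers only, and the service demands are illustrative assumptions.

def buzen_g(demands, n_jobs):
    """Normalization constants G(0..n_jobs) for a closed product-form network.

    demands[k] = service demand (visit ratio times mean service time) at device k.
    """
    g = [1.0] + [0.0] * n_jobs
    for d in demands:                  # convolve one device at a time
        for n in range(1, n_jobs + 1):
            g[n] += d * g[n - 1]
    return g

demands = [0.4, 0.3, 0.2]              # illustrative demands for three devices
g = buzen_g(demands, n_jobs=5)
# system throughput with n jobs: X(n) = G(n-1)/G(n)
print([g[n - 1] / g[n] for n in range(1, 6)])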
1983-09-01
This notation can be illustrated by example. If 'z' is the name of an individual and 'C' is the name of a class (set), then 'z∈C' means that the individual denoted by 'z' is a member of the class denoted by 'C'. The unit class of z, the class whose only member is z, is abbreviated un z. Conversely, if C is a single-element class, then un⁻¹C selects the unique member of that class: un⁻¹C = ιz(z∈C).
Investigations of flowfields found in typical combustor geometries
NASA Technical Reports Server (NTRS)
Lilley, D. G.
1982-01-01
Measurements and computations are being applied to an axisymmetric swirling flow, emerging from swirl vanes at angle phi, entering a large chamber test section via a sudden expansion of various side-wall angles alpha. New features are: the turbulence measurements are being performed on swirling as well as nonswirling flow; and all measurements and computations are also being performed on a confined jet flowfield with realistic downstream blockage. Recent activity falls into three categories: (1) Time-mean flowfield characterization by five-hole pitot probe measurements and by flow visualization; (2) Turbulence measurements by a variety of single- and multi-wire hot-wire probe techniques; and (3) Flowfield computations using the computer code developed during the previous year's research program.
NASA Technical Reports Server (NTRS)
Goradia, S. H.; Lilley, D. E.
1975-01-01
Theoretical and experimental studies are described which were conducted for the purpose of developing a new generalized method for the prediction of profile drag of single component airfoil sections with sharp trailing edges. This method solves for the flow in the wake from the airfoil trailing edge to a large distance in the downstream direction; the profile drag of the given airfoil section can then easily be obtained from the momentum balance once the shape of the velocity profile at a large distance from the airfoil trailing edge has been computed. Computer program subroutines have been developed for the computation of the profile drag and flow in the airfoil wake on the CDC 6600 computer. The required inputs to the computer program consist of free stream conditions and the characteristics of the boundary layers at the airfoil trailing edge or at the point of incipient separation in the neighborhood of the airfoil trailing edge. The method described is quite generalized and hence can be extended to the solution of the profile drag for multi-component airfoil sections.
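The momentum balance referred to above gives the profile drag per unit span as the momentum deficit carried by the far wake, D = rho * integral of u (U_inf - u) dy, once the far-wake velocity profile is known. The sketch below evaluates that integral numerically for an illustrative Gaussian wake deficit; the freestream conditions and wake width are assumptions for the example only.

import numpy as np

def profile_drag(u, y, rho, U_inf):
    """Profile drag per unit span from a far-wake velocity profile u(y)."""
    return rho * np.trapz(u * (U_inf - u), y)

# illustrative far-wake profile: small Gaussian velocity deficit
U_inf, rho = 50.0, 1.225                   # freestream speed (m/s), air density (kg/m^3)
y = np.linspace(-0.5, 0.5, 2001)           # wake-normal coordinate (m)
u = U_inf - 2.0 * np.exp(-(y / 0.05) ** 2)
print("drag per unit span:", profile_drag(u, y, rho, U_inf), "N/m")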
Miller, Mark P.; Knaus, Brian J.; Mullins, Thomas D.; Haig, Susan M.
2013-01-01
SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (e.g., microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains 3 analysis modules along with a fourth control module that can automate analyses of large volumes of data. The modules are used to 1) identify the subset of paired-end sequences that pass Illumina quality standards, 2) align paired-end reads into a single composite DNA sequence, and 3) identify sequences that possess microsatellites (both simple and compound) conforming to user-specified parameters. The microsatellite search algorithm is extremely efficient, and we have used it to identify repeats with motifs from 2 to 25bp in length. Each of the 3 analysis modules can also be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc.). We demonstrate use of the program with data from the brine fly Ephydra packardi (Diptera: Ephydridae) and provide empirical timing benchmarks to illustrate program performance on a common desktop computer environment. We further show that the Illumina platform is capable of identifying large numbers of microsatellites, even when using unenriched sample libraries and a very small percentage of the sequencing capacity from a single DNA sequencing run. All modules from SSR_pipeline are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, and Windows).
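A serviceable core of such a microsatellite search can be written as a single regular expression that finds perfect tandem repeats of any motif length within a user-specified range. The thresholds below are illustrative assumptions and ignore SSR_pipeline's handling of compound repeats and quality filtering.

import re

def find_ssrs(seq, min_motif=2, max_motif=6, min_repeats=4):
    """Yield (start, motif, copies) for perfect tandem repeats in a DNA sequence."""
    pattern = re.compile(
        r"([ACGT]{%d,%d})\1{%d,}" % (min_motif, max_motif, min_repeats - 1)
    )
    for m in pattern.finditer(seq.upper()):
        motif = m.group(1)
        yield m.start(), motif, len(m.group(0)) // len(motif)

for hit in find_ssrs("TTAGACACACACACACGGATTATTATTATTAGC"):
    print(hit)   # -> (4, 'AC', 6) then (18, 'ATT', 4)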
2012-01-01
Background The Poisson-Boltzmann (PB) equation and its linear approximation have been widely used to describe biomolecular electrostatics. Generalized Born (GB) models offer a convenient computational approximation for the more fundamental approach based on the Poisson-Boltzmann equation, and allow estimation of pairwise contributions to electrostatic effects in the molecular context. Results We have implemented in a single program most common analyses of the electrostatic properties of proteins. The program first computes generalized Born radii, via a surface integral, and then uses the generalized Born radii (using a finite radius test particle) to perform electrostatic analyses. In particular the output of the program entails, depending on the user's requirements: 1) the generalized Born radius of each atom; 2) the electrostatic solvation free energy; 3) the electrostatic forces on each atom (currently in a developmental stage); 4) the pH-dependent properties (total charge and pH-dependent free energy of folding in the pH range -2 to 18); 5) the pKa of all ionizable groups; 6) the electrostatic potential at the surface of the molecule; 7) the electrostatic potential in a volume surrounding the molecule. Conclusions Although at the expense of limited flexibility, the program provides the most common analyses with the requirement of a single input file in PQR format. The results obtained are comparable to those obtained using state-of-the-art Poisson-Boltzmann solvers. A Linux executable with example input and output files is provided as supplementary material. PMID:22536964
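Once per-atom generalized Born radii are known, the electrostatic solvation free energy follows from the pairwise Still formula, dG = -1/2 (1/eps_in - 1/eps_out) * sum over i,j of q_i q_j / f_GB(r_ij), with f_GB = sqrt(r^2 + R_i R_j exp(-r^2 / (4 R_i R_j))). The sketch below evaluates that double sum directly; the charges, radii, dielectric constants, and omitted Coulomb prefactor are illustrative assumptions.

import numpy as np

def gb_solvation_energy(q, R, xyz, eps_in=1.0, eps_out=78.5):
    """Still et al. generalized Born solvation energy (Coulomb constant set to 1)."""
    pre = -0.5 * (1.0 / eps_in - 1.0 / eps_out)
    d2 = np.sum((xyz[:, None, :] - xyz[None, :, :]) ** 2, axis=-1)   # r_ij^2
    RiRj = np.outer(R, R)
    f_gb = np.sqrt(d2 + RiRj * np.exp(-d2 / (4.0 * RiRj)))
    # the full double sum includes i = j, which contributes the Born self terms
    return pre * np.sum(np.outer(q, q) / f_gb)

# illustrative three-atom system (charges in e, lengths in Angstrom-like units)
q = np.array([-0.5, 0.25, 0.25])
R = np.array([1.5, 1.2, 1.2])        # generalized Born radii
xyz = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [-0.3, 0.9, 0.0]])
print(gb_solvation_energy(q, R, xyz))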
NASA Astrophysics Data System (ADS)
Sanna, N.; Morelli, G.
2004-09-01
In this paper we present the new version of the SCELib program (CPC Catalogue identifier ADMG), a full numerical implementation of the Single Center Expansion (SCE) method. The physics involved is that of producing the SCE description of molecular electronic densities, of molecular electrostatic potentials and of molecular perturbed potentials due to a point negative or positive charge. This new revision of the program has been optimized to run in serial as well as in parallel execution mode, to support a larger set of molecular symmetries and to permit the restart of long-lasting calculations. To measure the performance of this new release, a comparative study has been carried out on the most powerful computing architectures in serial and parallel runs. The results of the calculations reported in this paper refer to real-case medium to large molecular systems, and they are reported in full detail to best benchmark the parallel architectures the new SCELib code will run on. Program summary. Title of program: SCELib2 Catalogue identifier: ADGU Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADGU Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Reference to previous versions: Comput. Phys. Commun. 128 (2) (2000) 139 (CPC catalogue identifier: ADMG) Does the new version supersede the original program?: Yes Computer for which the program is designed and others on which it has been tested: HP ES45 and rx2600, SUN ES4500, IBM SP and any single CPU workstation based on Alpha, SPARC, POWER, Itanium2 and X86 processors Installations: CASPUR, local Operating systems under which the program has been tested: HP Tru64 V5.X, SUNOS V5.8, IBM AIX V5.X, Linux RedHat V8.0 Programming language used: C Memory required to execute with typical data: 10 Mwords. Up to 2000 Mwords depending on the molecular system and runtime parameters No. of bits in a word: 64 No. of processors used: 1 to 32 Has the code been vectorized or parallelized?: Yes No. of bytes in distributed program, including test data, etc.: 3 798 507 No. of lines in distributed program, including test data, etc.: 187 226 Distribution format: tar.gz Nature of physical problem: In this set of codes an efficient procedure is implemented to describe the wavefunction and related molecular properties of a polyatomic molecular system within the Single Center of Expansion (SCE) approximation. The resulting SCE wavefunction, electron density, electrostatic and exchange/correlation potentials can then be used via a proper Application Programming Interface (API) to describe the target molecular system which can be employed in electron-molecule scattering calculations. The molecular properties expanded over a single center turn out to also be of more general application and some possible uses in quantum chemistry, biomodelling and drug design are also outlined. Method of solution: The polycentre Hartree-Fock solution for a molecule of arbitrary geometry, based on a linear combination of Gaussian-Type Orbitals (GTO), is expanded over a single center, typically the Center Of Mass (C.O.M.), by means of a Gauss-Legendre/Chebyshev quadrature over the θ, φ angular coordinates. The resulting SCE numerical wavefunction is then used to calculate the one-particle electron density, the electrostatic potential and two different models for the correlation/polarization potentials induced by the impinging electron, which have the correct asymptotic behaviour for the leading dipole molecular polarizabilities.
Restrictions on the complexity of the problem: Depending on the molecular system under study and on the operating conditions the program may or may not fit into available RAM memory. In this case a feature of the program is to memory map a disk file in order to efficiently access the memory data through a disk device. Typical running time: The execution time strongly depends on the molecular target description and on the hardware/OS chosen, it is directly proportional to the ( r, θ, φ) grid size and to the number of angular basis functions used. Thus, from the program printout of the main arrays memory occupancy, the user can approximately derive the expected computer time needed for a given calculation executed in serial mode. For parallel executions the overall efficiency must be further taken into account, and this depends on the no. of processors used as well as on the parallel architecture chosen, so a simple general law is at present not determinable. Unusual features of the program: The code has been engineered to use dynamical, runtime determined, global parameters with the aim to have all the data fitted in the RAM memory. Some unusual circumstances, e.g., when using large values of those parameters, may cause the program to run with unexpected performance reductions due to runtime bottlenecks like those caused by memory swap operations which strongly depend on the hardware used. In such cases, a parallel execution of the code is generally sufficient to fix the problem since the data size is partitioned over the available processors. When a suitable parallel system is not available for execution, a mechanism of memory mapped file can be used; with this option on, all the available memory will be used as a buffer for a disk file which contains the whole data set, thus having a better throughput with respect to the traditional swapping/paging of the Unix OS.
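The heart of the method, projecting a molecular function onto a single center, amounts to computing angular expansion coefficients f_lm(r) = integral of conj(Y_lm) f over the sphere on each radial shell, by Gauss-Legendre quadrature in cos(theta) and a uniform grid in phi. The sketch below does this for one shell using SciPy's sph_harm (azimuth-first argument convention); the test function is an illustrative assumption.

import numpy as np
from scipy.special import sph_harm

def sce_coefficient(f, l, m, n_theta=64, n_phi=128):
    """f_lm = integral of conj(Y_lm) * f over the unit sphere (one radial shell)."""
    x, w = np.polynomial.legendre.leggauss(n_theta)   # nodes/weights in cos(theta)
    theta = np.arccos(x)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    T, P = np.meshgrid(theta, phi, indexing="ij")
    Y = sph_harm(m, l, P, T)          # SciPy convention: sph_harm(m, l, azimuth, polar)
    dphi = 2.0 * np.pi / n_phi
    return np.sum(w[:, None] * np.conj(Y) * f(T, P)) * dphi

# sanity check: for f = Y_10, the coefficient f_10 is 1 and f_00 is 0
f = lambda t, p: sph_harm(0, 1, p, t)
print(sce_coefficient(f, 1, 0), sce_coefficient(f, 0, 0))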
A Personal Computer-Based Head-Spine Model
1998-09-01
the CHSM. CHSM was comprised of the pelvis, the thoracolumbar spine, a single beam representation of the cervical spine, the head, the rib cage, and ... developing the private sector HSM-PC project follows the Phase II program Work Plan, but continues into a Phase III SBIR program internally funded by ... on completing the head and neck portion of HSM-PC, which as described in the Confidence Assessment Plan (CA Plan) will be known as the Head Cervical
Development of programmable artificial neural networks
NASA Technical Reports Server (NTRS)
Meade, Andrew J.
1993-01-01
Conventionally programmed digital computers can process numbers with great speed and precision, but do not easily recognize patterns or imprecise or contradictory data. Instead of being programmed in the conventional sense, artificial neural networks are capable of self-learning through exposure to repeated examples. However, the training of an ANN can be a time-consuming and unpredictable process. A general method is being developed to mate the adaptability of the ANN with the speed and precision of the digital computer. This method was successful in building feedforward networks that can approximate functions and their partial derivatives from examples in a single iteration. The general method also allows the formation of feedforward networks that can approximate the solution to nonlinear ordinary and partial differential equations to desired accuracy without the need of examples. It is believed that continued research will produce artificial neural networks that can be used with confidence in practical scientific computing and engineering applications.
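Single-iteration training of this kind is possible when the hidden layer is fixed and only the output weights are learned, because the output weights then solve a linear least-squares problem. Below is a generic sketch of that idea with a random-feature hidden layer; the architecture is an illustrative assumption, not the specific construction developed in the program described above.

import numpy as np

rng = np.random.default_rng(0)

def fit_output_weights(x, y, n_hidden=50):
    """Fix random hidden weights; solve the output layer in one least-squares step."""
    W = rng.normal(size=(1, n_hidden))       # hidden input weights (fixed, random)
    b = rng.uniform(-3, 3, size=n_hidden)    # hidden biases (fixed, random)
    H = np.tanh(x[:, None] * W + b)          # hidden activations, shape (n, n_hidden)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # the single linear solve
    return lambda xs: np.tanh(xs[:, None] * W + b) @ beta

x = np.linspace(-np.pi, np.pi, 200)
net = fit_output_weights(x, np.sin(x))
print(np.max(np.abs(net(x) - np.sin(x))))    # small approximation error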
Ag2S atomic switch-based `tug of war' for decision making
NASA Astrophysics Data System (ADS)
Lutz, C.; Hasegawa, T.; Chikyow, T.
2016-07-01
For a computing process such as making a decision, a software controlled chip of several transistors is necessary. Inspired by how a single cell amoeba decides its movements, the theoretical `tug of war' computing model was proposed but not yet implemented in an analogue device suitable for integrated circuits. Based on this model, we now developed a new electronic element for decision making processes, which will have no need for prior programming. The devices are based on the growth and shrinkage of Ag filaments in α-Ag2+δS gap-type atomic switches. Here we present the adapted device design and the new materials. We demonstrate the basic `tug of war' operation by IV-measurements and Scanning Electron Microscopy (SEM) observation. These devices could be the base for a CMOS-free new computer architecture.
Smartphone Microscopy of Parasite Eggs Accumulated into a Single Field of View
Sowerby, Stephen J.; Crump, John A.; Johnstone, Maree C.; Krause, Kurt L.; Hill, Philip C.
2016-01-01
A Nokia Lumia 1020 cellular phone (Microsoft Corp., Auckland, New Zealand) was configured to image the ova of Ascaris lumbricoides converged into a single field of view but on different focal planes. The phone was programmed to acquire images at different distances and, using public domain computer software, composite images were created that brought all the eggs into sharp focus. This proof of concept informs a framework for field-deployable, point of care monitoring of soil-transmitted helminths. PMID:26572870
NASA Technical Reports Server (NTRS)
Bizon, P. T.; Hill, R. J.; Guilliams, B. P.; Drake, S. K.; Kladden, J. L.
1979-01-01
An elastic stress analysis was performed on a wedge specimen (prismatic bar with single-wedge cross section) subjected to thermal cycles in fluidized beds. Seven different combinations consisting of three alloys (NASA TAZ-8A, 316 stainless steel, and A-286) and four thermal cycling conditions were analyzed. The analyses were performed as a joint effort of two laboratories using different models and computer programs (NASTRAN and ISO3DQ). Stress, strain, and temperature results are presented.
Tian, He; Zhao, Lianfeng; Wang, Xuefeng; Yeh, Yao-Wen; Yao, Nan; Rand, Barry P; Ren, Tian-Ling
2017-12-26
Extremely low energy consumption neuromorphic computing is required to achieve massively parallel information processing on par with the human brain. To achieve this goal, resistive memories based on materials with ionic transport and extremely low operating current are required. Extremely low operating current allows for low power operation by minimizing the program, erase, and read currents. However, materials currently used in resistive memories, such as defective HfOx, AlOx, TaOx, etc., cannot suppress electronic transport (i.e., leakage current) while allowing good ionic transport. Here, we show that 2D Ruddlesden-Popper phase hybrid lead bromide perovskite single crystals are promising materials for low operating current nanodevice applications because of their mixed electronic and ionic transport and ease of fabrication. Ionic transport in the exfoliated 2D perovskite layer is evident via the migration of bromide ions. Filaments with a diameter of approximately 20 nm are visualized, and resistive memories with extremely low program current down to 10 pA are achieved, a value at least 1 order of magnitude lower than conventional materials. The ionic migration and diffusion as an artificial synapse is realized in the 2D layered perovskites at the pA level, which can enable extremely low energy neuromorphic computing.
Computer-Aided Design of RNA Origami Structures.
Sparvath, Steffen L; Geary, Cody W; Andersen, Ebbe S
2017-01-01
RNA nanostructures can be used as scaffolds to organize, combine, and control molecular functionalities, with great potential for applications in nanomedicine and synthetic biology. The single-stranded RNA origami method allows RNA nanostructures to be folded as they are transcribed by the RNA polymerase. RNA origami structures provide a stable framework that can be decorated with functional RNA elements such as riboswitches, ribozymes, interaction sites, and aptamers for binding small molecules or protein targets. The rich library of RNA structural and functional elements combined with the possibility to attach proteins through aptamer-based binding creates virtually limitless possibilities for constructing advanced RNA-based nanodevices.In this chapter we provide a detailed protocol for the single-stranded RNA origami design method using a simple 2-helix tall structure as an example. The first step involves 3D modeling of a double-crossover between two RNA double helices, followed by decoration with tertiary motifs. The second step deals with the construction of a 2D blueprint describing the secondary structure and sequence constraints that serves as the input for computer programs. In the third step, computer programs are used to design RNA sequences that are compatible with the structure, and the resulting outputs are evaluated and converted into DNA sequences to order.
An Interactive Excel Program for Tracking a Single Droplet in Crossflow Computation
NASA Technical Reports Server (NTRS)
Urip, E.; Yang, S. L.; Marek, C. J.
2002-01-01
Spray jet in crossflow has been a subject of research because of its wide application in systems involving pollutant dispersion, jet mixing in the dilution zone of combustors, and fuel injection strategies. The focus of this work is to investigate dispersion of a two-dimensional atomized spray jet into a two-dimensional crossflow. A quick computational method is developed using readily available software. The spreadsheet can be used for any 2D droplet trajectory problem in which the drop is injected into the free stream and eventually comes to the free stream conditions. During the transverse injection of a spray into high velocity airflow, the droplets (carried along and deflected by a gaseous stream of co-flowing air) are subjected to forces that affect their motion in the flow field. Based on Newton's second law of motion, four ordinary differential equations were formulated. These equations were then solved by a fourth-order Runge-Kutta method in Excel. Visual Basic programming and Excel macro code generate the data and drive Excel's plotting of graphs describing the droplet's motion in the flow field. The program computes and plots the data sequentially without forcing users to open other plotting programs. A user's manual on how to use the program is also included in this report.
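A minimal sketch of the four-ODE droplet model just described, integrated with classical fourth-order Runge-Kutta; the Stokes-like drag law, relaxation time, and flow speeds below are illustrative assumptions, not the report's coefficients:

```python
import numpy as np

# Four ODEs: positions (x, y) and velocities (u, v) from Newton's second law,
# with an assumed Stokes-like drag relaxation toward the local gas velocity.
def deriv(state, u_gas, tau):
    x, y, u, v = state
    ug, vg = u_gas
    return np.array([u, v, (ug - u) / tau, (vg - v) / tau])

def rk4_step(state, dt, u_gas, tau):
    k1 = deriv(state, u_gas, tau)
    k2 = deriv(state + 0.5*dt*k1, u_gas, tau)
    k3 = deriv(state + 0.5*dt*k2, u_gas, tau)
    k4 = deriv(state + dt*k3, u_gas, tau)
    return state + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)

state = np.array([0.0, 0.0, 0.0, 20.0])    # injected transversely at 20 m/s
u_gas, tau, dt = (100.0, 0.0), 5e-3, 1e-4  # crossflow at 100 m/s (assumed)
for _ in range(1000):
    state = rk4_step(state, dt, u_gas, tau)
print("final position:", state[:2], " velocity -> crossflow:", state[2:])
```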
Computer vision camera with embedded FPGA processing
NASA Astrophysics Data System (ADS)
Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel
2000-03-01
Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA is a medium-size device equivalent to 25,000 logic gates, connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (such as VHDL), simulated, and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
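For reference, the multi-scale Laplacian-of-Gaussian edge detector that the paper maps onto the FPGA looks roughly like this in software form; the sigma values and the zero-crossing test are assumptions for illustration:

```python
import numpy as np
from scipy import ndimage

def log_edges(image, sigmas=(1.0, 2.0, 4.0)):
    """Zero-crossing edge maps of the LoG response at several scales."""
    maps = []
    for sigma in sigmas:
        r = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
        # edge where the LoG response changes sign between neighbors
        zc = ((np.sign(r[:-1, :-1]) != np.sign(r[1:, :-1])) |
              (np.sign(r[:-1, :-1]) != np.sign(r[:-1, 1:])))
        maps.append(zc)
    return maps

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0                      # toy test image: bright square
for sigma, edges in zip((1.0, 2.0, 4.0), log_edges(img)):
    print(f"sigma = {sigma}: {edges.sum()} edge pixels")
```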
Flexible Language Constructs for Large Parallel Programs
Rosing, Matt; Schnabel, Robert
1994-01-01
The goal of the research described in this article is to develop flexible language constructs for writing large data parallel numerical programs for distributed memory (multiple instruction multiple data [MIMD]) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include single instruction multiple data (SIMD), single program multiple data (SPMD), and sequential programs annotated with data distribution statements. The two primary models for communication include implicit communication based on shared memory and explicit communication based on messages. None of these models by themselves seem sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. In this article, we give an overview of a new language that combines many of these programming models in a clean manner. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. In this article, we give an overview of the language and discuss some of the critical implementation details.
NASA Technical Reports Server (NTRS)
Chow, Edward T.; Schatzel, Donald V.; Whitaker, William D.; Sterling, Thomas
2008-01-01
A Spaceborne Processor Array in Multifunctional Structure (SPAMS) can lower the total mass of the electronic and structural overhead of spacecraft, resulting in reduced launch costs, while increasing the science return through dynamic onboard computing. SPAMS integrates the multifunctional structure (MFS) and the Gilgamesh Memory, Intelligence, and Network Device (MIND) multi-core in-memory computer architecture into a single-system super-architecture. This transforms every inch of a spacecraft into a sharable, interconnected, smart computing element to increase computing performance while simultaneously reducing mass. The MIND in-memory architecture provides a foundation for high-performance, low-power, and fault-tolerant computing. The MIND chip has an internal structure that includes memory, processing, and communication functionality. The Gilgamesh is a scalable system comprising multiple MIND chips interconnected to operate as a single, tightly coupled, parallel computer. The array of MIND components shares a global, virtual name space for program variables and tasks that are allocated at run time to the distributed physical memory and processing resources. Individual processor- memory nodes can be activated or powered down at run time to provide active power management and to configure around faults. A SPAMS system is comprised of a distributed Gilgamesh array built into MFS, interfaces into instrument and communication subsystems, a mass storage interface, and a radiation-hardened flight computer.
ms2: A molecular simulation tool for thermodynamic properties
NASA Astrophysics Data System (ADS)
Deublein, Stephan; Eckl, Bernhard; Stoll, Jürgen; Lishchuk, Sergey V.; Guevara-Carrion, Gabriela; Glass, Colin W.; Merker, Thorsten; Bernreuther, Martin; Hasse, Hans; Vrabec, Jadran
2011-11-01
This work presents the molecular simulation program ms2 that is designed for the calculation of thermodynamic properties of bulk fluids in equilibrium consisting of small electro-neutral molecules. ms2 features the two main molecular simulation techniques, molecular dynamics (MD) and Monte-Carlo. It supports the calculation of vapor-liquid equilibria of pure fluids and multi-component mixtures described by rigid molecular models on the basis of the grand equilibrium method. Furthermore, it is capable of sampling various classical ensembles and yields numerous thermodynamic properties. To evaluate the chemical potential, Widom's test molecule method and gradual insertion are implemented. Transport properties are determined by equilibrium MD simulations following the Green-Kubo formalism. ms2 is designed to meet the requirements of academia and industry, particularly achieving short response times and straightforward handling. It is written in Fortran90 and optimized for a fast execution on a broad range of computer architectures, spanning from single processor PCs over PC-clusters and vector computers to high-end parallel machines. The standard Message Passing Interface (MPI) is used for parallelization and ms2 is therefore easily portable to different computing platforms. Feature tools facilitate the interaction with the code and the interpretation of input and output files. The accuracy and reliability of ms2 have been shown for a large variety of fluids in preceding work. Program summary: Program title: ms2 Catalogue identifier: AEJF_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJF_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Special Licence supplied by the authors No. of lines in distributed program, including test data, etc.: 82 794 No. of bytes in distributed program, including test data, etc.: 793 705 Distribution format: tar.gz Programming language: Fortran90 Computer: The simulation tool ms2 is usable on a wide variety of platforms, from single processor machines over PC-clusters and vector computers to vector-parallel architectures. (Tested with Fortran compilers: gfortran, Intel, PathScale, Portland Group and Sun Studio.) Operating system: Unix/Linux, Windows Has the code been vectorized or parallelized?: Yes, Message Passing Interface (MPI) protocol. Scalability: excellent scalability up to 16 processors for molecular dynamics and >512 processors for Monte-Carlo simulations. RAM: ms2 runs on single processors with 512 MB RAM. The memory demand rises with increasing number of processors used per node and increasing number of molecules. Classification: 7.7, 7.9, 12 External routines: Message Passing Interface (MPI) Nature of problem: Calculation of application oriented thermodynamic properties for rigid electro-neutral molecules: vapor-liquid equilibria, thermal and caloric data as well as transport properties of pure fluids and multi-component mixtures. Solution method: Molecular dynamics, Monte-Carlo, various classical ensembles, grand equilibrium method, Green-Kubo formalism. Restrictions: No. The system size is user-defined. Typical problems addressed by ms2 can be solved by simulating systems containing typically 2000 molecules or less. Unusual features: Feature tools are available for creating input files, analyzing simulation results and visualizing molecular trajectories. Additional comments: Sample makefiles for multiple operation platforms are provided.
Documentation is provided with the installation package and is available at http://www.ms-2.de. Running time: The running time of ms2 depends on the problem set, the system size and the number of processes used in the simulation. Running four processes on a "Nehalem" processor, simulations calculating VLE data take between two and twelve hours, calculating transport properties between six and 24 hours.
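To make the Green-Kubo route mentioned above concrete, here is a toy calculation of a self-diffusion coefficient from a velocity autocorrelation function, D = (1/3) ∫ ⟨v(0)·v(t)⟩ dt, using a synthetic Ornstein-Uhlenbeck velocity process in place of an MD trajectory (all parameters are made up):

```python
import numpy as np

# Toy Green-Kubo calculation: D = (1/3) * integral of <v(0).v(t)> dt.
# The synthetic OU process below stands in for MD output; for this process
# D = 1/(2*gamma**2) analytically (~0.125 with the values used here).
rng = np.random.default_rng(1)
n_steps, n_mol, dt, gamma = 10000, 50, 5e-3, 2.0

v = np.zeros((n_steps, n_mol, 3))
for t in range(1, n_steps):
    v[t] = v[t-1] * (1.0 - gamma*dt) + rng.normal(scale=np.sqrt(dt), size=(n_mol, 3))

max_lag = 400   # correlation window, long enough for the VACF to decay
vacf = np.array([np.mean(np.sum(v[:n_steps-lag] * v[lag:], axis=2))
                 for lag in range(max_lag)])
D = np.trapz(vacf, dx=dt) / 3.0
print("Green-Kubo estimate of D:", D)
```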
SU(2) lattice gauge theory simulations on Fermi GPUs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardoso, Nuno, E-mail: nunocardoso@cftp.ist.utl.p; Bicudo, Pedro, E-mail: bicudo@ist.utl.p
2011-05-10
In this work we explore the performance of CUDA in quenched lattice SU(2) simulations. CUDA, NVIDIA Compute Unified Device Architecture, is a hardware and software architecture developed by NVIDIA for computing on the GPU. We present an analysis and performance comparison between the GPU and CPU in single and double precision. Analyses with multiple GPUs and two different architectures (G200 and Fermi architectures) are also presented. In order to obtain a high performance, the code must be optimized for the GPU architecture, i.e., an implementation that exploits the memory hierarchy of the CUDA programming model. We produce codes for the Monte Carlo generation of SU(2) lattice gauge configurations, for the mean plaquette, for the Polyakov Loop at finite T and for the Wilson loop. We also present results for the potential using many configurations (50,000) without smearing and almost 2000 configurations with APE smearing. With two Fermi GPUs we have achieved an excellent performance of 200x the speed over one CPU, in single precision, around 110 Gflops/s. We also find that, using the Fermi architecture, double precision computations for the static quark-antiquark potential are not much slower (less than 2x slower) than single precision computations.
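The observable at the heart of the benchmark, the mean plaquette (1/2) Re Tr U_p, can be sketched on the CPU with NumPy as follows; this is a hot random start for illustration, not the paper's CUDA heatbath code:

```python
import numpy as np

# Mean plaquette on a small SU(2) lattice; random ("hot") links, so the
# result comes out near zero rather than a thermalized value.
rng = np.random.default_rng(2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def random_su2():
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)        # point on S^3 parameterizes SU(2)
    return a[0] * np.eye(2, dtype=complex) + 1j * sum(a[k+1] * sigma[k] for k in range(3))

L, dim = 4, 4
links = np.array([[random_su2() for _ in range(dim)]
                  for _ in range(L**dim)]).reshape(L, L, L, L, dim, 2, 2)

def shift(x, mu):
    x = list(x); x[mu] = (x[mu] + 1) % L   # periodic neighbor in direction mu
    return tuple(x)

total, count = 0.0, 0
for x in np.ndindex(L, L, L, L):
    for mu in range(dim):
        for nu in range(mu + 1, dim):
            Up = (links[x][mu] @ links[shift(x, mu)][nu]
                  @ links[shift(x, nu)][mu].conj().T @ links[x][nu].conj().T)
            total += 0.5 * np.trace(Up).real
            count += 1
print("mean plaquette (hot start):", total / count)
```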
Uncluttered Single-Image Visualization of Vascular Structures using GPU and Integer Programming
Won, Joong-Ho; Jeon, Yongkweon; Rosenberg, Jarrett; Yoon, Sungroh; Rubin, Geoffrey D.; Napel, Sandy
2013-01-01
Direct projection of three-dimensional branching structures, such as networks of cables, blood vessels, or neurons, onto a 2D image creates the illusion of intersecting structural parts and poses challenges for understanding and communication. We present a method for visualizing such structures, and demonstrate its utility in visualizing the abdominal aorta and its branches, whose tomographic images might be obtained by computed tomography or magnetic resonance angiography, in a single two-dimensional stylistic image, without overlaps among branches. The visualization method, termed uncluttered single-image visualization (USIV), involves optimization of geometry. This paper proposes a novel optimization technique that exploits an interesting connection between the USIV optimization problem and the protein structure prediction problem. Adopting the integer linear programming-based formulation for the protein structure prediction problem, we tested the proposed technique using 30 visualizations produced from five patient scans with representative anatomical variants in the abdominal aortic vessel tree. The novel technique can exploit commodity-level parallelism, enabling use of general-purpose graphics processing unit (GPGPU) technology that yields a significant speedup. Comparison of the results with the other optimization technique previously reported elsewhere suggests that, in most aspects, the quality of the visualization is comparable to that of the previous one, with a significant gain in the computation time of the algorithm. PMID:22291148
A versatile optical microscope for time-dependent single-molecule and single-particle spectroscopy
NASA Astrophysics Data System (ADS)
Li, Hao; Yang, Haw
2018-03-01
This work reports the design and implementation of a multi-function optical microscope for time-dependent spectroscopy on single molecules and single nanoparticles. It integrates the now-routine single-object measurements into one standalone platform so that no reconfiguration is needed when switching between different types of sample or spectroscopy modes. The illumination modes include evanescent field through total internal reflection, dark-field illumination, and epi-excitation onto a diffraction-limited spot suitable for confocal detection. The detection modes include spectrally resolved line imaging, wide-field imaging with dual-color capability, and two-color single-element photon-counting detection. The switch between different spectroscopy and data acquisition modes is fully automated and executed through computer programming. The capability of this microscope is demonstrated through selected proof-of-principle experiments.
NASA Astrophysics Data System (ADS)
Yang, Sheng-Chun; Lu, Zhong-Yuan; Qian, Hu-Jun; Wang, Yong-Lei; Han, Jie-Ping
2017-11-01
In this work, we upgraded the electrostatic interaction method CU-ENUF (Yang, et al., 2016), which first applied CUNFFT (nonequispaced fast Fourier transforms based on CUDA) to the reciprocal-space electrostatic computation, so that the electrostatic interaction is now computed entirely on the GPU. The upgraded edition of CU-ENUF runs in a hybrid parallel fashion: the computation is first parallelized across multiple computer nodes and then further parallelized on the GPU installed in each node. With this parallel strategy, the size of the simulation system is no longer restricted by the throughput of a single CPU or GPU. The most critical technical problem, parallelizing CUNFFT within this strategy, is solved through careful analysis of its basic principles and several algorithmic techniques. Furthermore, the upgraded method is capable of computing electrostatic interactions for both atomistic molecular dynamics (MD) and dissipative particle dynamics (DPD). Finally, benchmarks conducted for validation and performance indicate that the upgraded method not only attains good precision with suitable parameters but also computes electrostatic interactions efficiently for huge simulation systems. Program Files doi: http://dx.doi.org/10.17632/zncf24fhpv.1 Licensing provisions: GNU General Public License 3 (GPL) Programming language: C, C++, and CUDA C Supplementary material: The program is designed for effective electrostatic interactions of large-scale simulation systems, and runs on computers equipped with NVIDIA GPUs. It has been tested on (a) a single computer node with an Intel(R) Core(TM) i7-3770 @ 3.40 GHz (CPU) and a GTX 980 Ti (GPU), and (b) MPI parallel computer nodes with the same configuration. Nature of problem: In molecular dynamics simulation, the electrostatic interaction is the most time-consuming computation because of its long-range character and slow convergence in simulation space, and it accounts for most of the total simulation time. Although the GPU-based parallel method CU-ENUF (Yang et al., 2016) achieved a qualitative leap over previous methods for computing electrostatic interactions, its capacity is limited to the throughput of a single GPU for super-scale simulation systems. An effective method is therefore needed to handle the electrostatic interactions of super-scale simulation systems efficiently. Solution method: We constructed a hybrid parallel architecture in which CPUs and GPUs are combined to accelerate the electrostatic computation effectively. First, the simulation system is divided into many subtasks via a domain-decomposition method. MPI (Message Passing Interface) then implements the CPU-level parallelism, with each computer node handling a particular subtask, and each subtask is in turn executed efficiently in parallel on the GPU of that node. In this hybrid parallel method, the most critical technical problem, parallelizing CUNFFT (nonequispaced fast Fourier transform based on CUDA), is solved through careful analysis of its basic principles and several algorithmic techniques. Restrictions: HP-ENUF is mainly oriented to super-scale system simulations, where its performance advantage shows most clearly.
However, for a small simulation system containing fewer than 10^6 particles, the multiple-node mode offers no apparent efficiency advantage over the single-node mode, and may even be less efficient, because of network delay among computer nodes. References: (1) S.-C. Yang, H.-J. Qian, Z.-Y. Lu, Appl. Comput. Harmon. Anal. 2016, http://dx.doi.org/10.1016/j.acha.2016.04.009. (2) S.-C. Yang, Y.-L. Wang, G.-S. Jiao, H.-J. Qian, Z.-Y. Lu, J. Comput. Chem. 37 (2016) 378. (3) S.-C. Yang, Y.-L. Zhu, H.-J. Qian, Z.-Y. Lu, Appl. Chem. Res. Chin. Univ., 2017, http://dx.doi.org/10.1007/s40242-016-6354-5. (4) Y.-L. Zhu, H. Liu, Z.-W. Li, H.-J. Qian, G. Milano, Z.-Y. Lu, J. Comput. Chem. 34 (2013) 2197.
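A rough skeleton of the hybrid MPI-plus-GPU decomposition described above, assuming mpi4py for the MPI layer; the per-rank GPU stage (the CUNFFT kernel in HP-ENUF) is indicated only by a placeholder computation:

```python
import numpy as np
from mpi4py import MPI

# MPI distributes particle slabs across nodes; each rank's slab would then
# be handed to its local GPU (placeholder only in this sketch).
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_total = 1_000_000
counts = [n_total // size + (1 if r < n_total % size else 0) for r in range(size)]
start = sum(counts[:rank])
local = np.arange(start, start + counts[rank], dtype=np.float64)

# ... in the real code, this rank's subtask runs on its local GPU ...
local_energy = float(np.sum(np.cos(local) ** 2))   # placeholder payload

total = comm.allreduce(local_energy, op=MPI.SUM)
if rank == 0:
    print(f"global sum assembled from {size} rank(s): {total:.3f}")
```

Launched with, e.g., `mpiexec -n 4 python hp_enuf_sketch.py` (the file name is hypothetical).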
Computer-aided communication satellite system analysis and optimization
NASA Technical Reports Server (NTRS)
Stagl, T. W.; Morgan, N. H.; Morley, R. E.; Singh, J. P.
1973-01-01
The capabilities and limitations of the various published computer programs for fixed/broadcast communication satellite system synthesis and optimization are discussed. A Satellite Telecommunications Analysis and Modeling Program (STAMP) for costing and sensitivity analysis in the application of communication satellites to educational development is given. The modifications made to STAMP include: extension of the six-beam capability to eight; addition of generation of multiple beams from a single reflector system with an array of feeds; improved system costing to reflect the time value of money and growth in earth terminal population with time, and to account for various measures of system reliability; inclusion of a model for scintillation at microwave frequencies in the communication link loss model; and an updated technological environment.
NASA Astrophysics Data System (ADS)
Whaley, Gregory J.; Karnopp, Roger J.
2010-04-01
The goal of the Air Force Highly Integrated Photonics (HIP) program is to develop and demonstrate single photonic chip components which support a single mode fiber network architecture for use on mobile military platforms. We propose an optically transparent, broadcast and select fiber optic network as the next generation interconnect on avionics platforms. In support of this network, we have developed three principal, single-chip photonic components: a tunable laser transmitter, a 32x32 port star coupler, and a 32 port multi-channel receiver which are all compatible with demanding avionics environmental and size requirements. The performance of the developed components will be presented as well as the results of a demonstration system which integrates the components into a functional network representative of the form factor used in advanced avionics computing and signal processing applications.
Space processing of crystals for opto-electronic devices: The case for solution growth
NASA Technical Reports Server (NTRS)
Hayden, S. C.; Cross, L. E.
1975-01-01
The results obtained during a six-month program aimed at determining the viability of space processing in the 1980's of dielectric-elastic-magnetic single crystals are described. The results of this program included: identification of some important emerging technologies dependent on dielectric-elastic-magnetic crystals, identification of the impact of intrinsic properties and defects in the single crystals on system performance, determination of a sensible common basis for the many crystals of this class, and identification of the benefits of micro-gravity and some initial experimental evidence that these benefits can be realized in space. It is concluded that advanced computers and optical communications are at a development stage that will create high demand for dielectric-elastic-magnetic single crystals in the mid-1980's. Their high unit cost and promise of significantly increased perfection by growth in space justify pursuit of space processing.
CUGatesDensity—Quantum circuit analyser extended to density matrices
NASA Astrophysics Data System (ADS)
Loke, T.; Wang, J. B.
2013-12-01
CUGatesDensity is an extension of the original quantum circuit analyser CUGates (Loke and Wang, 2011) [7] to provide explicit support for the use of density matrices. The new package enables simulation of quantum circuits involving statistical ensemble of mixed quantum states. Such analysis is of vital importance in dealing with quantum decoherence, measurements, noise and error correction, and fault tolerant computation. Several examples involving mixed state quantum computation are presented to illustrate the use of this package. Catalogue identifier: AEPY_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEPY_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5368 No. of bytes in distributed program, including test data, etc.: 143994 Distribution format: tar.gz Programming language: Mathematica. Computer: Any computer installed with a copy of Mathematica 6.0 or higher. Operating system: Any system with a copy of Mathematica 6.0 or higher installed. Classification: 4.15. Nature of problem: To simulate arbitrarily complex quantum circuits comprised of single/multiple qubit and qudit quantum gates with mixed state registers. Solution method: A density matrix representation for mixed states and a state vector representation for pure states are used. The construct is based on an irreducible form of matrix decomposition, which allows a highly efficient implementation of general controlled gates with multiple conditionals. Running time: The examples provided in the notebook CUGatesDensity.nb take approximately 30 s to run on a laptop PC.
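The core operation the package adds, propagating a density matrix through a gate as ρ → UρU†, is easy to sketch outside Mathematica; a minimal standalone Python version with a maximally mixed input state:

```python
import numpy as np

# Evolve a density matrix rho through a gate U via rho -> U rho U†, which is
# how statistical mixtures propagate through a circuit. (Sketch only, not the
# Mathematica package itself.)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

# 50/50 classical mixture of |0> and |1>: the maximally mixed state I/2
rho = (0.5 * np.diag([1.0, 0.0]) + 0.5 * np.diag([0.0, 1.0])).astype(complex)

rho_out = H @ rho @ H.conj().T
print(np.round(rho_out, 3))                    # I/2 is invariant under H
print("purity Tr(rho^2):", np.trace(rho_out @ rho_out).real)   # 0.5: still mixed
```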
Top ten reasons the World Wide Web may fail to change medical education.
Friedman, R B
1996-09-01
The Internet's World Wide Web (WWW) offers educators a unique opportunity to introduce computer-assisted instructional (CAI) programs into the medical school curriculum. With the WWW, CAI programs developed at one medical school could be successfully used at other institutions without concern about hardware or software compatibility; further, programs could be maintained and regularly updated at a single central location, could be distributed rapidly, would be technology-independent, and would be presented in the same format on all computers. However, while the WWW holds promise for CAI, the author discusses ten reasons that educators' efforts to fulfill the Web's promise may fail, including the following: CAI is generally not fully integrated into the medical school curriculum; students are not tested on material taught using CAI; and CAI programs tend to be poorly designed. The author argues that medical educators must overcome these obstacles if they are to make truly effective use of the WWW in the classroom.
A portable MPI-based parallel vector template library
NASA Technical Reports Server (NTRS)
Sheffler, Thomas J.
1995-01-01
This paper discusses the design and implementation of a polymorphic collection library for distributed address-space parallel computers. The library provides a data-parallel programming model for C++ by providing three main components: a single generic collection class, generic algorithms over collections, and generic algebraic combining functions. Collection elements are the fourth component of a program written using the library and may be either of the built-in types of C or of user-defined types. Many ideas are borrowed from the Standard Template Library (STL) of C++, although a restricted programming model is proposed because of the distributed address-space memory model assumed. Whereas the STL provides standard collections and implementations of algorithms for uniprocessors, this paper advocates standardizing interfaces that may be customized for different parallel computers. Just as the STL attempts to increase programmer productivity through code reuse, a similar standard for parallel computers could provide programmers with a standard set of algorithms portable across many different architectures. The efficacy of this approach is verified by examining performance data collected from an initial implementation of the library running on an IBM SP-2 and an Intel Paragon.
A computer program (MACPUMP) for interactive aquifer-test analysis
Day-Lewis, F. D.; Person, M.A.; Konikow, Leonard F.
1995-01-01
This report introduces MACPUMP (Version 1.0), an aquifer-test-analysis package for use with Macintosh computers. The report outlines the input-data format, describes the solutions encoded in the program, explains the menu items, and offers a tutorial illustrating the use of the program. The package reads list-directed aquifer-test data from a file, plots the data to the screen, generates and plots type curves for several different test conditions, and allows mouse-controlled curve matching. MACPUMP features pull-down menus, a simple text viewer for displaying data files, and optional on-line help windows. This version includes the analytical solutions for nonleaky and leaky confined aquifers, using both type curves and straight-line methods, and for the analysis of single-well slug tests using type curves. An executable version of the code and sample input data sets are included on an accompanying floppy disk.
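The nonleaky confined-aquifer solution that such type-curve matching rests on is the classical Theis equation, s = Q/(4πT)·W(u) with u = r²S/(4Tt), where W(u) is the exponential integral; a short sketch that generates the same curve (parameter values are made up):

```python
import numpy as np
from scipy.special import exp1

def theis_drawdown(Q, T, S, r, t):
    """Drawdown (m) at radius r (m) and time t (s); Q in m^3/s, T in m^2/s."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)   # W(u) = exponential integral E1

t = np.logspace(1, 6, 51)                    # 10 s to ~11.6 days
s = theis_drawdown(Q=0.01, T=1e-3, S=1e-4, r=30.0, t=t)
for ti, si in zip(t[::10], s[::10]):
    print(f"t = {ti:10.0f} s   s = {si:6.3f} m")
```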
An interactive web-based system using cloud for large-scale visual analytics
NASA Astrophysics Data System (ADS)
Kaseb, Ahmed S.; Berry, Everett; Rozolis, Erik; McNulty, Kyle; Bontrager, Seth; Koh, Youngsol; Lu, Yung-Hsiang; Delp, Edward J.
2015-03-01
Network cameras have been growing rapidly in recent years. Thousands of public network cameras provide a tremendous amount of visual information about the environment. There is a need to analyze this valuable information for a better understanding of the world around us. This paper presents an interactive web-based system that enables users to execute image analysis and computer vision techniques on a large scale to analyze the data from more than 65,000 worldwide cameras. This paper focuses on how to use both the system's website and Application Programming Interface (API). Given a computer program that analyzes a single frame, the user needs to make only slight changes to the existing program and choose the cameras to analyze. The system handles the heterogeneity of the geographically distributed cameras, e.g., different brands and resolutions. The system allocates and manages Amazon EC2 and Windows Azure cloud resources to meet the analysis requirements.
NASA Technical Reports Server (NTRS)
Sellers, J. P.
1976-01-01
Analysis of the data from heat pipe radiator systems tested in both vacuum and ambient environments was continued. The systems included (1) a feasibility VCHP header heat-pipe panel, (2) the same panel reworked to eliminate the VCHP feature and referred to as the feasibility fluid header panel, and (3) an optimized flight-weight fluid header panel termed the 'prototype.' A description of freeze-thaw thermal vacuum tests conducted on the feasibility VCHP panel is included. In addition, the results of ambient tests made on the feasibility fluid header are presented, including a comparison with analytical results. A thermal model of a fluid header heat pipe radiator was constructed and a computer program written. The program was used to compare the VCHP and fluid-header concepts for both single and multiple panel applications. The computer program was also employed for a parametric study, including optimum feeder heat pipe spacing, of the prototype fluid header.
Nordstrom, M A; Mapletoft, E A; Miles, T S
1995-11-01
A solution is described for the acquisition on a personal computer of standard pulses derived from neuronal discharge, measurement of neuronal discharge times, real-time control of stimulus delivery based on specified inter-pulse interval conditions in the neuronal spike train, and on-line display and analysis of the experimental data. The hardware consisted of an Apple Macintosh IIci computer and a plug-in card (National Instruments NB-MIO16) that supports A/D, D/A, digital I/O and timer functions. The software was written in the object-oriented graphical programming language LabView. Essential elements of the source code of the LabView program are presented and explained. The use of the system is demonstrated in an experiment in which the reflex responses to muscle stretch are assessed for a single motor unit in the human masseter muscle.
Feasibility of a home-delivered Internet obesity prevention program for fourth-grade students.
Owens, Scott; Lambert, Laurel; McDonough, Suzanne; Green, Kenneth; Loftin, Mark
2009-08-01
This pilot study examined the feasibility of an interactive obesity prevention program delivered to a class of fourth-grade students utilizing daily e-mail messages sent to the students' home computers. The study involved a single intact class of 22 students, 17 (77%) of whom submitted parental permission documentation and received e-mail messages each school day over the course of one month. Concerns regarding Internet safety and children's use of e-mail were addressed fairly easily. Cost/benefit issues for the school did not seem prohibitive. Providing e-mail access to students without a home computer was accomplished by loaning them personal digital assistant (PDA) devices. In larger interventions, loaning PDAs is probably not feasible economically, although cell phones may be an acceptable alternative. It was concluded that this type of interactive obesity prevention program is feasible from most perspectives. Data from a larger-scale effectiveness study are still needed.
NASA Technical Reports Server (NTRS)
Cassenti, B. N.
1983-01-01
The results of a 10-month research and development program for the development of advanced time-temperature constitutive relationships are presented. The program included (1) the effect of rate of change of temperature, (2) the development of a term to include time independent effects, and (3) improvements in computational efficiency. It was shown that rate of change of temperature could have a substantial effect on the predicted material response. A modification to include time-independent effects, applicable to many viscoplastic constitutive theories, was shown to reduce to classical plasticity. The computation time can be reduced by a factor of two if self-adaptive integration is used when compared to an integration using ordinary forward differences. During the course of the investigation, it was demonstrated that the most important single factor affecting the theoretical accuracy was the choice of material parameters.
26 CFR 1.861-18 - Classification of transactions involving computer programs.
Code of Federal Regulations, 2011 CFR
2011-04-01
... on a single disk for a one-time payment with restrictions on transfer and reverse engineering, which... license. The license is stated to be perpetual. Under the license no reverse engineering, decompilation... fee, on a World Wide Web home page on the Internet. P, the Country Z resident, in return for payment...
26 CFR 1.861-18 - Classification of transactions involving computer programs.
Code of Federal Regulations, 2010 CFR
2010-04-01
... on a single disk for a one-time payment with restrictions on transfer and reverse engineering, which... license. The license is stated to be perpetual. Under the license no reverse engineering, decompilation... fee, on a World Wide Web home page on the Internet. P, the Country Z resident, in return for payment...
2002-06-01
time, the monkey would eventually produce the collected works of Shakespeare. Unfortunately for the analogist, systems, even live ones, do not work...limited his simulated computer monkey to producing, in a single random step, the sentence uttered by Polonius in the play Hamlet: "Methinks it is
The Revenue vs. Service Balance
ERIC Educational Resources Information Center
Savarese, John
2006-01-01
Ten years ago, students at the University of Vermont (UVM) had to carry separate ID cards, meal cards, and athletic cards. Today, the single CATcard combines all of these functions, plus library privileges, an optional declining balance program called CAT$cratch, access to computer labs, use of vending machines without quarters, and even a ride on…
Ruhl, J.F.
2002-01-01
A steady-state, single-layer, two-dimensional ground-water flow model constructed with the computer program MODFLOW, combined with the particle-tracking computer program MODPATH, was used to track water particles (upgradient) from the two well fields. A withdrawal rate of 625 m³/d was simulated for each well field. The ground-water flow paths delineated areas of contributing recharge that are 0.38 and 0.65 km² based on 10- and 50-year travel times, respectively. The flow paths that define these areas extend for maximum distances of about 350 and 450 m, respectively, from the wells. At well field A the area of contributing recharge was delineated for each well as separate withdrawal points. At well field B the area of contributing recharge was delineated for the two wells as a single withdrawal point. Delineation of areas of contributing recharge to the well fields from land surface would require construction of a multi-layer ground-water flow model.
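A toy version of the particle-tracking idea: an analytical velocity field (uniform regional flow plus a pumping-well sink at the origin) stands in for MODFLOW's cell-by-cell budgets, and coarse backward Euler steps stand in for MODPATH's semi-analytical tracking; all numbers are illustrative assumptions:

```python
import numpy as np

def velocity(p, q_well=1e-3, u_regional=(1e-6, 0.0)):
    dx, dy = p
    r2 = max(dx*dx + dy*dy, 1e-6)
    # radial sink (per unit aquifer thickness) superposed on regional flow
    return np.array([u_regional[0] - q_well * dx / (2*np.pi*r2),
                     u_regional[1] - q_well * dy / (2*np.pi*r2)])

def track_back(p0, dt=86400.0, years=10):
    p = np.array(p0, dtype=float)
    for _ in range(int(years * 365)):
        p -= dt * velocity(p)            # reversed time = upgradient motion
    return p

# Seed a ring of particles around the well; the backward endpoints outline
# a 10-year area of contributing recharge for this toy field.
for ang in np.linspace(0.0, 2*np.pi, 8, endpoint=False):
    end = track_back(1.0 * np.array([np.cos(ang), np.sin(ang)]))
    print(np.round(end, 1))
```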
Circulation control propellers for general aviation, including a BASIC computer program
NASA Technical Reports Server (NTRS)
Taback, I.; Braslow, A. L.; Butterfield, A. J.
1983-01-01
The feasibility of replacing variable pitch propeller mechanisms with circulation control (Coanda effect) propellers on general aviation airplanes was examined. The study used a specially developed computer program written in BASIC which could compare the aerodynamic performance of circulation control propellers with conventional propellers. The comparison of aerodynamic performance for circulation control, fixed pitch, and variable pitch propellers is based upon the requirements for a 1600 kg (3600 lb) single engine general aviation aircraft. A circulation control propeller using a supercritical airfoil was shown feasible over a representative range of design conditions. At a design condition for high speed cruise, all three types of propellers showed approximately the same performance. At low speed, the performance of the circulation control propeller exceeded the performance for a fixed pitch propeller, but did not match the performance available from a variable pitch propeller. It appears feasible to consider circulation control propellers for single engine aircraft or multiengine aircraft which have their propellers on a common axis (tractor-pusher). The economics of the replacement requires a study for each specific airplane application.
A Spreadsheet for the Mixing of a Row of Jets with a Confined Crossflow
NASA Technical Reports Server (NTRS)
Holderman, J. D.; Smith, T. D.; Clisset, J. R.; Lear, W. E.
2005-01-01
An interactive computer code, written with a readily available software program, Microsoft Excel (Microsoft Corporation, Redmond, WA), is presented which displays 3-D oblique plots of a conserved scalar distribution downstream of jets mixing with a confined crossflow, for a single row, double rows, or opposed rows of jets with or without flow area convergence and/or a non-uniform crossflow scalar distribution. This project used a previously developed empirical model of jets mixing in a confined crossflow to create a Microsoft Excel spreadsheet that can output the profiles of a conserved scalar for jets injected into a confined crossflow given several input variables. The program uses multiple spreadsheets in a single Microsoft Excel notebook to carry out the modeling. The first sheet contains the main program, controls for the type of problem to be solved, and convergence criteria. The first sheet also provides for input of the specific geometry and flow conditions. The second sheet presents the results calculated with this routine to show the effects on the mixing of varying flow and geometric parameters. Comparisons are also made between results from the version of the empirical correlations implemented in the spreadsheet and the versions originally written in Applesoft BASIC (Apple Computer, Cupertino, CA) in the 1980s.
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.; Jones, Scott M.
1991-01-01
This analysis and this computer code apply to full, split, and dual expander cycles. Heat regeneration from the turbine exhaust to the pump exhaust is allowed. The combustion process is modeled as one of chemical equilibrium in an infinite-area or a finite-area combustor. Gas composition in the nozzle may be either equilibrium or frozen during expansion. This report, which serves as a user's guide for the computer code, describes the system, the analysis methodology, and the program input and output. Sample calculations are included to show effects of key variables such as nozzle area ratio and oxidizer-to-fuel mass ratio.
Parallel computing using a Lagrangian formulation
NASA Technical Reports Server (NTRS)
Liou, May-Fun; Loh, Ching Yuen
1991-01-01
A new Lagrangian formulation of the Euler equation is adopted for the calculation of 2-D supersonic steady flow. The Lagrangian formulation represents the inherent parallelism of the flow field better than the common Eulerian formulation and offers a competitive alternative on parallel computers. The implementation of the Lagrangian formulation on the Thinking Machines Corporation CM-2 Computer is described. The program uses a finite volume, first-order Godunov scheme and exhibits high accuracy in dealing with multidimensional discontinuities (slip-line and shock). By using this formulation, a better than six times speed-up was achieved on an 8192-processor CM-2 over a single processor of a CRAY-2.
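The simplest setting for the finite-volume, first-order Godunov scheme mentioned above is 1-D linear advection, where the exact Riemann solution at each cell face is simply the upwind value; a compact sketch:

```python
import numpy as np

# First-order Godunov scheme for q_t + a q_x = 0 with a > 0: the flux at
# each face is a times the upwind cell value. Periodic boundaries,
# conservative finite-volume update.
nx, a, cfl = 200, 1.0, 0.9
dx = 1.0 / nx
dt = cfl * dx / a
x = np.linspace(0.0, 1.0, nx, endpoint=False)
q = np.where(np.abs(x - 0.3) < 0.1, 1.0, 0.0)   # square pulse

for _ in range(100):
    flux_left = a * np.roll(q, 1)               # Godunov flux at left faces
    q = q - dt / dx * (a * q - flux_left)       # conservative update
print("mass after 100 steps:", q.sum() * dx)    # exactly conserved
```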
Parallel computing using a Lagrangian formulation
NASA Technical Reports Server (NTRS)
Liou, May-Fun; Loh, Ching-Yuen
1992-01-01
This paper adopts a new Lagrangian formulation of the Euler equation for the calculation of two dimensional supersonic steady flow. The Lagrangian formulation represents the inherent parallelism of the flow field better than the common Eulerian formulation and offers a competitive alternative on parallel computers. The implementation of the Lagrangian formulation on the Thinking Machines Corporation CM-2 Computer is described. The program uses a finite volume, first-order Godunov scheme and exhibits high accuracy in dealing with multidimensional discontinuities (slip-line and shock). By using this formulation, we have achieved better than six times speed-up on an 8192-processor CM-2 over a single processor of a CRAY-2.
Field-Programmable Gate Array Computer in Structural Analysis: An Initial Exploration
NASA Technical Reports Server (NTRS)
Singleterry, Robert C., Jr.; Sobieszczanski-Sobieski, Jaroslaw; Brown, Samuel
2002-01-01
This paper reports on an initial assessment of using a Field-Programmable Gate Array (FPGA) computational device as a new tool for solving structural mechanics problems. A FPGA is an assemblage of binary gates arranged in logical blocks that are interconnected via software in a manner dependent on the algorithm being implemented and can be reprogrammed thousands of times per second. In effect, this creates a computer specialized for the problem that automatically exploits all the potential for parallel computing intrinsic in an algorithm. This inherent parallelism is the most important feature of the FPGA computational environment. It is therefore important that if a problem offers a choice of different solution algorithms, an algorithm of a higher degree of inherent parallelism should be selected. It is found that in structural analysis, an 'analog computer' style of programming, which solves problems by direct simulation of the terms in the governing differential equations, yields a more favorable solution algorithm than current solution methods. This style of programming is facilitated by a 'drag-and-drop' graphic programming language that is supplied with the particular type of FPGA computer reported in this paper. Simple examples in structural dynamics and statics illustrate the solution approach used. The FPGA system also allows linear scalability in computing capability. As the problem grows, the number of FPGA chips can be increased with no loss of computing efficiency due to data flow or algorithmic latency that occurs when a single problem is distributed among many conventional processors that operate in parallel. This initial assessment finds the FPGA hardware and software to be in their infancy in regard to the user conveniences; however, they have enormous potential for shrinking the elapsed time of structural analysis solutions if programmed with algorithms that exhibit inherent parallelism and linear scalability. This potential warrants further development of FPGA-tailored algorithms for structural analysis.
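A small illustration of the 'analog computer' programming style described above: each term of a single-degree-of-freedom structural equation m x'' + c x' + k x = 0 is computed as its own block and fed through two chained integrators, as one would wire it on the FPGA; the values are illustrative:

```python
# Direct simulation of the governing-equation terms, 'analog computer' style.
m, c, k = 1.0, 0.1, 40.0
x, v, dt = 1.0, 0.0, 1e-3        # initial displacement, velocity, time step

for step in range(5001):
    a = (-c * v - k * x) / m     # summing block: acceleration from the terms
    v += a * dt                  # integrator 1: acceleration -> velocity
    x += v * dt                  # integrator 2: velocity -> displacement
    if step % 1000 == 0:
        print(f"t = {step*dt:4.1f} s   x = {x:+.4f}")
```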
1992-12-28
Chief, Cloud Physics Section at the Phillips Laboratory Geophysics Directorate, for his assistance both in my research and in preparing this paper; Lisa...American soccer ball. Due to their hollow closed structure, the buckyballs can be used to "cage" other molecules. This potential has created a great deal of...forming a symmetrical sphere. Physically modeling the fullerene on the computer began with the formation of a single pentagon. This pentagon was
Effect of Automatic Processing on Specification of Problem Solutions for Computer Programs.
1981-03-01
Number 7 ± 2" item limitaion on human short-term memory capability (Miller, 1956) should be a guiding principle in program design. Yourdon and...input either a single example solution or multiple exam’- le solutions in sequence. If a participant’s P1 has a low value - near 0 - it may be concluded... Principles in Experimental Design, Winer ,1971). 55 Table 12 ANOVA Resultt, For Performance Measure 2 Sb DF MS F Source of Variation Between Subjects
NASA Technical Reports Server (NTRS)
Melton, John E.
1994-01-01
EGADS is a comprehensive preliminary design tool for estimating the performance of light, single-engine general aviation aircraft. The software runs on the Apple Macintosh series of personal computers and assists amateur designers and aeronautical engineering students in performing the many repetitive calculations required in the aircraft design process. The program makes full use of the mouse and standard Macintosh interface techniques to simplify the input of various design parameters. Extensive graphics, plotting, and text output capabilities are also included.
Fracture Mechanics Analysis of Single and Double Rows of Fastener Holes Loaded in Bearing
1976-04-01
the following subprograms for execution: 1. ASRL FEABL-2 subroutines ASMLTV, ASMSUB, BCON, FACT, ORK, QBACK, SETUP, SIMULQ, STACON, and XTRACT. 2. IBM ...based on program code generated by IBM FORTRAN-G1 and FORTRAN-H compilers, with demonstration runs made on an IBM 370/168 computer. Programs SROW and DROW are supplied ready to execute on systems with IBM-standard FORTRAN unit numbers for the card reader (unit 5) and line printer (unit 6). The
A computerized compensator design algorithm with launch vehicle applications
NASA Technical Reports Server (NTRS)
Mitchell, J. R.; Mcdaniel, W. L., Jr.
1976-01-01
This short paper presents a computerized algorithm for the design of compensators for large launch vehicles. The algorithm is applicable to the design of compensators for linear, time-invariant, control systems with a plant possessing a single control input and multioutputs. The achievement of frequency response specifications is cast into a strict constraint mathematical programming format. An improved solution algorithm for solving this type of problem is given, along with the mathematical necessities for application to systems of the above type. A computer program, compensator improvement program (CIP), has been developed and applied to a pragmatic space-industry-related example.
NASA Technical Reports Server (NTRS)
Aggarwal, Arun K.
1993-01-01
The computer program SASHBEAN (Sikorsky Aircraft Spherical Roller High Speed Bearing Analysis) analyzes and predicts the operating characteristics of a Single Row, Angular Contact, Spherical Roller Bearing (SRACSRB). The program runs on an IBM or IBM-compatible personal computer and, for a given set of input data, analyzes the bearing design for its ring deflections (axial and radial), roller deflections, contact areas and stresses, induced axial thrust, rolling element and cage rotation speeds, lubrication parameters, fatigue lives, and amount of heat generated in the bearing. The dynamic loading of rollers due to centrifugal forces and gyroscopic moments, which becomes quite significant at high speeds, is fully considered in this analysis. For a known application and its parameters, the program is also capable of performing steady-state and time-transient thermal analyses of the bearing system. The steady-state analysis capability allows the user to estimate the expected steady-state temperature map in and around the bearing under normal operating conditions. The transient analysis feature provides the user a means to simulate the 'lost lubricant' condition and predict a time-temperature history of various critical points in the system. The bearing's 'time-to-failure' estimate may also be made from this (transient) analysis by considering the bearing as failed when a certain temperature limit is reached in the bearing components. The program is fully interactive and allows the user to get started and access most of its features with minimal training. For the most part, the program is menu driven, and adequate help messages are provided to guide a new user through the various menu options and data input screens. All input data, both for mechanical and thermal analyses, are read through graphical input screens, thereby eliminating any need for a separate text editor/word processor to edit/create data files. Provision is also available to select and view the contents of output files on the monitor screen if no paper printouts are required. A separate volume (Volume 2) of this documentation describes, in detail, the underlying mathematical formulations, assumptions, and solution algorithms of this program.
Hybrid Metaheuristics for Solving a Fuzzy Single Batch-Processing Machine Scheduling Problem
Molla-Alizadeh-Zavardehi, S.; Tavakkoli-Moghaddam, R.; Lotfi, F. Hosseinzadeh
2014-01-01
This paper deals with a problem of minimizing total weighted tardiness of jobs in a real-world single batch-processing machine (SBPM) scheduling in the presence of fuzzy due date. In this paper, first a fuzzy mixed integer linear programming model is developed. Then, due to the complexity of the problem, which is NP-hard, we design two hybrid metaheuristics called GA-VNS and VNS-SA applying the advantages of genetic algorithm (GA), variable neighborhood search (VNS), and simulated annealing (SA) frameworks. Besides, we propose three fuzzy earliest due date heuristics to solve the given problem. Through computational experiments with several random test problems, a robust calibration is applied on the parameters. Finally, computational results on different-scale test problems are presented to compare the proposed algorithms. PMID:24883359
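For orientation, the objective being minimized can be evaluated with a crisp (non-fuzzy) sketch: jobs are batched greedily after an earliest-due-date sort, a batch takes as long as its longest job, and weighted tardiness is summed; the job data below are made up:

```python
# Total weighted tardiness on a single batch-processing machine, with a
# simple EDD-based greedy batching heuristic (illustrative, not the paper's
# GA-VNS or VNS-SA metaheuristics).
jobs = [  # (processing_time, size, due_date, weight)
    (4, 3, 6, 1.0), (2, 5, 3, 2.0), (6, 2, 12, 1.5),
    (3, 4, 5, 1.0), (5, 6, 14, 0.5),
]
capacity = 10                          # max total job size per batch

jobs.sort(key=lambda j: j[2])          # earliest-due-date order
batches, current, load = [], [], 0
for j in jobs:
    if load + j[1] > capacity:         # start a new batch when the job won't fit
        batches.append(current)
        current, load = [], 0
    current.append(j)
    load += j[1]
if current:
    batches.append(current)

t, twt = 0, 0.0
for batch in batches:
    t += max(j[0] for j in batch)      # a batch runs as long as its longest job
    twt += sum(j[3] * max(0, t - j[2]) for j in batch)
print(f"{len(batches)} batches, total weighted tardiness = {twt}")
```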
NASA Astrophysics Data System (ADS)
Chang, S. L.; Lottes, S. A.; Berry, G. F.
Argonne National Laboratory is investigating the non-reacting jet-gas mixing patterns in a magnetohydrodynamics (MHD) second stage combustor by using a three-dimensional single-phase hydrodynamics computer program. The computer simulation is intended to enhance the understanding of flow and mixing patterns in the combustor, which in turn may improve downstream MHD channel performance. The code is used to examine the three-dimensional effects of the side walls and the distributed jet flows on the non-reacting jet-gas mixing patterns. The code solves the conservation equations of mass, momentum, and energy, and a transport equation of a turbulence parameter and allows permeable surfaces to be specified for any computational cell.
NASA Technical Reports Server (NTRS)
Bogart, Edward H. (Inventor); Pope, Alan T. (Inventor)
2000-01-01
A system for display on a single video display terminal of multiple physiological measurements is provided. A subject is monitored by a plurality of instruments which feed data to a computer programmed to receive data, calculate data products such as index of engagement and heart rate, and display the data in a graphical format simultaneously on a single video display terminal. In addition, live video representing the view of the subject and the experimental setup may also be integrated into the single data display. The display may be recorded on a standard videotape recorder for retrospective analysis.
NASA Astrophysics Data System (ADS)
Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P.
2003-05-01
The two Numerical Recipes books are marvellous. The principal book, The Art of Scientific Computing, contains program listings for almost every conceivable requirement, and it also contains a well-written discussion of the algorithms and the numerical methods involved. The Example Book provides a complete driving program, with helpful notes, for nearly all the routines in the principal book. The first edition of Numerical Recipes: The Art of Scientific Computing was published in 1986 in two versions, one with programs in Fortran, the other with programs in Pascal. There were subsequent versions with programs in BASIC and in C. The second, enlarged edition was published in 1992, again in two versions, one with programs in Fortran (NR(F)), the other with programs in C (NR(C)). In 1996 the authors produced Numerical Recipes in Fortran 90: The Art of Parallel Scientific Computing as a supplement, called Volume 2, with the original (Fortran) version referred to as Volume 1. Numerical Recipes in C++ (NR(C++)) is another version of the 1992 edition. The numerical recipes are also available on a CD ROM: if you want to use any of the recipes, I would strongly advise you to buy the CD ROM. The CD ROM contains the programs in all the languages. When the first edition was published I bought it, and have also bought copies of the other editions as they have appeared. Anyone involved in scientific computing ought to have a copy of at least one version of Numerical Recipes, and there also ought to be copies in every library. If you already have NR(F), should you buy NR(C++) and, if not, which version should you buy? In the preface to Volume 2 of NR(F), the authors say 'C and C++ programmers have not been far from our minds as we have written this volume, and we think that you will find that time spent in absorbing its principal lessons will be amply repaid in the future as C and C++ eventually develop standard parallel extensions'. In the preface and introduction to NR(C++), the authors point out some of the problems in the use of C++ in scientific computing. I have not found any mention of parallel computing in NR(C++). Fortran has quite a lot going for it. As someone who has used it in most of its versions from Fortran II, I have seen it develop and leave behind other languages promoted by various enthusiasts: who now uses Algol or Pascal? I think it unlikely that C++ will disappear: it was devised as a systems language, and can also be used for other purposes such as scientific computing. It is possible that Fortran will disappear, but Fortran has the strengths that it can develop, that there are extensive Fortran subroutine libraries, and that it has been developed for parallel computing. To argue with programmers as to which is the best language to use is sterile. If you wish to use C++, then buy NR(C++), but you should also look at Volume 2 of NR(F). If you are a Fortran programmer, then make sure you have NR(F), Volumes 1 and 2. But whichever language you use, make sure you have one version or the other, and the CD ROM. The Example Book provides listings of complete programs to run nearly all the routines in NR, frequently based on cases where an analytical solution is available. It is helpful when developing a new program incorporating an unfamiliar routine to see that routine actually working, and this is what the programs in the Example Book achieve. I started teaching computational physics before Numerical Recipes was published.
If I were starting again, I would make heavy use of both The Art of Scientific Computing and of the Example Book. Every computational physics teaching laboratory should have both volumes: the programs in the Example Book are included on the CD ROM, but the extra commentary in the book itself is of considerable value. P Borcherds
Acceleration for 2D time-domain elastic full waveform inversion using a single GPU card
NASA Astrophysics Data System (ADS)
Jiang, Jinpeng; Zhu, Peimin
2018-05-01
Full waveform inversion (FWI) is a challenging procedure due to the high computational cost of the modeling, especially in the elastic case. The graphics processing unit (GPU) has become a popular device for high-performance computing (HPC). To reduce the long computation time, we design and implement a GPU-based 2D elastic FWI (EFWI) in the time domain using a single GPU card. We parallelize the forward modeling and gradient calculations using the CUDA programming language. To overcome the limitation of the relatively small global memory on the GPU, a boundary-saving strategy is exploited to reconstruct the forward wavefield. Moreover, the L-BFGS optimization method used in the inversion accelerates the convergence of the misfit function. A multiscale inversion strategy is employed in the workflow to obtain accurate inversion results. In our tests, the GPU-based implementations using a single GPU device achieve >15 times speedup in forward modeling, and about 12 times speedup in gradient calculation, compared with eight-core CPU implementations optimized by OpenMP. The results from the GPU implementations are verified to have sufficient accuracy by comparison with the results obtained from the CPU implementations.
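The gradient construction that dominates the reported GPU time can be summarized in a few lines. The following NumPy sketch shows the zero-lag cross-correlation of forward and adjoint wavefields that forms one common FWI gradient parameterization; the array sizes, the random stand-in wavefields, and the parameterization choice are illustrative assumptions, not the authors' CUDA code:

import numpy as np

# Stand-in forward and adjoint wavefield snapshots (nt time steps on an nz-by-nx grid).
nt, nz, nx = 200, 64, 64
dt = 1e-3
u = np.random.rand(nt, nz, nx)    # forward wavefield (reconstructed via boundary saving in the paper)
lam = np.random.rand(nt, nz, nx)  # adjoint wavefield (back-propagated data residuals)

# Zero-lag cross-correlation of d^2u/dt^2 with the adjoint field, summed over time steps.
utt = np.gradient(np.gradient(u, dt, axis=0), dt, axis=0)
grad = np.sum(utt * lam, axis=0)  # model gradient image, shape (nz, nx)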
FPGA-Based, Self-Checking, Fault-Tolerant Computers
NASA Technical Reports Server (NTRS)
Some, Raphael; Rennels, David
2004-01-01
A proposed computer architecture would exploit the capabilities of commercially available field-programmable gate arrays (FPGAs) to enable computers to detect and recover from bit errors. The main purpose of the proposed architecture is to enable fault-tolerant computing in the presence of single-event upsets (SEUs). [An SEU is a spurious bit flip (also called a soft error) caused by a single impact of ionizing radiation.] The architecture would also enable recovery from some soft errors caused by electrical transients and, to some extent, from intermittent and permanent (hard) errors caused by aging of electronic components. A typical FPGA of the current generation contains one or more complete processor cores, memories, and high-speed serial input/output (I/O) channels, making it possible to shrink a board-level processor node to a single integrated-circuit chip. Custom, highly efficient microcontrollers, general-purpose computers, custom I/O processors, and signal processors can be rapidly and efficiently implemented by use of FPGAs. Unfortunately, FPGAs are susceptible to SEUs. Prior efforts to mitigate the effects of SEUs have yielded solutions that degrade performance of the system and require support from external hardware and software. In comparison with other fault-tolerant-computing architectures (e.g., triple modular redundancy), the proposed architecture could be implemented with less circuitry and lower power demand. Moreover, the fault-tolerant computing functions would require only minimal support from circuitry outside the central processing units (CPUs) of computers, would not require any software support, and would be largely transparent to software and to other computer hardware. There would be two types of modules: a self-checking processor module and a memory system (see figure). The self-checking processor module would be implemented on a single FPGA and would be capable of detecting its own internal errors. It would contain two CPUs executing identical programs in lock step, with comparison of their outputs to detect errors. It would also contain various cache and local-memory circuits, communication circuits, and configurable special-purpose processors that would use self-checking checkers. (The basic principle of the self-checking checker method is to utilize logic circuitry that generates error signals whenever there is an error in either the checker or the circuit being checked.) The memory system would comprise a main memory and a hardware-controlled check-pointing system (CPS) based on a buffer memory denoted the recovery cache. The main memory would contain random-access memory (RAM) chips and FPGAs that would, in addition to everything else, implement double-error-detecting and single-error-correcting memory functions to enable recovery from single-bit errors.
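A software analogue of the lock-step scheme is easy to state: execute two copies of the same computation on identical inputs and treat any divergence of the outputs as a detected fault. The sketch below is purely illustrative (the function names are invented); the actual proposal compares CPU outputs in FPGA hardware on every cycle:

def self_checking_step(step, state_a, state_b, inputs):
    # Run the two redundant copies on identical inputs, as lock-stepped CPUs would.
    out_a = step(state_a, inputs)
    out_b = step(state_b, inputs)
    if out_a != out_b:
        # In the proposed hardware this error signal would trigger rollback to a
        # checkpoint held in the recovery cache rather than raising an exception.
        raise RuntimeError("lock-step mismatch: fault detected")
    return out_a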
Optimization strategies for molecular dynamics programs on Cray computers and scalar work stations
NASA Astrophysics Data System (ADS)
Unekis, Michael J.; Rice, Betsy M.
1994-12-01
We present results of timing runs and different optimization strategies for a prototype molecular dynamics program that simulates shock waves in a two-dimensional (2-D) model of a reactive energetic solid. The performance of the program may be improved substantially by simple changes to the Fortran or by employing various vendor-supplied compiler optimizations. The optimum strategy varies among the machines used and will vary depending upon the details of the program. The effect of various compiler options and vendor-supplied subroutine calls is demonstrated. Comparison is made between two scalar workstations (IBM RS/6000 Model 370 and Model 530) and several Cray supercomputers (X-MP/48, Y-MP8/128, and C-90/16256). We find that for a scientific application program dominated by sequential, scalar statements, a relatively inexpensive high-end workstation such as the IBM RS/6000 RISC series will outperform single-processor performance of the Cray X-MP/48 and perform competitively with single-processor performance of the Y-MP8/128 and C-90/16256.
NASA Technical Reports Server (NTRS)
Huff, H.; You, Z.; Williams, T.; Nichols, T.; Attia, J.; Fogarty, T. N.; Kirby, K.; Wilkins, R.; Lawton, R.
1998-01-01
As integrated circuits become more sensitive to charged particles and neutrons, anomalous performance due to single event effects (SEE) is a concern and requires experimental verification and quantification. The Center for Applied Radiation Research (CARR) at Prairie View A&M University has developed experiments as a participant in the NASA ER-2 Flight Program, the APEX balloon flight program and the Student Launch Program. Other high altitude and ground level experiments of interest to DoD and commercial applications are being developed. The experiment characterizes the SEE behavior of high-speed, high-density SRAMs. The system includes a PC-104 computer unit, an optical drive for storage, a test board with the components under test, and a latchup detection and reset unit. The test program continuously monitors the stored checkerboard data pattern in the SRAMs and records errors. Since both the computer and the optical drive contain integrated circuits, they are also vulnerable to radiation effects. A latchup detection unit with discrete components will monitor the test program and reset the system when necessary. The first results will be obtained from the NASA ER-2 flights, which are now planned to take place in early 1998 from Dryden Research Center in California. The series of flights, at altitudes up to 70,000 feet, and a variety of flight profiles should yield a distribution of conditions for correlating SEEs. SEE measurements will be performed from the time of aircraft power-up on the ground throughout the flight regime until systems power-off after landing.
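The checkerboard monitoring loop described can be illustrated with a short sketch; the pattern bytes, memory size, and the injected upset are assumptions for demonstration, since the flight code ran against real SRAM parts:

PATTERN = (0xAA, 0x55)  # alternating checkerboard bytes

def write_checkerboard(mem):
    for i in range(len(mem)):
        mem[i] = PATTERN[i % 2]

def scan_for_upsets(mem):
    errors = []
    for i, byte in enumerate(mem):
        expected = PATTERN[i % 2]
        if byte != expected:
            errors.append((i, byte ^ expected))  # address and which bits flipped
            mem[i] = expected                    # rewrite so the next scan starts clean
    return errors

sram = bytearray(32 * 1024)
write_checkerboard(sram)
sram[100] ^= 0x04               # inject a single-bit upset for illustration
print(scan_for_upsets(sram))    # [(100, 4)]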
JETSPIN: A specific-purpose open-source software for simulations of nanofiber electrospinning
NASA Astrophysics Data System (ADS)
Lauricella, Marco; Pontrelli, Giuseppe; Coluzza, Ivan; Pisignano, Dario; Succi, Sauro
2015-12-01
We present the open-source computer program JETSPIN, specifically designed to simulate the electrospinning process of nanofibers. Its capabilities are shown with proper reference to the underlying model, as well as a description of the relevant input variables and associated test-case simulations. The various interactions included in the electrospinning model implemented in JETSPIN are discussed in detail. The code is designed to exploit different computational architectures, from single to parallel processor workstations. This paper provides an overview of JETSPIN, focusing primarily on its structure, parallel implementations, functionality, performance, and availability.
A PC-based single-ADC multi-parameter data acquisition system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woodring, M.; Kegel, G.H.R.; Egan, J.J.
1995-10-01
A personal computer (PC) based multiparameter data acquisition system using the Microsoft Windows operating environment has been designed and constructed. An IBM AT compatible personal computer with an Intel 486DX5 microprocessor was combined with a National Instruments AT-DIO-32 digital I/O card, a single Canberra 8713 ADC with 13-bit resolution, and a modified Canberra 8223 8-input analog multiplexer to acquire data from experiments carried out at the UML Van de Graaff accelerator. The accelerator data acquisition (ADAC) computer environment was programmed in Microsoft Visual BASIC for use in Windows. ADAC allows event-mode data acquisition with up to eight parameters (modifiable to 64) and the simultaneous display of parameters during acquisition. Additional features of ADAC include replay of event-mode data and graphical analysis/display of data. The ADAC environment is easy to upgrade or expand, inexpensive to implement, and is specifically designed to meet the needs of nuclear spectroscopy.
Towards the simulation of molecular collisions with a superconducting quantum computer
NASA Astrophysics Data System (ADS)
Geller, Michael
2013-05-01
I will discuss the prospects for the use of large-scale, error-corrected quantum computers to simulate complex quantum dynamics such as molecular collisions. This will likely require millions of qubits. I will also discuss an alternative approach [M. R. Geller et al., arXiv:1210.5260] that is ideally suited for today's superconducting circuits, which uses the single-excitation subspace (SES) of a system of n tunably coupled qubits. The SES method allows many operations in the unitary group SU(n) to be implemented in a single step, bypassing the need for elementary gates, thereby making large computations possible without error correction. The method enables universal quantum simulation, including simulation of the time-dependent Schrodinger equation, and we argue that a 1000-qubit SES processor should be capable of achieving quantum speedup relative to a petaflop supercomputer. We speculate on the utility and practicality of such a simulator for atomic and molecular collision physics. Work supported by the US National Science Foundation CDI program.
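Restricting n coupled qubits to states with exactly one excitation reduces the dynamics to an n-by-n matrix problem, which is why SU(n) operations become single-step. A small NumPy/SciPy sketch of this reduction, where the couplings, frequencies, and evolution time are made-up values for illustration:

import numpy as np
from scipy.linalg import expm

n = 4
H = np.zeros((n, n))
for i in range(n - 1):
    H[i, i + 1] = H[i + 1, i] = 0.1          # assumed qubit-qubit coupling strengths
np.fill_diagonal(H, [1.0, 1.1, 0.9, 1.05])   # assumed qubit frequencies

psi0 = np.zeros(n)
psi0[0] = 1.0                                # excitation starts on qubit 0
psi_t = expm(-1j * H * 5.0) @ psi0           # evolve under exp(-iHt), hbar = 1
print(np.abs(psi_t) ** 2)                    # occupation probabilities at t = 5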
Ahamed, Nizam U; Sundaraj, Kenneth; Poo, Tarn S
2013-03-01
This article describes the design of a robust, inexpensive, easy-to-use, small, and portable online electromyography acquisition system for monitoring electromyography signals during rehabilitation. This single-channel (one-muscle) system was connected via the universal serial bus port to a programmable handheld tablet personal computer running the Windows operating system for storage and analysis of the data by the end user. The raw electromyography signals were amplified in order to convert them to an observable scale. The inherent 50 Hz noise (Malaysia) from power-line electromagnetic interference was then eliminated using a single hybrid-IC notch filter. These signals were sampled by a signal processing module and converted into 24-bit digital data. An algorithm was developed and programmed to transmit the digital data to the computer, where it was reassembled and displayed using software. Finally, the device was furnished with a graphical user interface to display the streaming muscle-activity signal on a handheld tablet personal computer. This battery-operated system was tested on the biceps brachii muscles of 20 healthy subjects, and the results were compared to those obtained with a commercial single-channel (one-muscle) electromyography acquisition system. For activities involving muscle contractions, the results obtained using the developed device were found to be comparable, across various statistical parameters, to those from a commercially available physiological signal monitoring system for both male and female subjects. In addition, the key advantage of this system over conventional desktop personal computer-based acquisition systems is its portability, due to the use of a tablet personal computer in which the results are accessible graphically as well as stored in text (comma-separated value) form.
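The 50 Hz power-line rejection performed by the hardware notch filter can be mimicked digitally. The sketch below uses SciPy's IIR notch design; the sampling rate, Q factor, and the synthetic EMG-plus-hum signal are assumptions, since the paper implements the filter in a hybrid IC:

import numpy as np
from scipy import signal

fs = 1000.0                                   # assumed sampling rate, Hz
b, a = signal.iirnotch(w0=50.0, Q=30.0, fs=fs)  # narrow notch centered at 50 Hz

t = np.arange(0, 1, 1 / fs)
emg = np.random.randn(t.size) + 2.0 * np.sin(2 * np.pi * 50 * t)  # EMG + mains hum
clean = signal.filtfilt(b, a, emg)            # zero-phase filtering removes the hum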
Analysis of a generalized dual reflector antenna system using physical optics
NASA Technical Reports Server (NTRS)
Acosta, Roberto J.; Lagin, Alan R.
1992-01-01
Reflector antennas are widely used in communication satellite systems because they provide high gain at low cost. Offset-fed single paraboloids and dual reflector offset Cassegrain and Gregorian antennas with multiple focal region feeds provide a simple, blockage-free means of forming multiple, shaped, and isolated beams with low sidelobes. Such antennas are applicable to communications satellite frequency reuse systems and earth stations requiring access to several satellites. While the single offset paraboloid has been the most extensively used configuration for the satellite multiple-beam antenna, the trend toward large apertures requiring minimum scanned-beam degradation over the 18-degree field of view needed for full earth coverage from geostationary orbit may lead to impractically long focal lengths and large feed arrays. Dual reflector antennas offer packaging advantages and more degrees of design freedom to improve beam scanning and cross-polarization properties. The Cassegrain and Gregorian antennas are the most commonly used dual reflector antennas. A computer program for calculating the secondary pattern and directivity of a generalized dual reflector antenna system was developed and implemented at LeRC. The theoretical foundation for this program is based on the use of physical optics methodology for describing the induced currents on the sub-reflector and main reflector. The resulting induced currents on the main reflector are integrated to obtain the antenna far-zone electric fields. The computer program is verified against other physical optics programs and against measured antenna patterns. The comparison shows good agreement in far-field sidelobe reproduction and directivity.
The M-Integral for Computing Stress Intensity Factors in Generally Anisotropic Materials
NASA Technical Reports Server (NTRS)
Warzynek, P. A.; Carter, B. J.; Banks-Sills, L.
2005-01-01
The objective of this project is to develop and demonstrate a capability for computing stress intensity factors in generally anisotropic materials. These objectives have been met. The primary deliverable of this project is this report and the information it contains. In addition, we have delivered the source code for a subroutine that will compute stress intensity factors for anisotropic materials encoded in both the C and Python programming languages and made available a version of the FRANC3D program that incorporates this subroutine. Single crystal super alloys are commonly used for components in the hot sections of contemporary jet and rocket engines. Because these components have a uniform atomic lattice orientation throughout, they exhibit anisotropic material behavior. This means that stress intensity solutions developed for isotropic materials are not appropriate for the analysis of crack growth in these materials. Until now, a general numerical technique did not exist for computing stress intensity factors of cracks in anisotropic materials and cubic materials in particular. Such a capability was developed during the project and is described and demonstrated herein.
Obtaining Reliable Predictions of Terrestrial Energy Coupling From Real-Time Solar Wind Measurement
NASA Technical Reports Server (NTRS)
Weimer, Daniel R.
2001-01-01
The first draft of a manuscript titled "Variable time delays in the propagation of the interplanetary magnetic field" has been completed, for submission to the Journal of Geophysical Research. In the preparation of this manuscript all data and analysis programs have been updated to the highest temporal resolution possible, at 16 seconds or better. The program that computes the "measured" IMF propagation time delays from these data has also undergone another improvement. In another significant development, a technique has been developed to predict IMF phase-plane orientations, and the resulting time delays, using only measurements from a single satellite at L1. The "minimum variance" method is used for this computation. Further work will be done on optimizing the choice of several parameters for the minimum variance calculation.
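The minimum-variance technique referred to amounts to an eigen-decomposition of the magnetic-field covariance matrix, with the phase-plane normal taken along the direction of least variance. A compact NumPy sketch, where the synthetic field samples stand in for real L1 IMF data:

import numpy as np

B = np.random.randn(600, 3)        # stand-in Bx, By, Bz time series from the satellite
M = np.cov(B, rowvar=False)        # 3 x 3 magnetic covariance matrix
evals, evecs = np.linalg.eigh(M)   # eigh returns eigenvalues in ascending order
normal = evecs[:, 0]               # minimum-variance direction = phase-plane normal

# The propagation delay would then follow from the plane normal, e.g.
# dt = n . (r_earth - r_sat) / (n . V_sw) for solar wind velocity V_sw.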
NASA Technical Reports Server (NTRS)
Baumeister, K. J.; Horowitz, S. J.
1982-01-01
An iterative finite element integral technique is used to predict the sound field radiated from the JT15D turbofan inlet. The sound field is divided into two regions: the sound field within and near the inlet which is computed using the finite element method and the radiation field beyond the inlet which is calculated using an integral solution technique. The velocity potential formulation of the acoustic wave equation was employed in the program. For some single mode JT15D data, the theory and experiment are in good agreement for the far field radiation pattern as well as suppressor attenuation. Also, the computer program is used to simulate flight effects that cannot be performed on a ground static test stand.
Calculation of four-particle harmonic-oscillator transformation brackets
NASA Astrophysics Data System (ADS)
Germanas, D.; Kalinauskas, R. K.; Mickevičius, S.
2010-02-01
A procedure for precise calculation of the three- and four-particle harmonic-oscillator (HO) transformation brackets is presented. The analytical expressions of the four-particle HO transformation brackets are given. The computer code for the calculation of HO transformation brackets proves to be quick and efficient, and produces results with small numerical uncertainties.
Program summary
Program title: HOTB
Catalogue identifier: AEFQ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFQ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1247
No. of bytes in distributed program, including test data, etc.: 6659
Distribution format: tar.gz
Programming language: FORTRAN 90
Computer: Any computer with a FORTRAN 90 compiler
Operating system: Windows, Linux, FreeBSD, True64 Unix
RAM: 8 MB
Classification: 17.17
Nature of problem: Calculation of the three-particle and four-particle harmonic-oscillator transformation brackets.
Solution method: The method is based on compact expressions for the three-particle harmonic-oscillator brackets, presented in [1], and expressions for the four-particle harmonic-oscillator brackets, presented in this paper.
Restrictions: The three- and four-particle harmonic-oscillator transformation brackets up to e = 28.
Unusual features: Possibility of calculating the four-particle harmonic-oscillator transformation brackets.
Running time: Less than one second for a single harmonic-oscillator transformation bracket.
References: [1] G.P. Kamuntavičius, R.K. Kalinauskas, B.R. Barret, S. Mickevičius, D. Germanas, Nuclear Physics A 695 (2001) 191.
User's guide to four-body and three-body trajectory optimization programs
NASA Technical Reports Server (NTRS)
Pu, C. L.; Edelbaum, T. N.
1974-01-01
A collection of computer programs and subroutines written in FORTRAN to calculate 4-body (sun-earth-moon-space) and 3-body (earth-moon-space) optimal trajectories is presented. The programs incorporate a variable-step integration technique and a quadrature formula to correct single-step errors. The programs provide the capability to solve the initial value problem; the two-point boundary value problem of a transfer from a given initial position to a given final position in fixed time; the optimal 2-impulse transfer from an earth parking orbit of given inclination to a given final position and velocity in fixed time; and the optimal 3-impulse transfer from a given position to a given final position and velocity in fixed time.
NASA Technical Reports Server (NTRS)
1974-01-01
The feasibility of an evolutionary development of a single-axis gimbal star tracker from prior two-axis gimbal star tracker based system applications is evaluated. Detailed evaluation of the star tracker gimbal encoder is considered. A brief system description is given, including the aspects of tracker evolution and encoder evaluation. System analysis includes evaluation of star availability and mounting constraints for the geosynchronous orbit application, and a covariance simulation analysis to evaluate performance potential. Star availability and covariance analysis digital computer programs are included.
Smartphone Microscopy of Parasite Eggs Accumulated into a Single Field of View.
Sowerby, Stephen J; Crump, John A; Johnstone, Maree C; Krause, Kurt L; Hill, Philip C
2016-01-01
A Nokia Lumia 1020 cellular phone (Microsoft Corp., Auckland, New Zealand) was configured to image the ova of Ascaris lumbricoides converged into a single field of view but on different focal planes. The phone was programmed to acquire images at different distances and, using public domain computer software, composite images were created that brought all the eggs into sharp focus. This proof of concept informs a framework for field-deployable, point of care monitoring of soil-transmitted helminths. © The American Society of Tropical Medicine and Hygiene.
Buoyancy-Driven Instabilities in Single-Bubble Sonoluminescence
NASA Technical Reports Server (NTRS)
Matula, Thomas J.
2003-01-01
The principal objectives of this study are to determine how gravity affects the emission of light from single-bubble sonoluminescence (SBSL), and whether or not the bubble extinction is directly related to gravity. Our experimental task involves designing glass or quartz spherical levitation cells that generate very stable SL bubbles. The cells must have minimal vibration and some temperature control. The experimental system will reside in a light-tight enclosure. Aside from acceleration, the frequency, pressure amplitude, and light intensity must be measured. A computer program will be constructed to perform all aspects of the experiment.
A programmable five qubit quantum computer using trapped atomic ions
NASA Astrophysics Data System (ADS)
Debnath, Shantanu
2017-04-01
In order to harness the power of quantum information processing, several candidate systems have been investigated, and tailored to demonstrate only specific computations. In my thesis work, we construct a general-purpose multi-qubit device using a linear chain of trapped ion qubits, which in principle can be programmed to run any quantum algorithm. To achieve such flexibility, we develop a pulse shaping technique to realize a set of fully connected two-qubit rotations that entangle arbitrary pairs of qubits using multiple motional modes of the chain. Following a computation architecture, such highly expressive two-qubit gates along with arbitrary single-qubit rotations can be used to compile modular universal logic gates that are effected by targeted optical fields and hence can be reconfigured according to any algorithm circuit programmed in the software. As a demonstration, we run the Deutsch-Jozsa and Bernstein-Vazirani algorithms, and a fully coherent quantum Fourier transform, which we use to solve the `period finding' and `quantum phase estimation' problems. Combining these results with recent demonstrations of quantum fault-tolerance, Grover's search algorithm, and simulation of boson hopping establishes the versatility of such a computation module that can potentially be connected to other modules for future large-scale computations.
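The Bernstein-Vazirani demonstration mentioned is easy to check classically for small n: with a phase oracle f(x) = s·x (mod 2), a single oracle query between two Hadamard layers returns the hidden string s. A toy statevector simulation in pure NumPy (not the trapped-ion control software):

import numpy as np
from itertools import product

def bernstein_vazirani(s_bits):
    n = len(s_bits)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)                      # n-qubit Hadamard transform
    psi = Hn @ np.eye(2 ** n)[0]                 # H^n applied to |0...0>
    for idx, x in enumerate(product([0, 1], repeat=n)):
        if sum(b * s for b, s in zip(x, s_bits)) % 2:
            psi[idx] *= -1                       # phase oracle applies (-1)^(s.x)
    psi = Hn @ psi                               # final Hadamard layer maps to |s>
    return list(product([0, 1], repeat=n))[int(np.argmax(np.abs(psi)))]

print(bernstein_vazirani([1, 0, 1]))             # -> (1, 0, 1), one oracle query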
Programming chemistry in DNA-addressable bioreactors
Fellermann, Harold; Cardelli, Luca
2014-01-01
We present a formal calculus, termed the chemtainer calculus, able to capture the complexity of compartmentalized reaction systems such as populations of possibly nested vesicular compartments. Compartments contain molecular cargo as well as surface markers in the form of DNA single strands. These markers serve as compartment addresses and allow for their targeted transport and fusion, thereby enabling reactions of previously separated chemicals. The overall system organization allows for the set-up of programmable chemistry in microfluidic or other automated environments. We introduce a simple sequential programming language whose instructions are motivated by state-of-the-art microfluidic technology. Our approach integrates electronic control, chemical computing and material production in a unified formal framework that is able to mimic the integrated computational and constructive capabilities of the subcellular matrix. We provide a non-deterministic semantics of our programming language that enables us to analytically derive the computational and constructive power of our machinery. This semantics is used to derive the sets of all constructable chemicals and supermolecular structures that emerge from different underlying instruction sets. Because our proofs are constructive, they can be used to automatically infer control programs for the construction of target structures from a limited set of resource molecules. Finally, we present an example of our framework from the area of oligosaccharide synthesis. PMID:25121647
Lee, Jae H.; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T.; Seo, Youngho
2014-01-01
The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum-likelihood expectation-maximization (MLEM) as used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge in computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal of eventually making it usable in a clinical setting. PMID:27081299
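Per iteration, the MLEM update being distributed here is two matrix-vector products plus element-wise operations, which is presumably why a sparse-linear-algebra engine like GraphX fits. A dense single-machine sketch of that update (matrix sizes and the guard constants are illustrative):

import numpy as np

def mlem(A, y, n_iter=50):
    # A: system matrix (n_bins x n_voxels); y: measured projection counts.
    x = np.ones(A.shape[1])                 # uniform initial image estimate
    sens = A.sum(axis=0)                    # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                        # forward projection of current estimate
        ratio = y / np.maximum(proj, 1e-12) # measured-to-estimated ratio
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative MLEM update
    return x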
Parallel Computation of the Jacobian Matrix for Nonlinear Equation Solvers Using MATLAB
NASA Technical Reports Server (NTRS)
Rose, Geoffrey K.; Nguyen, Duc T.; Newman, Brett A.
2017-01-01
Demonstrating speedup for parallel code on a multicore shared-memory PC can be challenging in MATLAB due to underlying parallel operations that are often opaque to the user. This can limit the potential for improvement of serial code even for so-called embarrassingly parallel applications. One such application is the computation of the Jacobian matrix inherent to most nonlinear equation solvers. Computation of this matrix represents the primary bottleneck in nonlinear solver speed, such that commercial finite element (FE) and multi-body-dynamic (MBD) codes attempt to minimize these computations. A timing study using MATLAB's Parallel Computing Toolbox was performed for numerical computation of the Jacobian. Several approaches for implementing parallel code were investigated, while only the single program multiple data (spmd) method using composite objects provided positive results. Parallel code speedup is demonstrated, but the goal of linear speedup through the addition of processors was not achieved due to PC architecture.
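Because each Jacobian column needs only one perturbed function evaluation, the computation parallelizes naturally, which is what the spmd approach exploits. A Python process-pool sketch of the same idea; the residual function f and the step size are invented examples, since the study itself used MATLAB's Parallel Computing Toolbox:

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def f(x):
    # Example residual function of a nonlinear system (illustrative only).
    return np.array([x[0] ** 2 + x[1], np.sin(x[0]) - x[1] ** 3])

def jac_column(args):
    # One forward-difference column: perturb variable j and difference against f(x).
    x, fx, j, h = args
    xp = x.copy()
    xp[j] += h
    return (f(xp) - fx) / h

def jacobian(x, h=1e-7):
    fx = f(x)
    tasks = [(x, fx, j, h) for j in range(x.size)]
    with ProcessPoolExecutor() as pool:        # columns computed by independent workers
        cols = list(pool.map(jac_column, tasks))
    return np.column_stack(cols)

if __name__ == "__main__":                     # guard required for process-based pools
    print(jacobian(np.array([1.0, 2.0])))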
Gupta, Parth Sarthi Sen; Banerjee, Shyamashree; Islam, Rifat Nawaz Ul; Mondal, Sudipta; Mondal, Buddhadev; Bandyopadhyay, Amal K
2014-01-01
In the genomic and proteomic era, efficient and automated analysis of the sequence properties of proteins has become an important task in bioinformatics. There are general public licensed (GPL) software tools that perform part of the job. However, computation of the mean properties of a large number of orthologous sequences is not possible with the above-mentioned GPL tools. Further, there is no GPL software or server that can calculate window-dependent sequence properties for a large number of sequences in a single run. To overcome these limitations, we have developed a standalone program, PHYSICO, which performs various stages of computation in a single run, based on the type of input provided in either RAW-FASTA or BLOCK-FASTA format, and produces Excel output for: a) composition, class composition, mean molecular weight, isoelectric point, aliphatic index and GRAVY; b) column-based compositions, variability and difference matrix; c) 25 kinds of window-dependent sequence properties. The program is fast, efficient, error-free and user-friendly. Calculation of the mean and standard deviation of homologous sequence sets, for comparison purposes when relevant, is another attribute of the program; a property seldom seen in existing GPL software. Availability: PHYSICO is freely available to non-commercial/academic users on formal request to the corresponding author akbanerjee@biotech.buruniv.ac.in PMID:24616564
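As an illustration of one property in PHYSICO's first output group, GRAVY is simply the mean Kyte-Doolittle hydropathy over all residues of a sequence; a minimal sketch, where the ten-residue sequence is an arbitrary example:

# Kyte-Doolittle hydropathy values for the 20 standard amino acids.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def gravy(seq):
    # Grand average of hydropathy: mean KD value over the sequence.
    return sum(KD[aa] for aa in seq) / len(seq)

print(round(gravy("MKTAYIAKQR"), 3))  # -0.78 for this arbitrary example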
The role of the host in a cooperating mainframe and workstation environment, volumes 1 and 2
NASA Technical Reports Server (NTRS)
Kusmanoff, Antone; Martin, Nancy L.
1989-01-01
In recent years, advancements in computer systems have prompted a move from centralized computing based on timesharing a large mainframe computer to distributed computing based on a connected set of engineering workstations. A major factor in this advancement is the increased performance and lower cost of engineering workstations. The shift from centralized to distributed computing has led to challenges associated with the residency of application programs within the system. In a combined system of multiple engineering workstations attached to a mainframe host, the question arises of how a system designer should assign applications between the larger mainframe host and the smaller, yet powerful, workstations. The concepts related to real-time data processing are analyzed, and systems are described which use a host mainframe and a number of engineering workstations interconnected by a local area network. In most cases, distributed systems can be classified as having a single function or multiple functions and as executing programs in real time or non-real time. In a system of multiple computers, the degree of autonomy of the computers is important; a system with one master control computer generally differs in reliability, performance, and complexity from a system in which all computers share control. This research is concerned with generating general criteria and principles for software residency decisions (host or workstation) for a diverse yet coupled group of users (the clustered workstations) which may need the use of a shared resource (the mainframe) to perform their functions.
ADMAP (automatic data manipulation program)
NASA Technical Reports Server (NTRS)
Mann, F. I.
1971-01-01
Instructions are presented on the use of ADMAP (automatic data manipulation program), an aerospace data manipulation computer program. The program was developed to aid in processing, reducing, plotting, and publishing electric propulsion trajectory data generated by the low thrust optimization program, HILTOP. The program has the option of generating SC4020 electric plots, and therefore requires the SC4020 routines to be available at execution time (even if not used). Several general routines are present, including a cubic spline interpolation routine, an electric plotter dash line drawing routine, and single-parameter and double-parameter sorting routines. Many routines are tailored for the manipulation and plotting of electric propulsion data, including an automatic scale selection routine, an automatic curve labelling routine, and an automatic graph titling routine. Data are accepted from either punched cards or magnetic tape.
Verifiable Measurement-Only Blind Quantum Computing with Stabilizer Testing.
Hayashi, Masahito; Morimae, Tomoyuki
2015-11-27
We introduce a simple protocol for verifiable measurement-only blind quantum computing. Alice, a client, can perform only single-qubit measurements, whereas Bob, a server, can generate and store entangled many-qubit states. Bob generates copies of a graph state, which is a universal resource state for measurement-based quantum computing, and sends Alice each qubit of them one by one. Alice adaptively measures each qubit according to her program. If Bob is honest, he generates the correct graph state, and, therefore, Alice can obtain the correct computation result. Regarding the security, whatever Bob does, Bob cannot get any information about Alice's computation because of the no-signaling principle. Furthermore, malicious Bob does not necessarily send the copies of the correct graph state, but Alice can check the correctness of Bob's state by directly verifying the stabilizers of some copies.
NASA Technical Reports Server (NTRS)
Rendell, Alistair P.; Lee, Timothy J.
1991-01-01
The analytic energy gradient for the single and double excitation coupled-cluster (CCSD) wave function has been reformulated and implemented in a new set of programs. The reformulated set of gradient equations have a smaller computational cost than any previously published. The iterative solution of the linear equations and the construction of the effective density matrices are fully vectorized, being based on matrix multiplications. The new method has been used to investigate the Cl2O2 molecule, which has recently been postulated as an important intermediate in the destruction of ozone in the stratosphere. In addition to reporting computational timings, the CCSD equilibrium geometries, harmonic vibrational frequencies, infrared intensities, and relative energetics of three isomers of Cl2O2 are presented.
Algorithm-Based Fault Tolerance for Numerical Subroutines
NASA Technical Reports Server (NTRS)
Tumon, Michael; Granat, Robert; Lou, John
2007-01-01
A software library implements a new methodology of detecting faults in numerical subroutines, thus enabling application programs that contain the subroutines to recover transparently from single-event upsets. The software library in question is fault-detecting middleware that is wrapped around the numerical subroutines. Conventional serial versions (based on LAPACK and FFTW) and a parallel version (based on ScaLAPACK) exist. The source code of the application program that contains the numerical subroutines is not modified, and the middleware is transparent to the user. The methodology used is a type of algorithm-based fault tolerance (ABFT). In ABFT, a checksum is computed before a computation and compared with the checksum of the computational result; an error is declared if the difference between the checksums exceeds some threshold. Novel normalization methods are used in the checksum comparison to ensure correct fault detections independent of algorithm inputs. In tests of this software reported in the peer-reviewed literature, this library was shown to enable detection of 99.9 percent of significant faults while generating no false alarms.
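For a concrete instance of a checksum scheme with a normalized comparison, consider matrix multiplication: augmenting A with a column-sum row means the checksum row of the product must equal the column sums of the result. The sketch below is a generic ABFT illustration under assumed normalization and threshold choices; the library's actual methods are not published in this abstract:

import numpy as np

def checked_matmul(A, B, tol=1e-10):
    Ac = np.vstack([A, A.sum(axis=0)])    # checksum-augmented A: extra column-sum row
    Cc = Ac @ B
    C, check_row = Cc[:-1], Cc[-1]
    # Normalize the discrepancy so the threshold is independent of input magnitude.
    err = np.abs(check_row - C.sum(axis=0)) / (np.abs(Cc).max() + 1e-300)
    if np.any(err > tol):
        raise ArithmeticError("ABFT checksum mismatch: fault detected")
    return C

rng = np.random.default_rng(0)
C = checked_matmul(rng.random((50, 40)), rng.random((40, 30)))  # passes silently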
Numerical solution of differential equations by artificial neural networks
NASA Technical Reports Server (NTRS)
Meade, Andrew J., Jr.
1995-01-01
Conventionally programmed digital computers can process numbers with great speed and precision, but do not easily recognize patterns or imprecise or contradictory data. Instead of being programmed in the conventional sense, artificial neural networks (ANN's) are capable of self-learning through exposure to repeated examples. However, the training of an ANN can be a time consuming and unpredictable process. A general method is being developed by the author to mate the adaptability of the ANN with the speed and precision of the digital computer. This method has been successful in building feedforward networks that can approximate functions and their partial derivatives from examples in a single iteration. The general method also allows the formation of feedforward networks that can approximate the solution to nonlinear ordinary and partial differential equations to desired accuracy without the need of examples. It is believed that continued research will produce artificial neural networks that can be used with confidence in practical scientific computing and engineering applications.
An efficient solver for large structured eigenvalue problems in relativistic quantum chemistry
NASA Astrophysics Data System (ADS)
Shiozaki, Toru
2017-01-01
We report an efficient program for computing the eigenvalues and symmetry-adapted eigenvectors of very large quaternionic (or Hermitian skew-Hamiltonian) matrices, with which structure-preserving diagonalisation of matrices of dimension N > 10,000 is now routine on a single computer node. Such matrices appear frequently in relativistic quantum chemistry owing to time-reversal symmetry. The implementation is based on a blocked version of the Paige-Van Loan algorithm, which allows us to use the Level 3 BLAS subroutines for most of the computations. Taking advantage of the symmetry, the program is faster by up to a factor of 2 than state-of-the-art implementations of complex Hermitian diagonalisation; diagonalising a 12,800 × 12,800 matrix took 42.8 (9.5) and 85.6 (12.6) minutes with 1 CPU core (16 CPU cores) using our symmetry-adapted solver and Intel Math Kernel Library's ZHEEV, which is not structure-preserving, respectively. The source code is publicly available under the FreeBSD licence.
Konstantinidis, Evdokimos I; Frantzidis, Christos A; Pappas, Costas; Bamidis, Panagiotis D
2012-07-01
In this paper the feasibility of adopting graphics processing units (GPUs) for real-time emotion-aware computing is investigated, with the aim of boosting the time-consuming computations employed in such applications. The proposed methodology was employed in the analysis of encephalographic and electrodermal data gathered while participants passively viewed emotionally evocative stimuli. The GPU effectiveness when processing electroencephalographic and electrodermal recordings is demonstrated by comparing the execution time of chaos/complexity analysis through nonlinear dynamics (multi-channel correlation dimension/D2) and signal processing algorithms (computation of skin conductance level/SCL) across various popular programming environments. Apart from the beneficial role of parallel programming, the adoption of special design techniques regarding memory management may further enhance the time reduction, which approaches a factor of 30 in comparison with ANSI C (single-core sequential execution). Therefore, the use of GPU parallel capabilities offers a reliable and robust solution for real-time sensing of the user's affective state. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Accelerating Monte Carlo simulations with an NVIDIA ® graphics processor
NASA Astrophysics Data System (ADS)
Martinsen, Paul; Blaschke, Johannes; Künnemeyer, Rainer; Jordan, Robert
2009-10-01
Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general-purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA 8800 GT graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectory of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer.
Program summary
Program title: Phoogle-C/Phoogle-G
Catalogue identifier: AEEB_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEB_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 51 264
No. of bytes in distributed program, including test data, etc.: 2 238 805
Distribution format: tar.gz
Programming language: C++
Computer: Designed for Intel PCs. Phoogle-G requires an NVIDIA graphics card with support for CUDA 1.1
Operating system: Windows XP
Has the code been vectorised or parallelized?: Phoogle-G is written for SIMD architectures
RAM: 1 GB
Classification: 21.1
External routines: Charles Karney's random number library, the Microsoft Foundation Class library, and the NVIDIA CUDA library [1].
Nature of problem: The Monte Carlo technique is an effective algorithm for exploring the propagation of light in turbid media. However, accurate results require tracing the path of many photons within the media. The independence of photons naturally lends the Monte Carlo technique to implementation on parallel architectures. Generally, parallel computing can be expensive, but recent advances in consumer-grade graphics cards have opened the possibility of high-performance desktop parallel computing.
Solution method: In this pair of programmes we have implemented the Monte Carlo algorithm described by Prahl et al. [2] for photon transport in infinite scattering media to compare the performance of two readily accessible architectures: a standard desktop PC and a consumer-grade graphics card from NVIDIA.
Restrictions: The graphics-card implementation uses single-precision floating-point numbers for all calculations. Only photon transport from an isotropic point source is supported. The graphics-card version has no user interface; the simulation parameters must be set in the source code. The desktop version has a simple user interface, but some properties can only be accessed through an ActiveX client (such as Matlab).
Additional comments: The random number library used has an LGPL (http://www.gnu.org/copyleft/lesser.html) licence.
Running time: Runtime can range from minutes to months depending on the number of photons simulated and the optical properties of the medium.
References: [1] http://www.nvidia.com/object/cuda_home.html. [2] S. Prahl, M. Keijzer, S.L. Jacques, A. Welch, SPIE Institute Series 5 (1989) 102.
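The kernel both Phoogle versions spend their time in is the sampling of photon free paths between scattering events from an exponential law. A one-line NumPy illustration, with an arbitrary attenuation coefficient standing in for real optical properties:

import numpy as np

rng = np.random.default_rng(0)
mu_t = 10.0  # total attenuation coefficient, 1/cm (illustrative value)
# Free path lengths for one million photons: s = -ln(1 - u) / mu_t, u uniform in [0, 1).
steps = -np.log(1.0 - rng.random(1_000_000)) / mu_t
print(steps.mean())  # ~= 1/mu_t = 0.1 cm, the mean free path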
Architecture of the software for LAMOST fiber positioning subsystem
NASA Astrophysics Data System (ADS)
Peng, Xiaobo; Xing, Xiaozheng; Hu, Hongzhuan; Zhai, Chao; Li, Weimin
2004-09-01
The architecture of the software which controls the LAMOST fiber positioning subsystem is described. The software is composed of two parts: a main control program on a computer, and a unit controller program in the ROM of an MCS51 single-chip microcomputer. The functions of the software include client/server model establishment, observation planning, collision handling, data transmission, pulse generation, CCD control, image capture and processing, and data analysis. Particular attention is paid to the ways in which different parts of the software communicate. Software techniques for multithreading, socket programming, Microsoft Windows message handling, and serial communications are also discussed.
Suppression of combustion oscillations with mechanical damping devices
NASA Technical Reports Server (NTRS)
1971-01-01
Nonarray absorbing devices were investigated for use in rocket thrust chambers as instability suppressors. A theory for designing absorbing devices suitable for rocket application is derived, and a nonarray computer program is developed. The experimental program used to verify the theory is discussed. It is concluded that individual acoustical devices can be designed for maximum energy absorption, and it is recommended that single resonators be designed so that the ratio of the aperture diameter to the product of the quarter-wave length and cavity backing depth is less than one.
User's operating procedures. Volume 3: Projects directorate information programs
NASA Technical Reports Server (NTRS)
Haris, C. G.; Harris, D. K.
1985-01-01
A review of the user's operating procedures for the Scout project automatic data system, called SPADS, is presented. SPADS is the result of the past seven years of software development on a Prime minicomputer. SPADS was developed as a single-entry, multiple cross-reference data management and information retrieval system for the automation of project office tasks, including engineering, financial, managerial, and clerical support. This volume, the third of three, provides the instructions to operate the projects directorate information programs in data retrieval and file maintenance via user-friendly menu drivers.
The use of numerical programs in research and academic institutions
NASA Astrophysics Data System (ADS)
Scupi, A. A.
2016-08-01
This paper is built on the idea that numerical programs using computer models of physical processes can be used both for scientific research and for academic teaching to study different phenomena. Computational Fluid Dynamics (CFD) is used today on a large scale in research and academic institutions, and its development is not limited to computer simulations of fluid flow phenomena. Analytical solutions for most fluid dynamics problems are available only for ideal or simplified situations. CFD is based on the Navier-Stokes (N-S) equations characterizing the flow of a single phase of any fluid. For multiphase flows the integrated N-S equations are complemented with the equations of the Volume of Fluid (VOF) model and with energy equations. Different turbulence models were used in the paper, each with practical engineering applications: the flow around aerodynamic surfaces used as an unconventional propulsion system, multiphase flows in a settling chamber and pneumatic transport systems, heat transfer in a heat exchanger, etc. Some of the numerical results were validated by experimental results. Numerical programs are also used in academic institutions, where certain aspects of various phenomena are presented to students (Bachelor, Master, and PhD) for a better understanding of the phenomenon itself.
Neutron Transmission of Single-crystal Sapphire Filters
NASA Astrophysics Data System (ADS)
Adib, M.; Kilany, M.; Habib, N.; Fathallah, M.
2005-05-01
An additive formula is given that permits the calculation of the nuclear capture, thermal diffuse and Bragg scattering cross-sections as a function of sapphire temperature and crystal parameters. We have developed a computer program that allows calculations of the thermal neutron transmission for the sapphire rhombohedral structure and its equivalent trigonal structure. The calculated total cross-section values and effective attenuation coefficient for single-crystalline sapphire at different temperatures are compared with measured values. Overall agreement is found between the formula and experimental data. We discuss the use of a sapphire single crystal as a thermal neutron filter in terms of the optimum crystal thickness, mosaic spread, temperature, cutting plane and tuning for efficient transmission of thermal-reactor neutrons.
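The filter-thickness trade-off discussed follows the usual attenuation law T = exp(-N0 sigma_tot t). A small numerical illustration, where the number density and cross-section below are round illustrative values, not the paper's computed ones:

import numpy as np

N0 = 2.35e22         # sapphire molecules per cm^3 (illustrative round value)
sigma_tot = 1.0e-24  # total removal cross-section per molecule, cm^2 (illustrative, 1 barn)
for t_cm in (5.0, 10.0, 15.0):
    T = np.exp(-N0 * sigma_tot * t_cm)
    print(f"thickness {t_cm:4.1f} cm: transmission T = {T:.2f}")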
A versatile system for the rapid collection, handling and graphics analysis of multidimensional data
NASA Astrophysics Data System (ADS)
O'Brien, P. M.; Moloney, G.; O'Connor, A.; Legge, G. J. F.
1993-05-01
The aim of this work was to provide a versatile system for handling multiparameter data that may arise from a variety of experiments — nuclear, AMS, microprobe elemental analysis, 3D microtomography etc. Some of the most demanding requirements arise in the application of microprobes to quantitative elemental mapping and to microtomography. A system to handle data from such experiments has been under continuous development and use at MARC for the past 15 years. It has now been made adaptable to the needs of multiparameter (or single-parameter) experiments in general. The original system has been rewritten, greatly expanded and made much more powerful and faster by use of modern computer technology — a VME bus computer with a real-time operating system and a RISC workstation running Unix and the X Window System. This provides the necessary (i) power, speed and versatility, (ii) expansion and updating capabilities, (iii) standardisation and adaptability, (iv) coherent modular programming structures, (v) ability to interface to other programs and (vi) transparent operation at several levels, involving the use of menus, programmed function keys and powerful macro programming facilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chojnowski, Grzegorz, E-mail: gchojnowski@genesilico.pl; Waleń, Tomasz; University of Warsaw, Banacha 2, 02-097 Warsaw
2015-03-01
Brickworx is a computer program that builds crystal structure models of nucleic acid molecules using recurrent motifs, including double-stranded helices. In a first step, the program searches for electron-density peaks that may correspond to phosphate groups; it may also take into account phosphate-group positions provided by the user. Subsequently, comparing the three-dimensional patterns of the P atoms with a database of nucleic acid fragments, it finds the matching positions of the double-stranded helical motifs (A-RNA or B-DNA) in the unit cell. If the target structure is RNA, the helical fragments are further extended with recurrent RNA motifs from a fragment library that contains single-stranded segments. Finally, the matched motifs are merged and refined in real space to find the most likely conformations, including a fit of the sequence to the electron-density map. The Brickworx program is available for download and as a web server at http://iimcb.genesilico.pl/brickworx.
NASA Technical Reports Server (NTRS)
Lindsey, Tony; Pecheur, Charles
2004-01-01
Livingstone PathFinder (LPF) is a simulation-based computer program for verifying autonomous diagnostic software. LPF is designed especially to be applied to NASA's Livingstone computer program, which implements a qualitative-model-based algorithm that diagnoses faults in a complex automated system (e.g., an exploratory robot, spacecraft, or aircraft). LPF forms a software test bed containing a Livingstone diagnosis engine embedded in a simulated operating environment, consisting of a simulator of the system to be diagnosed by Livingstone and a driver program that issues commands and faults according to a nondeterministic scenario provided by the user. LPF runs the test bed through all executions allowed by the scenario, checking for various selectable error conditions after each step. All components of the test bed are instrumented, so that execution can be single-stepped both backward and forward. The architecture of LPF is modular and includes generic interfaces to facilitate substitution of alternative versions of its different parts. Altogether, LPF provides a flexible, extensible framework for simulation-based analysis of diagnostic software; these characteristics also render it amenable to application to diagnostic programs other than Livingstone.
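A schematic of this kind of scenario-driven exploration with backtracking; scenario.choices, state.apply and check are hypothetical names sketching the LPF architecture, not its real interface (recursion ends when the scenario offers no further choices):

    from copy import deepcopy

    def explore(state, scenario, check, depth=0):
        # Depth-first walk over every execution the nondeterministic scenario
        # allows; 'check' evaluates the selectable error conditions after each
        # step, and the state snapshots are what make stepping backward as
        # well as forward possible.
        findings = []
        for event in scenario.choices(state, depth):   # next commands/faults
            nxt = deepcopy(state)                      # snapshot enables backtracking
            nxt.apply(event)
            if not check(nxt):
                findings.append((depth, event))
            findings += explore(nxt, scenario, check, depth + 1)
        return findings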
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...
21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Single-function, preprogrammed diagnostic computer... Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...
System for Performing Single Query Searches of Heterogeneous and Dispersed Databases
NASA Technical Reports Server (NTRS)
Maluf, David A. (Inventor); Okimura, Takeshi (Inventor); Gurram, Mohana M. (Inventor); Tran, Vu Hoang (Inventor); Knight, Christopher D. (Inventor); Trinh, Anh Ngoc (Inventor)
2017-01-01
The present invention is a distributed computer system of heterogeneous databases joined in an information grid and configured with an Application Programming Interface hardware which includes a search engine component for performing user-structured queries on multiple heterogeneous databases in real time. This invention reduces overhead associated with the impedance mismatch that commonly occurs in heterogeneous database queries.
Shared Memory Parallelism for 3D Cartesian Discrete Ordinates Solver
NASA Astrophysics Data System (ADS)
Moustafa, Salli; Dutka-Malen, Ivan; Plagne, Laurent; Ponçot, Angélique; Ramet, Pierre
2014-06-01
This paper describes the design and the performance of DOMINO, a 3D Cartesian SN solver that implements two nested levels of parallelism (multicore + SIMD) on shared-memory computation nodes. DOMINO is written in C++, a multi-paradigm programming language that enables the use of powerful and generic parallel programming tools such as Intel TBB and Eigen. These two libraries allow us to combine multi-thread parallelism with vector operations in an efficient and yet portable way. As a result, DOMINO can exploit the full power of modern multi-core processors and is able to tackle very large simulations that usually require large HPC clusters using a single computing node. For example, DOMINO solves a 3D full-core PWR eigenvalue problem involving 26 energy groups, 288 angular directions (S16), 46 × 10^6 spatial cells and 1 × 10^12 DoFs within 11 hours on a single 32-core SMP node. This represents a sustained performance of 235 GFlops, and 40.74% of the SMP node peak performance for the DOMINO sweep implementation. The very high Flops/Watt ratio of DOMINO makes it a very interesting building block for a future many-nodes nuclear simulation tool.
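A loose sketch of the two nested levels, transposed into Python under stated assumptions: threads stand in for the multicore level, whole-array numpy operations for the SIMD level, and the per-direction update is a placeholder, not DOMINO's actual transport sweep:

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def sweep_one_direction(sigma_t, source):
        # Inner level: the per-direction update is written as whole-array
        # numpy operations over all spatial cells at once (the vector level).
        return source / sigma_t              # illustrative update only

    def sweep(sigma_t, source, n_directions, n_threads=32):
        # Outer level: one task per angular direction on a thread pool,
        # standing in for the multicore level (numpy releases the GIL).
        with ThreadPoolExecutor(max_workers=n_threads) as pool:
            per_dir = list(pool.map(lambda d: sweep_one_direction(sigma_t, source),
                                    range(n_directions)))
        return sum(per_dir) / n_directions   # merge the angular contributions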
Wood, Scott T; Dean, Brian C; Dean, Delphine
2013-04-01
This paper presents a novel computer vision algorithm to analyze 3D stacks of confocal images of fluorescently stained single cells. The goal of the algorithm is to create representative in silico model structures that can be imported into finite element analysis software for mechanical characterization. Segmentation of cell and nucleus boundaries is accomplished via standard thresholding methods. Using novel linear programming methods, a representative actin stress fiber network is generated by computing a linear superposition of fibers having minimum discrepancy compared with an experimental 3D confocal image. Qualitative validation is performed through analysis of seven 3D confocal image stacks of adherent vascular smooth muscle cells (VSMCs) grown in 2D culture. The presented method is able to automatically generate 3D geometries of the cell's boundary, nucleus, and representative F-actin network based on standard cell microscopy data. These geometries can be used for direct importation and implementation in structural finite element models for analysis of the mechanics of a single cell to potentially speed discoveries in the fields of regenerative medicine, mechanobiology, and drug discovery. Copyright © 2012 Elsevier B.V. All rights reserved.
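A minimal sketch of the linear-programming idea, assuming a candidate-fiber dictionary A (one column per fiber rendered into image space) and an observed image b; this poses the minimum-L1-discrepancy superposition as a standard LP via scipy, a simplification of the paper's method:

    import numpy as np
    from scipy.optimize import linprog

    def fit_fiber_weights(A, b):
        # Choose nonnegative weights w for the candidate fibers so that the
        # superposition A @ w matches the observed intensities b with minimum
        # L1 discrepancy: minimize sum(s) subject to -s <= A w - b <= s, w >= 0.
        n_pix, n_fib = A.shape
        c = np.concatenate([np.zeros(n_fib), np.ones(n_pix)])
        A_ub = np.block([[A, -np.eye(n_pix)], [-A, -np.eye(n_pix)]])
        b_ub = np.concatenate([b, -b])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (n_fib + n_pix))
        return res.x[:n_fib]                 # the fiber weights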
Configurable software for satellite graphics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartzman, P D
An important goal in interactive computer graphics is to provide users with both quick system responses for basic graphics functions and enough computing power for complex calculations. One solution is to have a distributed graphics system in which a minicomputer and a powerful large computer share the work. The most versatile type of distributed system is an intelligent satellite system in which the minicomputer is programmable by the application user and can do most of the work while the large remote machine is used for difficult computations. At New York University, the hardware was configured from available equipment. The level of system intelligence resulted almost completely from software development. Unlike previous work with intelligent satellites, the resulting system had system control centered in the satellite. It also had the ability to reconfigure software during realtime operation. The design of the system was done at a very high level using set theoretic language. The specification clearly illustrated processor boundaries and interfaces. The high-level specification also produced a compact, machine-independent virtual graphics data structure for picture representation. The software was written in a systems implementation language; thus, only one set of programs was needed for both machines. A user can program both machines in a single language. Tests of the system with an application program indicate that it has very high potential. A major result of this work is the demonstration that a gigantic investment in new hardware is not necessary for computing facilities interested in graphics.
Berlin, Konstantin; Longhini, Andrew; Dayie, T Kwaku; Fushman, David
2013-12-01
To facilitate rigorous analysis of molecular motions in proteins, DNA, and RNA, we present a new version of ROTDIF, a program for determining the overall rotational diffusion tensor from single- or multiple-field nuclear magnetic resonance relaxation data. We introduce four major features that expand the program's versatility and usability. The first feature is the ability to analyze, separately or together, (13)C and/or (15)N relaxation data collected at a single or multiple fields. A significant improvement in the accuracy compared to direct analysis of R2/R1 ratios, especially critical for analysis of (13)C relaxation data, is achieved by subtracting high-frequency contributions to relaxation rates. The second new feature is an improved method for computing the rotational diffusion tensor in the presence of biased errors, such as large conformational exchange contributions, that significantly enhances the accuracy of the computation. The third new feature is the integration of the domain alignment and docking module for relaxation-based structure determination of multi-domain systems. Finally, to improve accessibility to all the program features, we introduced a graphical user interface that simplifies and speeds up the analysis of the data. Written in Java, the new ROTDIF can run on virtually any computer platform. In addition, the new ROTDIF achieves an order of magnitude speedup over the previous version by implementing a more efficient deterministic minimization algorithm. We not only demonstrate the improvement in accuracy and speed of the new algorithm for synthetic and experimental (13)C and (15)N relaxation data for several proteins and nucleic acids, but also show that careful analysis required especially for characterizing RNA dynamics allowed us to uncover subtle conformational changes in RNA as a function of temperature that were opaque to previous analysis.
Accelerating Wright–Fisher Forward Simulations on the Graphics Processing Unit
Lawrie, David S.
2017-01-01
Forward Wright–Fisher simulations are powerful in their ability to model complex demography and selection scenarios, but suffer from slow execution on the Central Processor Unit (CPU), thus limiting their usefulness. However, the single-locus Wright–Fisher forward algorithm is exceedingly parallelizable, with many steps that are so-called “embarrassingly parallel,” consisting of a vast number of individual computations that are all independent of each other and thus capable of being performed concurrently. The rise of modern Graphics Processing Units (GPUs) and programming languages designed to leverage the inherent parallel nature of these processors have allowed researchers to dramatically speed up many programs that have such high arithmetic intensity and intrinsic concurrency. The presented GPU Optimized Wright–Fisher simulation, or “GO Fish” for short, can be used to simulate arbitrary selection and demographic scenarios while running over 250-fold faster than its serial counterpart on the CPU. Even modest GPU hardware can achieve an impressive speedup of over two orders of magnitude. With simulations so accelerated, one can not only do quick parametric bootstrapping of previously estimated parameters, but also use simulated results to calculate the likelihoods and summary statistics of demographic and selection models against real polymorphism data, all without restricting the demographic and selection scenarios that can be modeled or requiring approximations to the single-locus forward algorithm for efficiency. Further, as many of the parallel programming techniques used in this simulation can be applied to other computationally intensive algorithms important in population genetics, GO Fish serves as an exciting template for future research into accelerating computation in evolution. GO Fish is part of the Parallel PopGen Package available at: http://dl42.github.io/ParallelPopGen/. PMID:28768689
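A toy numpy sketch of the single-locus forward step and why it parallelizes so well; names and parameter values are illustrative, and GO Fish's actual kernels run on the GPU via CUDA:

    import numpy as np

    rng = np.random.default_rng(42)

    def wright_fisher_step(freq, N, s=0.0):
        # One generation for many independent loci at once: selection shifts
        # the expected allele frequency, then binomial sampling of 2N gametes
        # adds drift. Loci are independent of each other, which is what makes
        # the forward algorithm "embarrassingly parallel" on a GPU.
        p = freq * (1.0 + s) / (freq * (1.0 + s) + (1.0 - freq))
        return rng.binomial(2 * N, p) / (2.0 * N)

    freqs = np.full(1_000_000, 0.5)          # a million independent loci
    for _ in range(100):                     # a hundred generations
        freqs = wright_fisher_step(freqs, N=1000, s=0.01)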
Software for Analyzing Sequences of Flow-Related Images
NASA Technical Reports Server (NTRS)
Klimek, Robert; Wright, Ted
2004-01-01
Spotlight is a computer program for analysis of sequences of images generated in combustion and fluid physics experiments. Spotlight can perform analysis of a single image in an interactive mode or a sequence of images in an automated fashion. The primary type of analysis is tracking of positions of objects over sequences of frames. Features and objects that are typically tracked include flame fronts, particles, droplets, and fluid interfaces. Spotlight automates the analysis of object parameters, such as centroid position, velocity, acceleration, size, shape, intensity, and color. Images can be processed to enhance them before statistical and measurement operations are performed. An unlimited number of objects can be analyzed simultaneously. Spotlight saves results of analyses in a text file that can be exported to other programs for graphing or further analysis. Spotlight is a graphical-user-interface-based program that at present can be executed on Microsoft Windows and Linux operating systems. A version that runs on Macintosh computers is being considered.
Block-Parallel Data Analysis with DIY2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Peterka, Tom
DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.
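A toy sketch of the block-structured abstraction in Python (illustrative only; DIY2 itself is a C++ library, and the point is that the work callback never needs to know how the runtime schedules blocks):

    import numpy as np

    def decompose(data, n_blocks):
        # Decompose a 1-D array into blocks; a runtime would assign each
        # block to a processing element (process or thread).
        return np.array_split(data, n_blocks)

    def foreach_block(blocks, work):
        # Computation is expressed as an iteration over blocks; the runtime
        # is then free to run blocks in or out of core, serially or with
        # multiple threads, without changing the program.
        return [work(b) for b in blocks]

    blocks = decompose(np.arange(1e6), n_blocks=16)
    partial_sums = foreach_block(blocks, np.sum)
    total = sum(partial_sums)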
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Yang, Ping
2018-01-01
In this paper we make practical use of the recently developed first-principles approach to electromagnetic scattering by particles immersed in an unbounded absorbing host medium. Specifically, we introduce an actual computational tool for the calculation of pertinent far-field optical observables in the context of the classical Lorenz-Mie theory. The paper summarizes the relevant theoretical formalism, explains various aspects of the corresponding numerical algorithm, specifies the input and output parameters of a FORTRAN program available at https://www.giss.nasa.gov/staff/mmishchenko/Lorenz-Mie.html, and tabulates benchmark results useful for testing purposes. This public-domain FORTRAN program enables one to solve the following two important problems: (i) simulate theoretically the reading of a remote well-collimated radiometer measuring electromagnetic scattering by an individual spherical particle or a small random group of spherical particles; and (ii) compute the single-scattering parameters that enter the vector radiative transfer equation derived directly from the Maxwell equations.
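For building intuition, a compact sketch of the classical Lorenz-Mie extinction and scattering efficiencies for a sphere in a non-absorbing host, using the standard Bohren-Huffman recurrences; it does not reproduce the paper's absorbing-host extension:

    import numpy as np

    def mie_q(m, x):
        # m: complex relative refractive index; x: size parameter 2*pi*r/lambda.
        nmax = int(x + 4.0 * x ** (1.0 / 3.0) + 2.0)
        # Logarithmic derivative D_n(mx) by downward recurrence.
        D = np.zeros(nmax + 16, dtype=complex)
        for k in range(nmax + 15, 0, -1):
            D[k - 1] = k / (m * x) - 1.0 / (D[k] + k / (m * x))
        # Riccati-Bessel psi_n and chi_n by upward recurrence.
        p_prev, p_cur = np.cos(x), np.sin(x)     # psi_{-1}, psi_0
        c_prev, c_cur = -np.sin(x), np.cos(x)    # chi_{-1}, chi_0
        qext = qsca = 0.0
        for n in range(1, nmax + 1):
            p_new = (2 * n - 1) / x * p_cur - p_prev
            c_new = (2 * n - 1) / x * c_cur - c_prev
            xi_new, xi_cur = p_new - 1j * c_new, p_cur - 1j * c_cur
            da = D[n] / m + n / x
            db = D[n] * m + n / x
            a = (da * p_new - p_cur) / (da * xi_new - xi_cur)
            b = (db * p_new - p_cur) / (db * xi_new - xi_cur)
            qext += (2 * n + 1) * (a + b).real
            qsca += (2 * n + 1) * (abs(a) ** 2 + abs(b) ** 2)
            p_prev, p_cur = p_cur, p_new
            c_prev, c_cur = c_cur, c_new
        return 2.0 / x ** 2 * qext, 2.0 / x ** 2 * qsca

    qext, qsca = mie_q(1.33 + 0.0j, 5.0)         # water-like sphere, x = 5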
FY16 ASME High Temperature Code Activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swindeman, M. J.; Jetter, R. I.; Sham, T. -L.
2016-09-01
One of the objectives of the ASME high temperature Code activities is to develop and validate both improvements and the basic features of Section III, Division 5, Subsection HB, Subpart B (HBB). The overall scope of this task is to develop a computer program to be used to assess whether or not a specific component under specified loading conditions will satisfy the elevated temperature design requirements for Class A components in Section III, Division 5, Subsection HB, Subpart B (HBB). There are many features and alternative paths of varying complexity in HBB. The initial focus of this task is a basic path through the various options for a single reference material, 316H stainless steel. However, the program will be structured for eventual incorporation of all the features and permitted materials of HBB. Since this task has recently been initiated, this report focuses on the description of the initial path forward and an overall description of the approach to computer program development.
Automated Routines for Calculating Whole-Stream Metabolism: Theoretical Background and User's Guide
Bales, Jerad D.; Nardi, Mark R.
2007-01-01
In order to standardize methods and facilitate rapid calculation and archival of stream-metabolism variables, the Stream Metabolism Program was developed to calculate gross primary production, net ecosystem production, respiration, and selected other variables from continuous measurements of dissolved-oxygen concentration, water temperature, and other user-supplied information. Methods for calculating metabolism from continuous measurements of dissolved-oxygen concentration and water temperature are fairly well known, but a standard set of procedures and computation software for all aspects of the calculations were not available previously. The Stream Metabolism Program addresses this deficiency with a stand-alone executable computer program written in Visual Basic .NET, which runs in the Microsoft Windows environment. All equations and assumptions used in the development of the software are documented in this report. Detailed guidance on application of the software is presented, along with a summary of the data required to use the software. Data from either a single station or paired (upstream, downstream) stations can be used with the software to calculate metabolism variables.
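A minimal sketch of the single-station calculation under the usual simplifying assumptions (uniform time step and a known reaeration coefficient K; function and unit choices here are illustrative, not the report's exact formulation):

    import numpy as np

    def nep_series(do_mgL, do_sat_mgL, k_per_day, dt_days):
        # Single-station oxygen method: dDO/dt = NEP + K*(DOsat - DO), hence
        # NEP = dDO/dt - K*(DOsat - DO). Respiration is commonly estimated
        # from night-time NEP, and GPP as daytime NEP plus respiration.
        ddo_dt = np.gradient(do_mgL, dt_days)
        return ddo_dt - k_per_day * (do_sat_mgL - do_mgL)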
Dynamic online surveys and experiments with the free open-source software dynQuest.
Rademacher, Jens D M; Lippke, Sonia
2007-08-01
With computers and the World Wide Web widely available, collecting data through Web browsers is an attractive method utilized by the social sciences. In this article, conducting PC- and Web-based trials with the software package dynQuest is described. The software manages dynamic questionnaire-based trials over the Internet or on single computers, possibly as randomized controlled trials (RCT) if two or more groups are involved. The choice of follow-up questions can depend on previous responses, as needed for matched interventions. Data are collected in a simple text-based database that can be imported easily into other programs for postprocessing and statistical analysis. The software consists of platform-independent scripts written in the programming language Perl that use the common gateway interface between Web browser and server for submission of data through HTML forms. Advantages of dynQuest are parsimony, simplicity in use and installation, transparency, and reliability. The program is available as open-source freeware from the authors.
Karpievitch, Yuliya V; Almeida, Jonas S
2006-01-01
Background Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it to be easily extensible over the Internet. PMID:16539707
A computer program designed to produce tables from alphanumeric data
Ridgley, Jennie L.; Schnabel, Robert Wayne
1978-01-01
This program is designed to produce tables from alphanumeric data. Each line of data that appears in the table is entered into a data file as a single line of data. Where necessary, a predetermined delimiter is added to break up the data into column data. The program can process the following types of data: (1) title, (2) headnote, (3) footnote, (4) two levels of column headers, (5) solid lines, (6) blank lines, (7) most types of numeric data, and (8) all types of alphanumeric data. In addition, the program can produce a series of continuation tables from large data sets. Fitting of all data to the final table format is performed by the program, although provisions have been made for user-modification of the final format. The width of the table is adjustable, but may not exceed 158 characters per line. The program is useful in that it permits alteration of original data or table format without having to physically retype all or portions of the table. The final results may be obtained quickly using interactive terminals, and execution of the program requires only minimal knowledge of computer usage. Tables produced may be of publishable quality, especially when reduced. Complete user documentation and program listing are included. NOTE: Although this program has been subjected to many tests, a warranty on accuracy or proper functioning is neither implied nor expressed.
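A small sketch of the core fitting step, assuming "|" as the predetermined delimiter and the 158-character limit noted above (illustrative modern Python, not the original program, which predates it):

    def format_table(lines, delimiter="|", max_width=158):
        # Split each data line on the predetermined delimiter, size every
        # column to its widest entry, and emit fixed-width rows.
        rows = [line.split(delimiter) for line in lines]
        ncols = max(len(r) for r in rows)
        rows = [r + [""] * (ncols - len(r)) for r in rows]
        widths = [max(len(r[i].strip()) for r in rows) for i in range(ncols)]
        out = ["  ".join(r[i].strip().ljust(widths[i])
                         for i in range(ncols)).rstrip() for r in rows]
        if any(len(line) > max_width for line in out):
            raise ValueError("table exceeds the 158-character line limit")
        return out

    print("\n".join(format_table(["Sample|Depth (m)|SiO2 (%)",
                                  "A-1|12.5|48.2"])))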
NASA Astrophysics Data System (ADS)
Sanna, N.; Baccarelli, I.; Morelli, G.
2009-12-01
SCELib is a computer program which implements the Single Center Expansion (SCE) method to describe molecular electronic densities and the interaction potentials between a charged projectile (electron or positron) and a target molecular system. The first version (CPC catalogue identifier ADMG_v1_0) was submitted to the CPC Program Library in 2000, and version 2.0 (ADMG_v2_0) was submitted in 2004. We here announce the new release 3.0, which presents additional features with respect to the previous versions, aimed at significantly enhancing its capability to deal with larger molecular systems. SCELib 3.0 allows for ab initio effective core potential (ECP) calculations of the molecular wavefunctions to be used in the SCE method, in addition to the standard all-electron description of the molecule. The list of supported architectures has been updated and the code has been ported to platforms based on accelerating coprocessors, such as the NVIDIA GPGPU, and the new parallel model adopted is able to run efficiently on a mixed many-core computing system.
Program summary. Program title: SCELib3.0. Catalogue identifier: ADMG_v3_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADMG_v3_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 2 018 862. No. of bytes in distributed program, including test data, etc.: 4 955 014. Distribution format: tar.gz. Programming language: C. Compilers used: xlc V8.x, Intel C V10.x, Portland Group V7.x, nvcc V2.x. Computer: all SMP platforms based on AIX, Linux and SUNOS operating systems over SPARC, POWER, Intel Itanium2, X86, em64t and Opteron processors. Operating system: SUNOS, IBM AIX, Linux RedHat (Enterprise), Linux SuSE (SLES). Has the code been vectorized or parallelized?: yes, 1 to 32 CPUs or GPUs used. RAM: up to 32 GB, depending on the molecular system and runtime parameters. Classification: 16.5. Catalogue identifier of previous version: ADMG_v2_0. Journal reference of previous version: Comput. Phys. Comm. 162 (2004) 51. External routines: CUDA libraries (SDK V2.x). Does the new version supersede the previous version?: yes.
Nature of problem: an efficient procedure is implemented to describe the wavefunction and related molecular properties of a polyatomic molecular system within the Single Center of Expansion (SCE) approximation. The resulting SCE wavefunction, electron density, electrostatic and correlation/polarization potentials can then be used in a wide variety of applications, such as electron-molecule scattering calculations, quantum chemistry studies, biomodelling and drug design.
Solution method: the polycentre Hartree-Fock solution for a molecule of arbitrary geometry, based on a linear combination of Gaussian-type orbitals (GTOs), is expanded over a single center, typically the center of mass (C.O.M.), by means of a Gauss-Legendre/Chebyshev quadrature over the θ, φ angular coordinates. The resulting SCE numerical wavefunction is then used to calculate the one-particle electron density, the electrostatic potential and two different models for the correlation/polarization potential induced by the impinging electron, which have the correct asymptotic behavior for the leading dipole molecular polarizabilities.
Reasons for new version: the present release of SCELib allows the study of larger molecular systems than the previous versions by means of theoretical and technological advances, with the first implementation of the code on a many-core computing system.
Summary of revisions: the major features added with respect to SCELib version 2.0 are the following. Molecular wavefunctions obtained via the Los Alamos (Hay and Wadt) LANL ECP plus DZ description of the inner-shell electrons (for the Na-La and Hf-Bi elements) [1] can now be single-center-expanded; the addition required modifications of (i) the filtering code readgau, (ii) the main reading function setinp, (iii) the sphint code (including changes to the CalcMO code), (iv) the densty code and (v) the vst code. The classes of platforms supported now include two more architectures based on accelerating coprocessors, the NVIDIA G-Series GPGPU and the ClearSpeed e720 (the ClearSpeed version is experimental, an initial preliminary porting of the sphint() function not intended for production runs; see the code documentation for additional detail). A single-precision representation for real numbers in the SCE mapping of the GTOs (sphint code) has been implemented in the new code. The Ih symmetry point group has been added to those already allowed in the SCE procedure. The orientation of the molecular axis system for the Cs (planar) symmetry has been changed in accordance with the standard orientation adopted by the latest version of the quantum chemistry code (Gaussian 03 [2]) used to generate the input multi-centre molecular wavefunctions (z-axis perpendicular to the symmetry plane), and the abelian subgroup for the Cs point group has been changed from C1 to Cs. Atomic basis functions including g-type GTOs can now be single-center-expanded.
Restrictions: depending on the molecular system under study and on the operating conditions, the program may or may not fit into the available RAM. In that case a feature of the program is to memory-map a disk file in order to access the memory data efficiently through a disk device. The parallel GP-GPU implementation limits the number of CPU threads to the number of GPU cores present.
Running time: the execution time depends strongly on the molecular target description and on the hardware/OS chosen; it is directly proportional to the (r, θ, φ) grid size and to the number of angular basis functions used. Thus, from the program printout of the main arrays' memory occupancy, the user can approximately derive the expected computer time for a given calculation executed in serial mode. For parallel executions the overall efficiency must also be taken into account, which depends on the number of processors used as well as on the parallel architecture chosen, so a simple general law cannot at present be given.
References: [1] P.J. Hay, W.R. Wadt, J. Chem. Phys. 82 (1985) 270; W.R. Wadt, P.J. Hay, J. Chem. Phys. 82 (1985) 284; P.J. Hay, W.R. Wadt, J. Chem. Phys. 82 (1985) 299. [2] M.J. Frisch et al., Gaussian 03, Revision C.02, Gaussian, Inc., Wallingford, CT, 2004.
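As an illustration of the core numerical step, a minimal sketch of a single-center expansion at one radius; it assumes scipy's sph_harm argument order (m, l, azimuth, polar), and the function and variable names are illustrative, not SCELib's:

    import numpy as np
    from scipy.special import sph_harm

    def sce_coefficients(f, r, lmax, n_theta=64, n_phi=128):
        # Project f(r, theta, phi) onto spherical harmonics at a fixed radius:
        # f_lm(r) = integral over the sphere of conj(Y_lm) * f, computed with
        # Gauss-Legendre nodes in cos(theta) and a uniform grid in phi.
        x, w = np.polynomial.legendre.leggauss(n_theta)
        theta = np.arccos(x)
        phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
        Th, Ph = np.meshgrid(theta, phi, indexing="ij")
        vals = f(r, Th, Ph)
        coeffs = {}
        for l in range(lmax + 1):
            for m in range(-l, l + 1):
                Y = sph_harm(m, l, Ph, Th)   # scipy order: (m, l, azimuth, polar)
                coeffs[(l, m)] = ((w[:, None] * np.conj(Y) * vals).sum()
                                  * (2.0 * np.pi / n_phi))
        return coeffs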
NASA Astrophysics Data System (ADS)
McGibbney, L. J.; Rittger, K.; Painter, T. H.; Selkowitz, D.; Mattmann, C. A.; Ramirez, P.
2014-12-01
As part of a JPL-USGS collaboration to expand distribution of essential climate variables (ECV) to include on-demand fractional snow cover, we describe our experience with and implementation of a shift to NVIDIA's CUDA® parallel computing platform and programming model. In particular, the on-demand aspect of this work involves faster processing and a reduction in overall running times for the determination of fractional snow-covered area (fSCA) from Landsat TM/ETM+. Our observations indicate that processing tasks associated with remote sensing, including the Snow Covered Area and Grain Size model (SCAG) applied to MODIS or Landsat TM/ETM+, are computationally intensive. We believe the shift to the CUDA programming paradigm represents a significant improvement in the ability to obtain the outcomes of such activities more quickly, and we use the TMSCAG model as our subject to make this argument. We describe how we ingest a Landsat surface reflectance image (typically provided in HDF format) and perform spectral mixture analysis to produce land cover fractions, including snow, vegetation and rock/soil, while greatly reducing running time for such tasks. Within the scope of this work we first document the original workflow used to derive fSCA for Landsat TM and its primary shortcomings. We then introduce the logic and justification behind the switch to the CUDA paradigm for running single as well as batch jobs on the GPU in order to achieve parallel processing. Finally, we share lessons learned from consolidating a myriad of existing algorithms into a single set of code in a single target language, as well as the benefits this ultimately provides scientists at the USGS.
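Spectral mixture analysis of this kind solves, per pixel, for nonnegative endmember fractions; a minimal sketch using scipy's NNLS, with made-up endmember reflectances (not SCAG's spectral library):

    import numpy as np
    from scipy.optimize import nnls

    def unmix_pixel(endmembers, reflectance):
        # Solve min ||E f - r||_2 subject to f >= 0, then normalize the
        # fractions so they sum to one (assumes a non-degenerate fit).
        fractions, residual = nnls(endmembers, reflectance)
        return fractions / fractions.sum(), residual

    # One column per endmember (snow, vegetation, rock/soil), one row per band.
    E = np.array([[0.90, 0.05, 0.20],
                  [0.80, 0.08, 0.25],
                  [0.10, 0.45, 0.30],
                  [0.05, 0.30, 0.35]])
    fractions, _ = unmix_pixel(E, np.array([0.50, 0.45, 0.20, 0.15]))
    fsca = fractions[0]   # the snow fraction is the quantity of interest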
Smith, Matthew B; Karatekin, Erdem; Gohlke, Andrea; Mizuno, Hiroaki; Watanabe, Naoki; Vavylonis, Dimitrios
2011-10-05
Analysis of particle trajectories in images obtained by fluorescence microscopy reveals biophysical properties such as diffusion coefficient or rates of association and dissociation. Particle tracking and lifetime measurement is often limited by noise, large mobilities, image inhomogeneities, and path crossings. We present Speckle TrackerJ, a tool that addresses some of these challenges using computer-assisted techniques for finding positions and tracking particles in different situations. A dynamic user interface assists in the creation, editing, and refining of particle tracks. The following are results from application of this program: 1), Tracking single molecule diffusion in simulated images. The shape of the diffusing marker on the image changes from speckle to cloud, depending on the relationship of the diffusion coefficient to the camera exposure time. We use these images to illustrate the range of diffusion coefficients that can be measured. 2), We used the program to measure the diffusion coefficient of capping proteins in the lamellipodium. We found values ∼0.5 μm²/s, suggesting capping protein association with protein complexes or the membrane. 3), We demonstrate efficient measuring of appearance and disappearance of EGFP-actin speckles within the lamellipodium of motile cells that indicate actin monomer incorporation into the actin filament network. 4), We marked appearance and disappearance events of fluorescently labeled vesicles to supported lipid bilayers and tracked single lipids from the fused vesicle on the bilayer. This is the first time, to our knowledge, that vesicle fusion has been detected with single molecule sensitivity and the program allowed us to perform a quantitative analysis. 5), By discriminating between undocking and fusion events, dwell times for vesicle fusion after vesicle docking to membranes can be measured. Copyright © 2011 Biophysical Society. Published by Elsevier Inc. All rights reserved.
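Result 2 rests on the standard mean-squared-displacement estimate of a diffusion coefficient; a minimal sketch for a 2-D track (assumes free diffusion and a reasonably long trajectory):

    import numpy as np

    def diffusion_coefficient(track_xy, dt):
        # Mean squared displacement at increasing lags; for free 2-D diffusion
        # MSD(lag) = 4 * D * lag, so D is the fitted slope divided by 4.
        lags = np.arange(1, len(track_xy) // 2)
        msd = np.array([np.mean(np.sum((track_xy[L:] - track_xy[:-L]) ** 2,
                                       axis=1)) for L in lags])
        slope = np.polyfit(lags * dt, msd, 1)[0]
        return slope / 4.0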
Generalized EC&LSS computer program configuration control
NASA Technical Reports Server (NTRS)
Blakely, R. L.
1976-01-01
The generalized environmental control and life support system (ECLSS) computer program (G189A) simulation of the shuttle orbiter ECLSS was upgraded. The G189A component model configuration was changed to represent the current PV102 and subsequent vehicle ECLSS configurations as defined by baseline ARS and ATCS schematics. The diagrammatic output schematics of the gas, water, and freon loops were also revised to agree with the new ECLSS configuration. The accuracy of the transient analysis was enhanced by incorporating the thermal mass effects of the equipment, structure, and fluid in the ARS gas and water loops and in the ATCS freon loops. The sources of the data used to upgrade the simulation are: (1) ATCS freon loop line sizes and lengths; (2) ARS water loop line sizes and lengths; (3) ARS water loop and ATCS freon loop component and equipment weights; and (4) ARS cabin and avionics bay thermal capacitance and conductance values. A single G189A combination master program library tape was generated which contains all of the master program library versions which were previously maintained on separate tapes. A new component subroutine, PIPETL, was developed and incorporated into the G189A master program library.
SPARX, a new environment for Cryo-EM image processing.
Hohn, Michael; Tang, Grant; Goodyear, Grant; Baldwin, P R; Huang, Zhong; Penczek, Pawel A; Yang, Chao; Glaeser, Robert M; Adams, Paul D; Ludtke, Steven J
2007-01-01
SPARX (single particle analysis for resolution extension) is a new image processing environment with a particular emphasis on transmission electron microscopy (TEM) structure determination. It includes a graphical user interface that provides a complete graphical programming environment with a novel data/process-flow infrastructure, an extensive library of Python scripts that perform specific TEM-related computational tasks, and a core library of fundamental C++ image processing functions. In addition, SPARX relies on the EMAN2 library and cctbx, the open-source computational crystallography library from PHENIX. The design of the system is such that future inclusion of other image processing libraries is a straightforward task. The SPARX infrastructure intelligently handles retention of intermediate values, even those inside programming structures such as loops and function calls. SPARX and all dependencies are free for academic use and available with complete source.
NASA Technical Reports Server (NTRS)
Gupta, S. K.; Tiwari, S. N.
1976-01-01
A simple procedure and computer program were developed for retrieving the surface temperature from the measurement of upwelling infrared radiance in a single spectral region in the atmosphere. The program evaluates the total upwelling radiance at any altitude in the region of the CO fundamental band (2070-2220 cm⁻¹) for several values of surface temperature. Actual surface temperature is inferred by interpolation of the measured upwelling radiance between the computed values of radiance for the same altitude. Sensitivity calculations were made to determine the effect of uncertainty in various surface, atmospheric and experimental parameters on the inferred value of surface temperature. It is found that the uncertainties in water vapor concentration and surface emittance are the most important factors affecting the accuracy of the inferred value of surface temperature.
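The retrieval reduces to a table inversion; a minimal sketch, with radiance_model standing in for the forward radiance calculation (assumed monotonically increasing in surface temperature over the grid, as it is for thermal emission):

    import numpy as np

    def infer_surface_temperature(measured_radiance, radiance_model, t_grid):
        # Evaluate the forward model (total upwelling radiance in the CO band)
        # on a grid of candidate surface temperatures, then interpolate the
        # measured radiance between the computed values. np.interp requires
        # the computed radiances to increase along the grid.
        computed = np.array([radiance_model(t) for t in t_grid])
        return np.interp(measured_radiance, computed, t_grid)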
Hybrid Donor-Dot Devices made using Top-down Ion Implantation for Quantum Computing
NASA Astrophysics Data System (ADS)
Bielejec, Edward; Bishop, Nathan; Carroll, Malcolm
2012-02-01
We present progress towards fabricating hybrid donor-quantum dot (QD) devices for quantum computing. These devices will exploit the long coherence time of the donor system and the surface-state manipulation associated with a QD. Fabrication requires detection of single ions implanted with tens-of-nanometer precision. We show in this talk 100% detection efficiency for single ions using a single-ion Geiger-mode avalanche (SIGMA) detector integrated into a Si MOS QD process flow. The NanoImplanter (nI), a focused ion beam system, is used for precision top-down placement of the implanted ion. This machine has a 10 nm resolution combined with a mass velocity filter, allowing the use of multi-species liquid metal ion sources (LMIS) to implant P and Sb ions, and a fast blanking and chopping system for single-ion implants. The combination of the nI and the integration of the SIGMA detector with the MOS QD process flow establishes a path to fabricate hybrid single donor-dot devices. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Parallelized seeded region growing using CUDA.
Park, Seongjin; Lee, Jeongjin; Lee, Hyunna; Shin, Juneseuk; Seo, Jinwook; Lee, Kyoung Ho; Shin, Yeong-Gil; Kim, Bohyoung
2014-01-01
This paper presents a novel method for parallelizing the seeded region growing (SRG) algorithm using Compute Unified Device Architecture (CUDA) technology, with the intention of overcoming the theoretical weakness of the SRG algorithm, namely that its computation time is directly proportional to the size of the segmented region. The segmentation performance of the proposed CUDA-based SRG is compared with SRG implementations on single-core CPUs, quad-core CPUs, and shader language programming, using synthetic datasets and 20 body CT scans. Based on the experimental results, the CUDA-based SRG outperforms the other three implementations, advocating that it can substantially assist segmentation during massive CT screening tests.
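For contrast with the parallel version, a serial reference implementation of seeded region growing, whose running time grows with region size; 4-connectivity and a running-mean homogeneity test are illustrative choices, not necessarily the paper's:

    from collections import deque
    import numpy as np

    def seeded_region_grow(image, seed, tol):
        # Grow from the seed pixel, absorbing 4-connected neighbours whose
        # intensity lies within 'tol' of the running region mean. Each added
        # pixel costs work, so time scales with the segmented region's size.
        mask = np.zeros(image.shape, dtype=bool)
        queue = deque([seed])
        mask[seed] = True
        total, count = float(image[seed]), 1
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                        and not mask[ny, nx]
                        and abs(float(image[ny, nx]) - total / count) <= tol):
                    mask[ny, nx] = True
                    total += float(image[ny, nx])
                    count += 1
                    queue.append((ny, nx))
        return mask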
Computer graphics for management: An abstract of capabilities and applications of the EIS system
NASA Technical Reports Server (NTRS)
Solem, B. J.
1975-01-01
The Executive Information Services (EIS) system, developed as a computer-based, time-sharing tool for making and implementing management decisions, and including computer graphics capabilities, is described. The following resources are available through the EIS languages: a centralized corporate/government data base, customized and working data bases, report writing, general computational capability, specialized routines, modeling/programming capability, and graphics. Nearly all EIS graphs can be created by a single, on-line instruction. A large number of options are available, such as selection of graphic form, line control, shading, placement on the page, multiple images on a page, control of scaling and labeling, plotting of cumulative data sets, optional grid lines, and stack charts. The following are examples of areas in which the EIS system may be used: research, estimating services, planning, budgeting, performance measurement, and national computer hook-up negotiations.
NASA Technical Reports Server (NTRS)
Munoz, E. F.; Silverman, M. P.
1979-01-01
A single-step most-probable-number method for determining the number of fecal coliform bacteria present in sewage treatment plant effluents is discussed. A single growth medium based on that of Reasoner et al. (1976) and consisting of 5.0 g proteose peptone, 3.0 g yeast extract, 10.0 g lactose, 7.5 g NaCl, 0.2 g sodium lauryl sulfate, and 0.1 g sodium desoxycholate per liter is used. The pH is adjusted to 6.5, and samples are incubated at 44.5 °C. Bacterial growth is detected either by measuring the increase with time in the electrical impedance ratio between the inoculated sample vial and an uninoculated reference vial or by visual examination for turbidity. Results obtained by the single-step method for chlorinated and unchlorinated effluent samples are in excellent agreement with those obtained by the standard method. It is suggested that in automated treatment plants impedance ratio data could be automatically matched by computer programs with the appropriate dilution factors and most probable number tables already in the computer memory, with the corresponding result displayed as fecal coliforms per 100 ml of effluent.
ALEGRA -- A massively parallel h-adaptive code for solid dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Summers, R.M.; Wong, M.K.; Boucheron, E.A.
1997-12-31
ALEGRA is a multi-material, arbitrary-Lagrangian-Eulerian (ALE) code for solid dynamics designed to run on massively parallel (MP) computers. It combines the features of modern Eulerian shock codes, such as CTH, with modern Lagrangian structural analysis codes using an unstructured grid. ALEGRA is being developed for use on the teraflop supercomputers to conduct advanced three-dimensional (3D) simulations of shock phenomena important to a variety of systems. ALEGRA was designed with the Single Program Multiple Data (SPMD) paradigm, in which the mesh is decomposed into sub-meshes so that each processor gets a single sub-mesh with approximately the same number of elements. Using this approach the authors have been able to produce a single code that can scale from one processor to thousands of processors. A current major effort is to develop efficient, high precision simulation capabilities for ALEGRA, without the computational cost of using a global highly resolved mesh, through flexible, robust h-adaptivity of finite elements. H-adaptivity is the dynamic refinement of the mesh by subdividing elements, thus changing the characteristic element size and reducing numerical error. The authors are working on several major technical challenges that must be met to make effective use of HAMMER on MP computers.
Wu, Xiao-Lin; Sun, Chuanyu; Beissinger, Timothy M; Rosa, Guilherme Jm; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel
2012-09-25
Most Bayesian models for the analysis of complex traits are not analytically tractable and inferences are based on computationally intensive techniques. This is true of Bayesian models for genome-enabled selection, which uses whole-genome molecular data to predict the genetic merit of candidate animals for breeding purposes. In this regard, parallel computing can overcome the bottlenecks that arise from serial computing. Hence, a major goal of the present study is to bridge the gap to high-performance Bayesian computation in the context of animal breeding and genetics. Parallel Markov chain Monte Carlo algorithms and strategies are described in the context of animal breeding and genetics. Parallel Monte Carlo algorithms are introduced as a starting point, including their applications to computing single-parameter and certain multiple-parameter models. Then, two basic approaches for parallel Markov chain Monte Carlo are described: one aims at parallelization within a single chain; the other is based on running multiple chains, yet some variants are discussed as well. Features and strategies of parallel Markov chain Monte Carlo are illustrated using real data, including a large beef cattle dataset with 50K SNP genotypes. Parallel Markov chain Monte Carlo algorithms are useful for computing complex Bayesian models; this not only leads to a dramatic speedup in computing but can also be used to optimize model parameters in complex Bayesian models. Hence, we anticipate that use of parallel Markov chain Monte Carlo will have a profound impact on revolutionizing the computational tools for genomic selection programs.
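A toy sketch of the multiple-chains strategy: one process per chain, each running a single-parameter Metropolis sampler with a standard normal target (illustrative only, far simpler than the genomic models discussed):

    import numpy as np
    from multiprocessing import Pool

    def run_chain(args):
        seed, n_iter = args
        # One Metropolis chain; the multiple-chains strategy simply runs
        # several independent copies of this loop concurrently.
        rng = np.random.default_rng(seed)
        x, chain = 0.0, []
        for _ in range(n_iter):
            prop = x + rng.normal(0.0, 1.0)
            # log acceptance ratio for a standard normal target
            if np.log(rng.uniform()) < 0.5 * (x * x - prop * prop):
                x = prop
            chain.append(x)
        return np.array(chain)

    if __name__ == "__main__":
        with Pool(4) as pool:                 # one process per chain
            chains = pool.map(run_chain, [(s, 10_000) for s in range(4)])
        print(np.mean([c.mean() for c in chains]))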
Genome complexity, robustness and genetic interactions in digital organisms
NASA Astrophysics Data System (ADS)
Lenski, Richard E.; Ofria, Charles; Collier, Travis C.; Adami, Christoph
1999-08-01
Digital organisms are computer programs that self-replicate, mutate and adapt by natural selection. They offer an opportunity to test generalizations about living systems that may extend beyond the organic life that biologists usually study. Here we have generated two classes of digital organism: simple programs selected solely for rapid replication, and complex programs selected to perform mathematical operations that accelerate replication through a set of defined `metabolic' rewards. To examine the differences in their genetic architecture, we introduced millions of single and multiple mutations into each organism and measured the effects on the organism's fitness. The complex organisms are more robust than the simple ones with respect to the average effects of single mutations. Interactions among mutations are common and usually yield higher fitness than predicted from the component mutations assuming multiplicative effects; such interactions are especially important in the complex organisms. Frequent interactions among mutations have also been seen in bacteria, fungi and fruitflies. Our findings support the view that interactions are a general feature of genetic systems.
Casella, Ivan Benaduce; Fukushima, Rodrigo Bono; Marques, Anita Battistini de Azevedo; Cury, Marcus Vinícius Martins; Presti, Calógero
2015-03-01
To compare a new dedicated software program and Adobe Photoshop for gray-scale median (GSM) analysis of B-mode images of carotid plaques. A series of 42 carotid plaques generating ≥50% diameter stenosis was evaluated by a single observer. The best segment for visualization of internal carotid artery plaque was identified on a single longitudinal view and images were recorded in JPEG format. Plaque analysis was performed by both programs. After normalization of image intensity (blood = 0, adventitial layer = 190), histograms were obtained after manual delineation of plaque. Results were compared with the nonparametric Wilcoxon signed rank test and Kendall tau-b correlation analysis. GSM ranged from 00 to 100 with Adobe Photoshop and from 00 to 96 with IMTPC, with a high degree of similarity between image pairs and a highly significant correlation (R = 0.94, p < .0001). IMTPC software appears suitable for the GSM analysis of carotid plaques. © 2014 Wiley Periodicals, Inc.
Single-Cell RNA-Sequencing Reveals a Continuous Spectrum of Differentiation in Hematopoietic Cells.
Macaulay, Iain C; Svensson, Valentine; Labalette, Charlotte; Ferreira, Lauren; Hamey, Fiona; Voet, Thierry; Teichmann, Sarah A; Cvejic, Ana
2016-02-02
The transcriptional programs that govern hematopoiesis have been investigated primarily by population-level analysis of hematopoietic stem and progenitor cells, which cannot reveal the continuous nature of the differentiation process. Here we applied single-cell RNA-sequencing to a population of hematopoietic cells in zebrafish as they undergo thrombocyte lineage commitment. By reconstructing their developmental chronology computationally, we were able to place each cell along a continuum from stem cell to mature cell, refining the traditional lineage tree. The progression of cells along this continuum is characterized by a highly coordinated transcriptional program, displaying simultaneous suppression of genes involved in cell proliferation and ribosomal biogenesis as the expression of lineage specific genes increases. Within this program, there is substantial heterogeneity in the expression of the key lineage regulators. Overall, the total number of genes expressed, as well as the total mRNA content of the cell, decreases as the cells undergo lineage commitment. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Closed-loop bird-computer interactions: a new method to study the role of bird calls.
Lerch, Alexandre; Roy, Pierre; Pachet, François; Nagle, Laurent
2011-03-01
In the field of songbird research, many studies have shown the role of male songs in territorial defense and courtship. Calling, another important acoustic communication signal, has received much less attention, however, because calls are assumed to contain less information about the emitter than songs do. Birdcall repertoire is diverse, and the role of calls has been found to be significant in the area of social interaction, for example, in pair, family, and group cohesion. However, standard methods for studying calls do not allow precise and systematic study of their role in communication. We propose herein a new method to study bird vocal interaction. A closed-loop computer system interacts with canaries, Serinus canaria, by (1) automatically classifying two basic types of canary vocalization, single versus repeated calls, as they are produced by the subject, and (2) responding with a preprogrammed call type recorded from another bird. This computerized animal-machine interaction requires no human interference. We show first that the birds do engage in sustained interactions with the system, by studying the rate of single and repeated calls for various programmed protocols. We then show that female canaries differentially use single and repeated calls. First, they produce significantly more single than repeated calls, and second, the rate of single calls is associated with the context in which they interact, whereas repeated calls are context independent. This experiment is the first illustration of how closed-loop bird-computer interaction can be used productively to study social relationships. © Springer-Verlag 2010
Efficient Memory Access with NumPy Global Arrays using Local Memory Access
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daily, Jeffrey A.; Berghofer, Dan C.
This paper discusses work on Global Arrays of data on distributed multi-computer systems and on improving their performance. The tasks were completed at Pacific Northwest National Laboratory in the Science Undergraduate Laboratory Internship program in the summer of 2013, for the Data Intensive Computing Group in the Fundamental and Computational Sciences Directorate. The work was done on the Global Arrays Toolkit developed by this group, an interface that lets programmers more easily create arrays of data on networks of computers. This is useful because scientific computation is often done on large amounts of data, sometimes so large that individual computers cannot hold all of it. The data are held in array form and are best processed on supercomputers, which often consist of a network of individual computers doing their computation in parallel. One major challenge for this sort of programming is that operations on arrays spread over multiple computers are very complex, so an interface is needed that makes these arrays appear to reside on a single computer; this is what Global Arrays provides. The work described here implements more efficient operations on that data, operations that require less copying, which saves substantial time because copying data among many different computers is time intensive. The approach is as follows: when the operands of a binary operation are on the same computer, they are not copied when accessed; when they are on separate computers, only one operand set is copied. This saves time through reduced copying, although more data-access operations are performed.
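The copy-avoidance idea above can be illustrated in miniature. The following sketch uses mpi4py and NumPy rather than the actual Global Arrays API (the names local_a, elementwise_add, and the ownership arguments are hypothetical): when both operands of a binary operation are owned by the same rank, the addition runs in place with no copy; when they are not, exactly one block crosses the network.

```python
# Toy illustration of the copy-avoidance strategy, not the Global Arrays API.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 1_000_000                       # global array length
local_a = np.ones(N // size)        # this rank's block of global array A
local_b = np.full(N // size, 2.0)   # this rank's block of global array B

def elementwise_add(owner_a, owner_b, a, b, comm):
    """Add two distributed blocks; only the owning ranks participate.

    Same owner: operate in place, zero copies.
    Different owners: ship exactly one block (one copy) to owner_a."""
    if owner_a == owner_b:
        return a + b                        # purely local access
    if comm.Get_rank() == owner_b:
        comm.Send(b, dest=owner_a)          # the single copy over the wire
        return None
    remote = np.empty_like(a)
    comm.Recv(remote, source=owner_b)
    return a + remote

# Here each rank owns matching blocks of A and B, so no data moves at all.
result = elementwise_add(rank, rank, local_a, local_b, comm)
```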
Three-dimensional reconstruction for coherent diffraction patterns obtained by XFEL.
Nakano, Miki; Miyashita, Osamu; Jonic, Slavica; Song, Changyong; Nam, Daewoong; Joti, Yasumasa; Tama, Florence
2017-07-01
The three-dimensional (3D) structural analysis of single particles using an X-ray free-electron laser (XFEL) is a new structural biology technique that enables observations of molecules that are difficult to crystallize, such as flexible biomolecular complexes and living tissue in the state close to physiological conditions. In order to restore the 3D structure from the diffraction patterns obtained by the XFEL, computational algorithms are necessary as the orientation of the incident beam with respect to the sample needs to be estimated. A program package for XFEL single-particle analysis based on the Xmipp software package, that is commonly used for image processing in 3D cryo-electron microscopy, has been developed. The reconstruction program has been tested using diffraction patterns of an aerosol nanoparticle obtained by tomographic coherent X-ray diffraction microscopy.
The calculation of aquifer chemistry in hot-water geothermal systems
Truesdell, Alfred H.; Singers, Wendy
1974-01-01
The temperature and chemical conditions (pH, gas pressure, and ion activities) in a geothermal aquifer supplying a producing bore can be calculated from the enthalpy of the total fluid (liquid + vapor) produced and chemical analyses of water and steam separated and collected at known pressures. Alternatively, if a single water phase exists in the aquifer, the complete analysis (including gases) of a sample collected from the aquifer by a downhole sampler is sufficient to determine the aquifer chemistry without a measured value of the enthalpy. The assumptions made are that the fluid is produced from a single aquifer and is homogeneous in enthalpy and chemical composition. These calculations of aquifer chemistry involving large amounts of ancillary information and many iterations require computer methods. A computer program in PL-1 to perform these calculations is available from the National Technical Information Service as document PB-219 376.
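The recombination step the abstract describes reduces to a two-phase enthalpy balance. A minimal sketch, with illustrative steam-table values rather than data from the PL-1 program:

```python
# Enthalpy balance: H_total = (1 - x) * h_liquid + x * h_vapor, solved for
# the steam mass fraction x, then used to recombine separated-phase analyses.
def steam_fraction(h_total, h_liquid, h_vapor):
    """Mass fraction of steam in the total discharge."""
    return (h_total - h_liquid) / (h_vapor - h_liquid)

def total_fluid_concentration(c_water, c_steam, x):
    """Recombine water and steam analyses into a total-fluid value."""
    return (1.0 - x) * c_water + x * c_steam

# Illustrative numbers: separator near 6 bar (h_l ~ 670, h_v ~ 2756 kJ/kg),
# measured total discharge enthalpy 1100 kJ/kg.
x = steam_fraction(1100.0, 670.0, 2756.0)        # ~0.21 steam by mass
cl = total_fluid_concentration(800.0, 0.0, x)    # Cl- stays in the liquid
print(f"steam fraction = {x:.3f}, total-fluid Cl = {cl:.0f} mg/kg")
```

The full aquifer-chemistry calculation then iterates pH and ion activities against this recombined composition, which is what makes computer methods necessary.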
Shared Memory Parallelization of an Implicit ADI-type CFD Code
NASA Technical Reports Server (NTRS)
Hauser, Th.; Huang, P. G.
1999-01-01
A parallelization study designed for ADI-type algorithms is presented using the OpenMP specification for shared-memory multiprocessor programming. Details of optimizations specifically addressed to cache-based computer architectures are described and performance measurements for the single and multiprocessor implementation are summarized. The paper demonstrates that optimization of memory access on a cache-based computer architecture controls the performance of the computational algorithm. A hybrid MPI/OpenMP approach is proposed for clusters of shared memory machines to further enhance the parallel performance. The method is applied to develop a new LES/DNS code, named LESTool. A preliminary DNS calculation of a fully developed channel flow at a friction Reynolds number Re_tau = 180 has shown good agreement with existing data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nash, T.; Atac, R.; Cook, A.
1989-03-06
The ACPMAPS multiprocessor is a highly cost-effective, local-memory parallel computer with a hypercube or compound hypercube architecture. Communication requires the attention of only the two communicating nodes. The design is aimed at floating point intensive, grid-like problems, particularly those with extreme computing requirements. The processing nodes of the system are single-board array processors, each with a peak power of 20 Mflops, supported by 8 Mbytes of data and 2 Mbytes of instruction memory. The system currently being assembled has a peak power of 5 Gflops. The nodes are based on the Weitek XL chip set. The system delivers performance at approximately $300/Mflop. 8 refs., 4 figs.
Legato: Personal Computer Software for Analyzing Pressure-Sensitive Paint Data
NASA Technical Reports Server (NTRS)
Schairer, Edward T.
2001-01-01
'Legato' is personal computer software for analyzing radiometric pressure-sensitive paint (PSP) data. The software is written in the C programming language and executes under Windows 95/98/NT operating systems. It includes all operations normally required to convert pressure-paint image intensities to normalized pressure distributions mapped to physical coordinates of the test article. The program can analyze data from both single- and bi-luminophore paints and provides for both in situ and a priori paint calibration. In addition, there are functions for determining paint calibration coefficients from calibration-chamber data. The software is designed as a self-contained, interactive research tool that requires as input only the bare minimum of information needed to accomplish each function, e.g., images, model geometry, and paint calibration coefficients (for a priori calibration) or pressure-tap data (for in situ calibration). The program includes functions that can be used to generate needed model geometry files for simple model geometries (e.g., airfoils, trapezoidal wings, rotor blades) based on the model planform and airfoil section. All data files except images are in ASCII format and thus are easily created, read, and edited. The program does not use database files. This simplifies setup but makes the program inappropriate for analyzing massive amounts of data from production wind tunnels. Program output consists of Cartesian plots, false-colored real and virtual images, pressure distributions mapped to the surface of the model, assorted ASCII data files, and a text file of tabulated results. Graphical output is displayed on the computer screen and can be saved as publication-quality (PostScript) files.
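For radiometric PSP of this kind, in-situ calibration typically means fitting a Stern-Volmer-type relation, I_ref/I = A + B(P/P_ref), to pressure-tap data; the linear form and coefficient names below are assumptions for illustration, not Legato's internal code.

```python
# In-situ PSP calibration sketch: fit A, B from taps, then invert per pixel.
import numpy as np

def fit_insitu(I_ratio, P_taps, P_ref):
    """Least-squares fit of I_ref/I = A + B * (P / P_ref) at the taps."""
    X = np.column_stack([np.ones_like(P_taps), P_taps / P_ref])
    (A, B), *_ = np.linalg.lstsq(X, I_ratio, rcond=None)
    return A, B

def intensity_to_pressure(I_ratio, A, B, P_ref):
    """Invert the calibration to map intensity ratios to pressure."""
    return P_ref * (I_ratio - A) / B

# Hypothetical wind-off/wind-on ratios at four taps of known pressure (Pa):
I_ratio = np.array([0.95, 1.00, 1.06, 1.12])
P_taps = np.array([90e3, 96e3, 103e3, 110e3])
A, B = fit_insitu(I_ratio, P_taps, P_ref=101.3e3)
p_pixel = intensity_to_pressure(1.03, A, B, P_ref=101.3e3)
```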
Programming chemistry in DNA-addressable bioreactors.
Fellermann, Harold; Cardelli, Luca
2014-10-06
We present a formal calculus, termed the chemtainer calculus, able to capture the complexity of compartmentalized reaction systems such as populations of possibly nested vesicular compartments. Compartments contain molecular cargo as well as surface markers in the form of DNA single strands. These markers serve as compartment addresses and allow for their targeted transport and fusion, thereby enabling reactions of previously separated chemicals. The overall system organization allows for the set-up of programmable chemistry in microfluidic or other automated environments. We introduce a simple sequential programming language whose instructions are motivated by state-of-the-art microfluidic technology. Our approach integrates electronic control, chemical computing and material production in a unified formal framework that is able to mimic the integrated computational and constructive capabilities of the subcellular matrix. We provide a non-deterministic semantics of our programming language that enables us to analytically derive the computational and constructive power of our machinery. This semantics is used to derive the sets of all constructable chemicals and supermolecular structures that emerge from different underlying instruction sets. Because our proofs are constructive, they can be used to automatically infer control programs for the construction of target structures from a limited set of resource molecules. Finally, we present an example of our framework from the area of oligosaccharide synthesis. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Parallel processors and nonlinear structural dynamics algorithms and software
NASA Technical Reports Server (NTRS)
Belytschko, Ted
1990-01-01
Techniques are discussed for the implementation and improvement of vectorization and concurrency in nonlinear explicit structural finite element codes. In explicit integration methods, the computation of the element internal force vector consumes the bulk of the computer time. The program can be efficiently vectorized by subdividing the elements into blocks and executing all computations in vector mode. The structuring of elements into blocks also provides a convenient way to implement concurrency by creating tasks which can be assigned to available processors for evaluation. The techniques were implemented in a 3-D nonlinear program with one-point quadrature shell elements. Concurrency and vectorization were first implemented in a single time step version of the program. Techniques were developed to minimize processor idle time and to select the optimal vector length. A comparison of run times between the program executed in scalar, serial mode and the fully vectorized code executed concurrently using eight processors shows speed-ups of over 25. Conjugate gradient methods for solving nonlinear algebraic equations are also readily adapted to a parallel environment. A new technique for improving convergence properties of conjugate gradients in nonlinear problems is developed in conjunction with other techniques such as diagonal scaling. A significant reduction in the number of iterations required for convergence is shown for a statically loaded rigid bar suspended by three equally spaced springs.
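The blocking idea generalizes beyond shells; the sketch below shows it for trivial two-node bar elements in NumPy, where every element in a block is evaluated by single array operations and each block could be handed to a different processor (the element type and blocking here are illustrative, not the paper's code).

```python
# Blocked, vectorized internal-force assembly for 1-D bar elements.
import numpy as np

def internal_force_block(x, u, conn, EA):
    """Internal force contribution of one block of elements, vectorized."""
    n1, n2 = conn[:, 0], conn[:, 1]
    L = x[n2] - x[n1]                   # all element lengths at once
    strain = (u[n2] - u[n1]) / L
    f = EA * strain                     # axial force in every element
    fint = np.zeros_like(u)
    np.add.at(fint, n1, -f)             # scatter-add into the global vector
    np.add.at(fint, n2, +f)
    return fint

x = np.linspace(0.0, 1.0, 101)          # 100 elements on a unit bar
u = 0.001 * x                           # uniform-strain displacement field
conn = np.column_stack([np.arange(100), np.arange(1, 101)])
blocks = np.array_split(conn, 4)        # 4 blocks -> 4 concurrent tasks
fint = sum(internal_force_block(x, u, b, EA=1.0) for b in blocks)
```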
Bell, L T O; Gandhi, S
2018-06-01
To directly compare the accuracy and speed of analysis of two commercially available computer-assisted detection (CAD) programs in detecting colorectal polyps. In this retrospective single-centre study, patients who had colorectal polyps identified on computed tomography colonography (CTC) and subsequent lower gastrointestinal endoscopy, were analysed using two commercially available CAD programs (CAD1 and CAD2). Results were compared against endoscopy to ascertain sensitivity and positive predictive value (PPV) for colorectal polyps. Time taken for CAD analysis was also calculated. CAD1 demonstrated a sensitivity of 89.8%, PPV of 17.6% and mean analysis time of 125.8 seconds. CAD2 demonstrated a sensitivity of 75.5%, PPV of 44.0% and mean analysis time of 84.6 seconds. The sensitivity and PPV for colorectal polyps and CAD analysis times can vary widely between current commercially available CAD programs. There is still room for improvement. Generally, there is a trade-off between sensitivity and PPV, and so further developments should aim to optimise both. Information on these factors should be made routinely available, so that an informed choice on their use can be made. This information could also potentially influence the radiologist's use of CAD results. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
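The reported figures follow from the standard definitions, sensitivity = TP/(TP+FN) and PPV = TP/(TP+FP). A quick sketch with hypothetical counts chosen to reproduce CAD1's rates (the study's raw counts are not given in the abstract):

```python
def sensitivity(tp, fn):
    return tp / (tp + fn)   # fraction of endoscopy-proven polyps detected

def ppv(tp, fp):
    return tp / (tp + fp)   # fraction of CAD marks that are true polyps

# e.g. 44 of 49 polyps found, at the cost of 206 false marks:
print(sensitivity(44, 5), ppv(44, 206))   # ~0.898 and ~0.176, as for CAD1
```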
Application of Fast Multipole Methods to the NASA Fast Scattering Code
NASA Technical Reports Server (NTRS)
Dunn, Mark H.; Tinetti, Ana F.
2008-01-01
The NASA Fast Scattering Code (FSC) is a versatile noise prediction program designed to conduct aeroacoustic noise reduction studies. The equivalent source method is used to solve an exterior Helmholtz boundary value problem with an impedance type boundary condition. The solution process in FSC v2.0 requires direct manipulation of a large, dense system of linear equations, limiting the applicability of the code to small scales and/or moderate excitation frequencies. Recent advances in the use of Fast Multipole Methods (FMM) for solving scattering problems, coupled with sparse linear algebra techniques, suggest that a substantial reduction in computer resource utilization over conventional solution approaches can be obtained. Implementation of the single level FMM (SLFMM) and a variant of the Conjugate Gradient Method (CGM) into the FSC is discussed in this paper. The culmination of this effort, FSC v3.0, was used to generate solutions for three configurations of interest. Benchmarking against previously obtained simulations indicate that a twenty-fold reduction in computational memory and up to a four-fold reduction in computer time have been achieved on a single processor.
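The essence of the FMM-plus-Krylov approach is that the solver never needs the dense matrix, only a fast routine applying it to a vector. A generic matrix-free conjugate-gradient sketch follows (plain CG assumes a symmetric positive-definite operator; FSC uses a CGM variant suited to its own system, and its FMM matvec is far more elaborate than the O(n) stand-in below):

```python
# Matrix-free CG: apply_A plays the role of an FMM-accelerated matvec.
import numpy as np

def cg_matfree(apply_A, b, tol=1e-8, maxiter=500):
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Stand-in fast operator: diagonal near field plus a smooth far-field term,
# applied in O(n) without ever forming the n x n matrix.
n = 1000
d = np.linspace(2.0, 3.0, n)
apply_A = lambda x: d * x + 0.001 * x.sum()
x = cg_matfree(apply_A, np.ones(n))
```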
Parallel, stochastic measurement of molecular surface area.
Juba, Derek; Varshney, Amitabh
2008-08-01
Biochemists often wish to compute surface areas of proteins. A variety of algorithms have been developed for this task, but they are designed for traditional single-processor architectures. The current trend in computer hardware is towards increasingly parallel architectures for which these algorithms are not well suited. We describe a parallel, stochastic algorithm for molecular surface area computation that maps well to the emerging multi-core architectures. Our algorithm is also progressive, providing a rough estimate of surface area immediately and refining this estimate as time goes on. Furthermore, the algorithm generates points on the molecular surface which can be used for point-based rendering. We demonstrate a GPU implementation of our algorithm and show that it compares favorably with several existing molecular surface computation programs, giving fast estimates of the molecular surface area with good accuracy.
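The flavor of such a stochastic, progressive estimator can be captured in a few lines: scatter random points on each atom's sphere, keep the fraction not buried inside any other atom, and scale by the sphere areas. The serial sketch below is in the spirit of the algorithm, not the paper's GPU implementation.

```python
# Monte Carlo surface area of a union of spheres.
import numpy as np

def surface_area_estimate(centers, radii, n_samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    area = 0.0
    for i, (c, r) in enumerate(zip(centers, radii)):
        v = rng.normal(size=(n_samples, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)
        pts = c + r * v                          # random points on sphere i
        exposed = np.ones(n_samples, dtype=bool)
        for j, (cj, rj) in enumerate(zip(centers, radii)):
            if j != i:
                exposed &= np.linalg.norm(pts - cj, axis=1) > rj
        area += 4 * np.pi * r**2 * exposed.mean()
    return area

# Two unit spheres with centers 1 apart: exact union area is 6*pi ~ 18.85.
centers = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(surface_area_estimate(centers, np.array([1.0, 1.0])))
```

Raising n_samples refines the estimate progressively, and the per-atom loop is embarrassingly parallel, which is what makes the method map well to multi-core and GPU hardware.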
Linking and integrating computers for maternity care.
Lumb, M; Fawdry, R
1990-12-01
Functionally separate computer systems have been developed for many different areas relevant to maternity care, e.g. maternity data collection, pathology and imaging reports, staff rostering, personnel, accounting, audit, primary care etc. Using land lines, modems and network gateways, many such quite distinct computer programs or databases can be made accessible from a single terminal. If computer systems are to attain their full potential for the improvement of maternity care, there will be a need not only for terminal emulation but also for more complex integration. Major obstacles must be overcome before such integration is widely achieved. Technical and conceptual progress towards overcoming these problems is discussed, with particular reference to the OSI (open systems interconnection) initiative, to the Read clinical classification and to the MUMMIES CBS (Common Basic Specification) Maternity Care Project. The issue of confidentiality is also briefly explored.
NASA Technical Reports Server (NTRS)
Stahara, S. S.; Klenke, D.; Trudinger, B. C.; Spreiter, J. R.
1980-01-01
Computational procedures are developed and applied to the prediction of solar wind interaction with nonmagnetic terrestrial planet atmospheres, with particular emphasis to Venus. The theoretical method is based on a single fluid, steady, dissipationless, magnetohydrodynamic continuum model, and is appropriate for the calculation of axisymmetric, supersonic, super-Alfvenic solar wind flow past terrestrial planets. The procedures, which consist of finite difference codes to determine the gasdynamic properties and a variety of special purpose codes to determine the frozen magnetic field, streamlines, contours, plots, etc. of the flow, are organized into one computational program. Theoretical results based upon these procedures are reported for a wide variety of solar wind conditions and ionopause obstacle shapes. Plasma and magnetic field comparisons in the ionosheath are also provided with actual spacecraft data obtained by the Pioneer Venus Orbiter.
Virtual network computing: cross-platform remote display and collaboration software.
Konerding, D E
1999-04-01
VNC (Virtual Network Computing) is a computer program written to address the problem of cross-platform remote desktop/application display. VNC uses a client/server model in which an image of the desktop of the server is transmitted to the client and displayed. The client collects mouse and keyboard input from the user and transmits it back to the server. The VNC client and server can run on Windows 95/98/NT, MacOS, and Unix (including Linux) operating systems. VNC is multi-user on Unix machines (any number of servers can be run, each unrelated to the primary display of the computer), while it is effectively single-user on Macintosh and Windows machines (only one server can be run, displaying the contents of the primary display of the server). The VNC servers can be configured to allow more than one client to connect at one time, effectively allowing collaboration through the shared desktop. I describe the function of VNC, provide details of installation, describe how it achieves its goal, and evaluate the use of VNC for molecular modelling. VNC is an extremely useful tool for collaboration, instruction, software development, and debugging of graphical programs with remote users.
Static Schedulers for Embedded Real-Time Systems
1989-12-01
Because of the need for efficient scheduling algorithms in large scale real time systems, software engineers put a lot of effort into developing...provide static schedulers for the Embedded Real Time Systems with a single processor using the Ada programming language. The independent nonpreemptable...support the Computer Aided Rapid Prototyping for Embedded Real Time Systems so that we can determine whether the system, as designed, meets the required
Digital video applications in radiologic education: theory, technique, and applications.
Hennessey, J G; Fishman, E K; Ney, D R
1994-05-01
Computer-assisted instruction (CAI) has great potential in medical education. The recent explosion of multimedia platforms provides an environment for the seamless integration of text, images, and sound into a single program. This article discusses the role of digital video in the current educational environment as well as its future potential. An in-depth review of the technical decisions surrounding this new technology is also presented.
ERIC Educational Resources Information Center
Wang, Ning; Stahl, John
2012-01-01
This article discusses the use of the Many-Facets Rasch Model, via the FACETS computer program (Linacre, 2006a), to scale job/practice analysis survey data as well as to combine multiple rating scales into single composite weights representing the tasks' relative importance. Results from the Many-Facets Rasch Model are compared with those…
Guided-Wave TeO2 Acousto-Optic Devices
1991-01-12
In this research program, Guided-Wave TeO2 Acousto-Optic Devices, the properties of surface acoustic waves on tellurium dioxide single crystal...surfaces have been studied for their potential applications in acousto-optic signal processing devices. A personal computer based numerical method has been...interaction with laser beams. Using the acousto-optic probe, the surface acoustic wave velocity and field distribution have been obtained and compared
ERIC Educational Resources Information Center
Polanco, Rodrigo; Calderon, Patricia; Delgado, Francisco
A 3-year follow-up evaluation was conducted of an experimental problem-based learning (PBL) integrated curriculum directed to students of the first 2 years of engineering. The PBL curriculum brought together the contents of physics, mathematics, and computer science courses in a single course in which students worked on real-life problems. In…
ATTACK WARNING: Better Management Required to Resolve NORAD Integration Deficiencies
1989-07-01
protocols, cumbersome integration...different manufacturers' computer systems can communicate with each other. The warning and assessment subsystems...by treating the TW/AA system as a single system subject to program review and oversight by the Defense Acquisition Board. Within this management...restore the unit to operation quickly enough after a power loss to meet NORAD mission requirements. The Air Force intends to have the contractor
Autonomous Learning in Mobile Cognitive Machines
2017-11-25
...the brain being evolved to support its mobility has been raised. In fact, as the project progressed, the researchers discovered that if one of the...deductive, relies on rule-based programming, and can solve complex problems, but faces difficulties in learning and adaptability. The latter
Image Understanding and Intelligent Parallel Systems
1991-05-09
a common user interface for the interactive, graphical manipulation of those histories, and...of up to a factor of 100 over single-workstation implementations. User interfaces to large multiprocessor computers are a difficult issue addressed
Development and implementation of an automated quantitative film digitizer quality control program
NASA Astrophysics Data System (ADS)
Fetterly, Kenneth A.; Avula, Ramesh T. V.; Hangiandreou, Nicholas J.
1999-05-01
A semi-automated, quantitative film digitizer quality control program that is based on the computer analysis of the image data from a single digitized test film was developed. This program includes measurements of the geometric accuracy, optical density performance, signal to noise ratio, and presampled modulation transfer function. The variability of the measurements was less than ±5%. Measurements were made on a group of two clinical and two laboratory laser film digitizers during a trial period of approximately four months. Quality control limits were established based on clinical necessity, vendor specifications and digitizer performance. During the trial period, one of the digitizers failed the performance requirements and was corrected by calibration.
Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.
2011-01-01
Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability. PMID:21695185
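At the heart of any SPH code is a kernel-weighted sum over neighboring particles. The NumPy sketch below shows the density summation rho_i = sum_j m_j W(r_ij, h) with the common cubic-spline kernel; it is generic SPH for illustration, using an all-pairs distance matrix where a real CPU/GPU code would use a neighbor grid.

```python
# SPH density summation with a 3-D cubic-spline kernel (support radius h).
import numpy as np

def cubic_spline_W(r, h):
    q = r / h
    sigma = 8.0 / (np.pi * h**3)                     # 3-D normalization
    inner = 1 - 6*q**2 + 6*q**3                      # 0 <= q <= 0.5
    outer = 2 * (1 - np.clip(q, 0.0, 1.0))**3        # 0.5 < q <= 1
    return sigma * np.where(q <= 1.0, np.where(q <= 0.5, inner, outer), 0.0)

def density(pos, mass, h):
    r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return (mass[None, :] * cubic_spline_W(r, h)).sum(axis=1)

rng = np.random.default_rng(1)
pos = rng.random((500, 3)) * 0.1                     # particles in a 0.1 m box
rho = density(pos, mass=np.full(500, 0.002), h=0.012)
```

Because every particle's sum is independent, this is exactly the kind of work that maps onto one GPU thread per particle, which is where the reported two-orders-of-magnitude speedups come from.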
Best bang for your buck: GPU nodes for GROMACS biomolecular simulations.
Kutzner, Carsten; Páll, Szilárd; Fechner, Martin; Esztermann, Ansgar; de Groot, Bert L; Grubmüller, Helmut
2015-10-05
The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware from commodity workstations to high performance computing clusters. Hardware features are well-exploited with a combination of single instruction multiple data, multithreading, and message passing interface (MPI)-based single program multiple data/multiple program multiple data parallelism while graphics processing units (GPUs) can be used as accelerators to compute interactions off-loaded from the CPU. Here, we evaluate which hardware produces trajectories with GROMACS 4.6 or 5.0 in the most economical way. We have assembled and benchmarked compute nodes with various CPU/GPU combinations to identify optimal compositions in terms of raw trajectory production rate, performance-to-price ratio, energy efficiency, and several other criteria. Although hardware prices are naturally subject to trends and fluctuations, general tendencies are clearly visible. Adding any type of GPU significantly boosts a node's simulation performance. For inexpensive consumer-class GPUs this improvement equally reflects in the performance-to-price ratio. Although memory issues in consumer-class GPUs could pass unnoticed as these cards do not support error checking and correction memory, unreliable GPUs can be sorted out with memory checking tools. Apart from the obvious determinants for cost-efficiency like hardware expenses and raw performance, the energy consumption of a node is a major cost factor. Over the typical hardware lifetime until replacement of a few years, the costs for electrical power and cooling can become larger than the costs of the hardware itself. Taking that into account, nodes with a well-balanced ratio of CPU and consumer-class GPU resources produce the maximum amount of GROMACS trajectory over their lifetime. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
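The lifetime-cost argument in the abstract is simple arithmetic; a sketch with hypothetical node figures (prices, wattages, and throughputs made up for illustration):

```python
# Trajectory produced per euro over a node's lifetime, including energy.
def lifetime_cost(hw_eur, watts, years, eur_per_kwh=0.25, cooling=1.5):
    """Hardware plus electricity; 'cooling' covers cooling overhead."""
    kwh = watts * cooling * 24 * 365 * years / 1000.0
    return hw_eur + kwh * eur_per_kwh

def ns_per_eur(ns_per_day, hw_eur, watts, years=4):
    return ns_per_day * 365 * years / lifetime_cost(hw_eur, watts, years)

cpu_only = ns_per_eur(50.0, 2500.0, 350.0)    # hypothetical CPU-only node
cpu_gpu = ns_per_eur(140.0, 3300.0, 550.0)    # same node + consumer GPU
print(f"{cpu_only:.1f} vs {cpu_gpu:.1f} ns of trajectory per euro")
```

Even though the GPU node draws more power, the higher simulation rate dominates, which is the paper's central observation.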
GRAbB: Selective Assembly of Genomic Regions, a New Niche for Genomic Research.
Brankovics, Balázs; Zhang, Hao; van Diepeningen, Anne D; van der Lee, Theo A J; Waalwijk, Cees; de Hoog, G Sybren
2016-06-01
GRAbB (Genomic Region Assembly by Baiting) is a new program dedicated to assembling specific genomic regions from NGS data. This approach is especially useful when dealing with multi-copy regions such as the mitochondrial genome and the rDNA repeat region (parts of the genome that are often neglected or poorly assembled although they contain interesting information from phylogenetic or epidemiologic perspectives), but single-copy regions can be assembled as well. The program is capable of targeting multiple regions within a single run. Furthermore, GRAbB can be used to extract specific loci from NGS data based on homology, such as sequences that are used for barcoding. To make the assembly specific, a known part of the region, such as the sequence of a PCR amplicon or a homologous sequence from a related species, must be specified. By assembling only the region of interest, the assembly process is computationally much less demanding and may lead to assemblies of better quality. In this study the different applications and functionalities of the program are demonstrated, such as exhaustive assembly (rDNA region and mitochondrial genome), extracting homologous regions or genes (IGS, RPB1, RPB2 and TEF1a), and extracting multiple regions within a single run. The program is also compared with MITObim, which is meant for the exhaustive assembly of a single target based on a similar query sequence. GRAbB is shown to be more efficient than MITObim in terms of speed, memory and disk usage. The other functionalities (handling multiple targets simultaneously and extracting homologous regions) of the new program are not matched by other programs. The program is available with explanatory documentation at https://github.com/b-brankovics/grabb. GRAbB has been tested on Ubuntu (12.04 and 14.04), Fedora (23), CentOS (7.1.1503) and Mac OS X (10.7). Furthermore, GRAbB is available as a docker repository: brankovics/grabb (https://hub.docker.com/r/brankovics/grabb/).
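The baiting loop itself is easy to picture: recruit reads that share k-mers with the bait, assemble them, and use the assembly as the next bait until no new reads are recruited. The toy sketch below only illustrates that iteration; GRAbB itself delegates read matching and assembly to external tools, and the string concatenation standing in for assembly here is not meaningful biologically.

```python
# Toy iterative read baiting by shared k-mers.
def kmers(seq, k=21):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def bait_reads(bait, reads, k=21, max_rounds=5):
    selected = set()
    for _ in range(max_rounds):
        bait_k = kmers(bait, k)
        new = {r for r in reads if r not in selected and kmers(r, k) & bait_k}
        if not new:
            break                           # converged: nothing new recruited
        selected |= new
        bait = bait + "".join(sorted(new))  # stand-in for a real assembly step
    return selected

reads = ["ACGT" * 10, "TTTT" * 10, ("ACGT" * 5) + ("GGCA" * 5)]
hits = bait_reads("ACGT" * 6, reads)        # recruits the ACGT-containing reads
```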
New computing systems and their impact on structural analysis and design
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1989-01-01
A review is given of the recent advances in computer technology that are likely to impact structural analysis and design. The computational needs for future structures technology are described. The characteristics of new and projected computing systems are summarized. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed, and a novel partitioning strategy is outlined for maximizing the degree of parallelism. The strategy is designed for computers with a shared memory and a small number of powerful processors (or a small number of clusters of medium-range processors). It is based on approximating the response of the structure by a combination of symmetric and antisymmetric response vectors, each obtained using a fraction of the degrees of freedom of the original finite element model. The strategy was implemented on the CRAY X-MP/4 and the Alliant FX/8 computers. For nonlinear dynamic problems on the CRAY X-MP with four CPUs, it resulted in an order of magnitude reduction in total analysis time, compared with the direct analysis on a single-CPU CRAY X-MP machine.
Single top quark photoproduction at the LHC
NASA Astrophysics Data System (ADS)
de Favereau de Jeneret, J.; Ovyn, S.
2008-08-01
High-energy photon-proton interactions at the LHC offer interesting possibilities for the study of the electroweak sector up to the TeV scale and for searches for processes beyond the Standard Model. An analysis of W-associated single top photoproduction has been performed using the adapted MadGraph/MadEvent [F. Maltoni and T. Stelzer, JHEP 0302 (2003) 027; T. Stelzer and W.F. Long, Comput. Phys. Commun. 81 (1994) 357-371] and CalcHEP [A. Pukhov, Nucl. Inst. Meth. A 502 (2003) 596-598] programs interfaced to the Pythia [T. Sjöstrand et al., Comput. Phys. Commun. 135 (2001) 238] generator and a fast detector simulation program. Event selection and suppression of the main backgrounds have been studied. A sensitivity to |Vtb| comparable to that obtained using standard single top production in pp collisions is achieved already for 10 fb^-1 of integrated luminosity. Photoproduction at the LHC also provides an attractive framework for observation of anomalous single top production due to Flavour-Changing Neutral Currents. The sensitivity to the anomalous coupling parameters k_tuγ and k_tcγ is presented and indicates that stronger limits can be placed on anomalous couplings after 1 fb^-1.
Creating CAD designs and performing their subsequent analysis using opensource solutions in Python
NASA Astrophysics Data System (ADS)
Iakushkin, Oleg O.; Sedova, Olga S.
2018-01-01
The paper discusses the concept of a system that encapsulates the transition from geometry building to strength tests. The solution we propose views the engineer as a programmer capable of coding the procedure for working with the model, i.e., outlining the necessary transformations and creating cases for boundary conditions. We propose a prototype of such a system. In our work we used: the Python programming language to create the program; the Jupyter framework to create a single workspace visualization; the pythonOCC library to implement CAD; the FEniCS library to implement FEM; and the GMSH and VTK utilities. The prototype is launched on a platform that is a dynamically expandable multi-tenant cloud service providing users with all computing resources on demand. However, the system may be deployed locally for prototyping or for work that does not involve resource-intensive computing. To make this possible, we used containerization, isolating the system in a Docker container.
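The "engineer as programmer" workflow can be tasted with legacy FEniCS (the dolfin module) alone: mesh, boundary-condition case, and solve are all a few scripted lines. The sketch below substitutes a built-in unit-square mesh and a Poisson problem for the pythonOCC geometry and GMSH meshing of the actual prototype, so it stays self-contained (assuming a legacy FEniCS/dolfin installation).

```python
# Scripted model -> boundary-condition case -> solve, in legacy FEniCS.
from dolfin import (Constant, DirichletBC, Function, FunctionSpace,
                    TestFunction, TrialFunction, UnitSquareMesh,
                    dot, dx, grad, solve)

mesh = UnitSquareMesh(32, 32)             # stand-in for a CAD-derived mesh
V = FunctionSpace(mesh, "Lagrange", 1)

u, v = TrialFunction(V), TestFunction(V)
bc = DirichletBC(V, Constant(0.0), "on_boundary")  # one boundary-condition case

a = dot(grad(u), grad(v)) * dx            # Poisson problem standing in for
L = Constant(1.0) * v * dx                # the real strength analysis

uh = Function(V)
solve(a == L, uh, bc)
print(uh.vector().max())                  # peak of the computed field
```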
Design of the aerosol sampling manifold for the Southern Great Plains site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leifer, R.; Knuth, R.H.; Guggenheim, S.F.
1995-04-01
To meet the needs of the ARM program, the Environmental Measurements Laboratory (EML) has the responsibility to establish a surface aerosol measurements program at the Southern Great Plains (SGP) site in Lamont, OK. At the present time, EML has scheduled installation of five instruments at SGP: a single wavelength nephelometer, an optical particle counter (OPC), a condensation particle counter (CPC), an optical absorption monitor (OAM), and an ozone monitor. ARM's operating protocol requires that all the observational data be placed online and sent to the main computer facility in real time. EML currently maintains a computer file containing back trajectory (BT) analyses for the SGP site. These trajectories are used to characterize air mass types as they pass over the site. EML is continuing to calculate and store the resulting trajectory analyses for future use by the ARM science team.
NASA Technical Reports Server (NTRS)
Stagliano, T. R.; Spilker, R. L.; Witmer, E. A.
1976-01-01
A user-oriented computer program, CIVM-JET 4B, is described for predicting the large-deflection elastic-plastic structural responses of fragment-impacted single-layer (a) partial-ring fragment containment or deflector structures or (b) complete-ring fragment containment structures. These two types of structures may be either free or supported in various ways. Supports accommodated include: (1) point supports such as pinned-fixed, ideally-clamped, or supported by a structural branch simulating mounting-bracket structure and (2) elastic foundation support distributed over selected regions of the structure. The initial geometry of each partial or complete ring may be circular or arbitrarily curved; uniform or variable thicknesses of the structure are accommodated. The structural material is assumed to be initially isotropic; strain hardening and strain rate effects are taken into account.
Prediction of elemental creep. [steady state and cyclic data from regression analysis]
NASA Technical Reports Server (NTRS)
Davis, J. W.; Rummler, D. R.
1975-01-01
Cyclic and steady-state creep tests were performed to provide data which were used to develop predictive equations. These equations, describing creep as a function of stress, temperature, and time, were developed through the use of a least-squares regression analysis computer program for both the steady-state and cyclic data sets. Comparison of the data from the two types of tests revealed that there was no significant difference between the cyclic and steady-state creep strains for the L-605 sheet under the experimental conditions investigated (for the same total time at load). Attempts to develop a single linear equation describing the combined steady-state and cyclic creep data resulted in standard errors of estimate higher than those obtained for the individual data sets. A proposed approach to predicting elemental creep in metals uses the cyclic creep equation and a computer program that applies strain- and time-hardening theories of creep accumulation.
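The regression step described above amounts to an ordinary least-squares fit once a functional form is chosen. The sketch below assumes a log-linear form, log eps = b0 + b1 log sigma + b2/T + b3 log t, which is a common choice for creep correlations but not necessarily the paper's exact equation; the data are synthetic stand-ins for the L-605 test matrix.

```python
# Least-squares fit of creep strain vs stress, temperature, and time.
import numpy as np

def fit_creep(eps, sigma, T, t):
    X = np.column_stack([np.ones_like(eps), np.log(sigma), 1.0 / T, np.log(t)])
    beta, *_ = np.linalg.lstsq(X, np.log(eps), rcond=None)
    return beta

def predict_creep(beta, sigma, T, t):
    return np.exp(beta[0] + beta[1]*np.log(sigma) + beta[2]/T + beta[3]*np.log(t))

rng = np.random.default_rng(2)
sigma = rng.uniform(50, 200, 40)                   # stress, MPa
T = rng.uniform(1100, 1350, 40)                    # temperature, K
t = rng.uniform(1, 1000, 40)                       # time at load, h
eps = 1e-6 * sigma**1.8 * np.exp(-9000.0 / T) * t**0.5 * rng.lognormal(0, 0.1, 40)
beta = fit_creep(eps, sigma, T, t)                 # recovers ~[.., 1.8, -9000, 0.5]
```

The higher standard error reported for the combined fit corresponds to comparing the residuals of one pooled fit against those of the two separate fits.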
V/STOLAND digital avionics system for XV-15 tilt rotor
NASA Technical Reports Server (NTRS)
Liden, S.
1980-01-01
A digital flight control system for the tilt rotor research aircraft provides sophisticated navigation, guidance, control, display and data acquisition capabilities for performing terminal area navigation, guidance and control research. All functions of the XV-15 V/STOLAND system were demonstrated on the NASA-ARC S-19 simulation facility under a comprehensive dynamic acceptance test. The most noteworthy accomplishments of the system are: (1) automatic configuration control of a tilt-rotor aircraft over the total operating range; (2) total hands-off landing to touchdown on various selectable straight-in glide slopes and on a flight path that includes a two-revolution helix; (3) automatic guidance along a programmed three-dimensional reference flight path; (4) navigation data for the automatic guidance computed on board, based on VOR/DME, TACAN, or MLS navaid data; and (5) integration of a large set of functions in a single computer, utilizing 16k words of storage for programs and data.
Forman, Bruce H.; Eccles, Randy; Piggins, Judith; Raila, Wayne; Estey, Greg; Barnett, G. Octo
1990-01-01
We have developed a visually oriented, computer-controlled learning environment designed for use by students of gross anatomy. The goals of this module are to reinforce the concepts of organ relationships and topography by using computed axial tomographic (CAT) images accessed from a videodisc integrated with color graphics and to introduce students to cross-sectional radiographic anatomy. We chose to build the program around CAT scan images because they not only provide excellent structural detail but also offer an anatomic orientation (transverse) that complements that used in the dissection laboratory (basically a layer-by-layer, anterior-to-posterior, or coronal approach). Our system, built using a Microsoft Windows-386 based authoring environment which we designed and implemented, integrates text, video images, and graphics into a single screen display. The program allows both user browsing of information, facilitated by hypertext links, and didactic sessions including mini-quizzes for self-assessment.
Corona performance of a compact 230-kV line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chartier, V.L.; Blair, D.E.; Easley, M.D.
Permitting requirements and the acquisition of new rights-of-way for transmission facilities have in recent years become increasingly difficult for most utilities, including Puget Sound Power and Light Company. In order to maintain a high degree of reliability of service while being responsive to public concerns regarding the siting of high voltage (HV) transmission facilities, Puget Power has found it necessary to rely more heavily upon the use of compact lines in franchise corridors. Compaction does, however, precipitate increased levels of audible noise (AN) and radio and TV interference (RI and TVI) due to corona on the conductors and insulator assemblies. Puget Power relies upon the Bonneville Power Administration (BPA) Corona and Field Effects computer program to calculate AN and RI for new lines. Since there was some question of the program's ability to accurately represent quiet 230-kV compact designs, a joint project was undertaken with BPA to verify the program's algorithms. Long-term measurements made on an operating Puget Power 230-kV compact line confirmed the accuracy of BPA's AN model; however, the RI measurements were much lower than predicted by the BPA and other programs. This paper also describes how the BPA computer program can be used to calculate the voltage needed to expose insulator assemblies to the correct electric field in single test setups in HV laboratories.
LENMODEL: A forward model for calculating length distributions and fission-track ages in apatite
NASA Astrophysics Data System (ADS)
Crowley, Kevin D.
1993-05-01
The program LENMODEL is a forward model for annealing of fission tracks in apatite. It provides estimates of the track-length distribution, fission-track age, and areal track density for any user-supplied thermal history. The program approximates the thermal history, in which temperature is represented as a continuous function of time, by a series of isothermal steps of various durations. Equations describing the production of tracks as a function of time and annealing of tracks as a function of time and temperature are solved for each step. The step calculations are summed to obtain estimates for the entire thermal history. Computational efficiency is maximized by performing the step calculations backwards in model time. The program incorporates an intuitive and easy-to-use graphical interface. Thermal history is input to the program using a mouse. Model options are specified by selecting context-sensitive commands from a bar menu. The program allows for considerable selection of equations and parameters used in the calculations. The program was written for PC-compatible computers running DOS 3.0 and above (and Windows 3.0 or above) with VGA or SVGA graphics and a Microsoft-compatible mouse. Single copies of a runtime version of the program are available from the author by written request as explained in the last section of this paper.
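The isothermal-step scheme is easy to sketch: each step both creates a new cohort of tracks and anneals every cohort already present. The toy below uses a first-order annealing law with arbitrary constants as a placeholder (LENMODEL offers a choice of annealing equations) and runs the loop forward for clarity, whereas the real program orders the step calculations backwards in model time for efficiency.

```python
# Isothermal-step forward model: per-step track production plus annealing.
import numpy as np

KB = 8.617e-5                                 # Boltzmann constant, eV/K

def track_lengths(times_Ma, temps_K, A=4e12, E=1.0):
    """Relative present-day length of each step's track cohort.

    Placeholder law dr/dt = -k(T) * r with k = A * exp(-E / (KB*T));
    A (per Ma) and E (eV) are arbitrary, not a fitted apatite model."""
    dt = times_Ma[:-1] - times_Ma[1:]         # step durations (times decrease)
    r = []
    for i, dti in enumerate(dt):
        r.append(1.0)                         # cohort produced in step i
        k = A * np.exp(-E / (KB * temps_K[i]))
        r = [ri * np.exp(-k * dti) for ri in r]   # anneal all existing cohorts
    return np.array(r)

times = np.linspace(100.0, 0.0, 51)           # 100 Ma to present
temps = np.linspace(390.0, 300.0, 51)         # steady cooling, K
lengths = track_lengths(times, temps)         # oldest cohorts are shortest
```

Binning the surviving cohort lengths then gives the length distribution, and the surviving track fraction gives the age and areal density.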
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goltz, G.; Kaiser, L.M.; Weiner, H.
A major mission of the U.S. Coast Guard is the task of providing and maintaining Maritime Aids to Navigation. These aids are located on and near the coastline and inland waters of the United States and its possessions. A computer program, Design Synthesis and Performance Analysis (DSPA), has been developed by the Jet Propulsion Laboratory to demonstrate the feasibility of low-cost solar array/battery power systems for use on flashing lamp buoys. To provide detailed, realistic temperature, wind, and solar insolation data for analysis of the flashing lamp buoy power systems, two DSPA support computer program sets, MERGE and STAT, were developed. A general description of these two packages is presented in this program summary report. The MERGE program set will enable the Coast Guard to combine temperature and wind velocity data (NOAA TDF-14 tapes) with solar insolation data (NOAA DECK-280 tapes) onto a single sequential MERGE file containing up to 12 years of hourly observations. This MERGE file can then be used as direct input to the DSPA program. The STAT program set will enable a statistical analysis to be performed on the MERGE data and produce high, low, or mean profiles of the data and/or perform a worst case analysis. The STAT output file consists of a one-year set of hourly statistical weather data which can be used as input to the DSPA program.
Comparing errors in ED computer-assisted vs conventional pediatric drug dosing and administration.
Yamamoto, Loren; Kanemori, Joan
2010-06-01
Compared to fixed-dose single-vial drug administration in adults, pediatric drug dosing and administration requires a series of calculations, all of which are potentially error prone. The purpose of this study is to compare error rates and task completion times for common pediatric medication scenarios using computer program assistance vs conventional methods. Two versions of a 4-part paper-based test were developed. Each part consisted of a set of medication administration and/or dosing tasks. Emergency department and pediatric intensive care unit nurse volunteers completed these tasks using both methods (with the sequence assigned to start with either the conventional or the computer-assisted approach). Completion times, errors, and the reason for each error were recorded. Thirty-eight nurses completed the study. Summing the completion of all 4 parts, the mean conventional total time was 1243 seconds vs the mean computer program total time of 879 seconds (P < .001). The conventional manual method had a mean of 1.8 errors vs the computer program with a mean of 0.7 errors (P < .001). Of the 97 total errors, 36 were due to misreading the drug concentration on the label, 34 were due to calculation errors, and 8 were due to misplaced decimals. Of the 36 label interpretation errors, 18 (50%) occurred with digoxin or insulin. Computerized assistance reduced errors and the time required for drug administration calculations. A pattern of errors emerged, showing that reading/interpreting certain drug labels was more error prone. Optimizing the layout of drug labels could reduce the error rate for error-prone labels. Copyright (c) 2010 Elsevier Inc. All rights reserved.
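The calculation chain being tested is short but unforgiving; the sketch below shows it with the kind of guard rail a computer-assisted tool can add (the drug values are hypothetical):

```python
# Weight-based dose -> volume, with a maximum-dose sanity check.
def dose_and_volume(weight_kg, dose_mg_per_kg, conc_mg_per_ml, max_dose_mg=None):
    dose_mg = weight_kg * dose_mg_per_kg
    if max_dose_mg is not None and dose_mg > max_dose_mg:
        raise ValueError(f"dose {dose_mg} mg exceeds maximum {max_dose_mg} mg")
    volume_ml = dose_mg / conc_mg_per_ml   # the step where label misreads bite
    return round(dose_mg, 2), round(volume_ml, 2)

# 18 kg child, 0.25 mg/kg of a drug supplied at 10 mg/mL:
dose, vol = dose_and_volume(18, 0.25, 10.0, max_dose_mg=10.0)
print(dose, "mg ->", vol, "mL")            # 4.5 mg -> 0.45 mL
```

Misreading the concentration (say 1 mg/mL for 10 mg/mL) shifts the volume by a factor of ten, exactly the decimal-type error the study counts.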
Parallel Wavefront Analysis for a 4D Interferometer
NASA Technical Reports Server (NTRS)
Rao, Shanti R.
2011-01-01
This software provides a programming interface for automating data collection with a PhaseCam interferometer from 4D Technology, and for distributing the image-processing algorithm across a cluster of general-purpose computers. Multiple instances of 4Sight (4D Technology's proprietary software) run on a networked cluster of computers. Each connects to a single server (the controller) and waits for instructions. The controller directs the interferometer to capture several images, then assigns each image to a different computer for processing. When the image processing is finished, the server directs one of the computers to collate and combine the processed images, saving the resulting measurement in a file on disk. The available software captures approximately 100 images and analyzes them immediately. This software separates the capture and analysis processes, so that analysis can be done at a different time and faster by running the algorithm in parallel across several processors. The PhaseCam family of interferometers can measure an optical system in milliseconds, but it takes many seconds to process the data so that it is usable. In characterizing an adaptive optics system, like the next generation of astronomical observatories, thousands of measurements are required, and the processing time quickly becomes excessive. A programming interface distributes data processing for a PhaseCam interferometer across a Windows computing cluster. A scriptable controller program coordinates data acquisition from the interferometer, storage on networked hard disks, and parallel processing. Idle time of the interferometer is minimized. This architecture is implemented in Python and JavaScript, and may be altered to fit a customer's needs.
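The controller/worker split described above is the classic scatter-process-gather pattern; in miniature, with Python's multiprocessing standing in for the networked Windows cluster and a stand-in per-image routine (the real analysis reduces interferometric phase data):

```python
# Scatter ~100 interferograms to workers, gather and collate the results.
import numpy as np
from multiprocessing import Pool

def process_frame(frame):
    """Stand-in for the per-image wavefront analysis."""
    return float(np.angle(np.fft.fft2(frame)).mean())

def collate(partials):
    return float(np.mean(partials))

if __name__ == "__main__":
    frames = [np.random.rand(64, 64) for _ in range(100)]
    with Pool(4) as pool:                # workers, one per core or machine
        partials = pool.map(process_frame, frames)   # scatter + process
    measurement = collate(partials)                  # gather on the controller
```

Decoupling capture from analysis this way is what keeps the interferometer busy while earlier frames are still being reduced.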
Quantum Entanglement Molecular Absorption Spectrum Simulator
NASA Technical Reports Server (NTRS)
Nguyen, Quang-Viet; Kojima, Jun
2006-01-01
Quantum Entanglement Molecular Absorption Spectrum Simulator (QE-MASS) is a computer program for simulating two-photon molecular-absorption spectroscopy using quantum-entangled photons. More specifically, QE-MASS simulates the molecular absorption of two quantum-entangled photons generated by the spontaneous parametric down-conversion (SPDC) of a fixed-frequency photon from a laser. The two-photon absorption process is modeled via a combination of rovibrational and electronic single-photon transitions, using a wave-function formalism. A two-photon absorption cross section as a function of the entanglement delay time between the two photons is computed, then subjected to a fast Fourier transform to produce an energy spectrum. The program then detects peaks in the Fourier spectrum and displays the energy levels of very short-lived intermediate quantum states (or virtual states) of the molecule. Such virtual states were previously accessible only with ultra-fast (femtosecond) laser systems. However, with the use of a single-frequency continuous-wave laser to produce SPDC photons and the QE-MASS program, these short-lived molecular states can now be studied using much simpler laser systems. QE-MASS can also show the dependence of the Fourier spectrum on the tuning range of the entanglement time set by any externally introduced optical-path delay. QE-MASS can be extended to any molecule for which an appropriate spectroscopic database is available. It is a means of performing an a priori parametric analysis of entangled-photon spectroscopy for development and implementation of emerging quantum-spectroscopic sensing techniques. QE-MASS is currently implemented using the Mathcad software package.
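The signal path QE-MASS implements (delay scan, FFT, peak detection) can be mimicked on synthetic data; the two oscillation frequencies below are arbitrary stand-ins for virtual-state beats, not spectroscopic values:

```python
# Synthetic two-photon signal vs entanglement delay -> FFT -> peak frequencies.
import numpy as np

dt = 1e-15                                   # delay step: 1 fs
tau = np.arange(2048) * dt                   # entanglement-delay scan
f1, f2 = 30e12, 75e12                        # stand-in beat frequencies, Hz
signal = 1 + 0.5*np.cos(2*np.pi*f1*tau) + 0.3*np.cos(2*np.pi*f2*tau)

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(tau.size, d=dt)

# Naive peak picking: local maxima above a fraction of the largest peak.
peaks = [i for i in range(1, freqs.size - 1)
         if spectrum[i] > spectrum[i - 1] and spectrum[i] > spectrum[i + 1]
         and spectrum[i] > 0.1 * spectrum.max()]
print([f"{freqs[i] / 1e12:.1f} THz" for i in peaks])   # ~30 and ~75 THz
```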
In silico FRET from simulated dye dynamics
NASA Astrophysics Data System (ADS)
Hoefling, Martin; Grubmüller, Helmut
2013-03-01
Single molecule fluorescence resonance energy transfer (smFRET) experiments probe molecular distances on the nanometer scale. In such experiments, distances are recorded from FRET transfer efficiencies via the Förster formula, E=1/(1+(). The energy transfer however also depends on the mutual orientation of the two dyes used as distance reporter. Since this information is typically inaccessible in FRET experiments, one has to rely on approximations, which reduce the accuracy of these distance measurements. A common approximation is an isotropic and uncorrelated dye orientation distribution. To assess the impact of such approximations, we present the algorithms and implementation of a computational toolkit for the simulation of smFRET on the basis of molecular dynamics (MD) trajectory ensembles. In this study, the dye orientation dynamics, which are used to determine dynamic FRET efficiencies, are extracted from MD simulations. In a subsequent step, photons and bursts are generated using a Monte Carlo algorithm. The application of the developed toolkit on a poly-proline system demonstrated good agreement between smFRET simulations and experimental results and therefore confirms our computational method. Furthermore, it enabled the identification of the structural basis of measured heterogeneity. The presented computational toolkit is written in Python, available as open-source, applicable to arbitrary systems and can easily be extended and adapted to further problems. Catalogue identifier: AENV_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENV_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPLv3, the bundled SIMD friendly Mersenne twister implementation [1] is provided under the SFMT-License. No. of lines in distributed program, including test data, etc.: 317880 No. of bytes in distributed program, including test data, etc.: 54774217 Distribution format: tar.gz Programming language: Python, Cython, C (ANSI C99). Computer: Any (see memory requirements). Operating system: Any OS with CPython distribution (e.g. Linux, MacOSX, Windows). Has the code been vectorised or parallelized?: Yes, in Ref. [2], 4 CPU cores were used. RAM: About 700MB per process for the simulation setup in Ref. [2]. Classification: 16.1, 16.7, 23. External routines: Calculation of Rκ2-trajectories from GROMACS [3] MD trajectories requires the GromPy Python module described in Ref. [4] or a GROMACS 4.6 installation. The md2fret program uses a standard Python interpreter (CPython) v2.6+ and < v3.0 as well as the NumPy module. The analysis examples require the Matplotlib Python module. Nature of problem: Simulation and interpretation of single molecule FRET experiments. Solution method: Combination of force-field based molecular dynamics (MD) simulating the dye dynamics and Monte Carlo sampling to obtain photon statistics of FRET kinetics. Additional comments: !!!!! The distribution file for this program is over 50 Mbytes and therefore is not delivered directly when download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. !!!!! Running time: A single run in Ref. [2] takes about 10 min on a Quad Core Intel Xeon CPU W3520 2.67GHz with 6GB physical RAM References: [1] M. Saito, M. Matsumoto, SIMD-oriented fast Mersenne twister: a 128-bit pseudorandom number generator, in: A. Keller, S. Heinrich, H. 
Niederreiter (Eds.), Monte Carlo and Quasi-Monte Carlo Methods 2006, Springer, Berlin, Heidelberg, 2008, pp. 607-622. [2] M. Hoefling, N. Lima, D. Hänni, B. Schuler, C. A. M. Seidel, H. Grubmüller, Structural heterogeneity and quantitative FRET efficiency distributions of polyprolines through a hybrid atomistic simulation and Monte Carlo approach, PLoS ONE 6 (5) (2011) e19791. [3] D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark, H. J. C. Berendsen, GROMACS: fast, flexible, and free, J. Comput. Chem. 26 (16) (2005) 1701-1718. [4] R. Pool, A. Feenstra, M. Hoefling, R. Schulz, J. C. Smith, J. Heringa, Enabling grand-canonical Monte Carlo: Extending the flexibility of GROMACS through the GromPy Python interface module, J. Comput. Chem. 33 (12) (2012) 1207-1214.
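To make the role of the orientation factor concrete, here is a minimal Python sketch that turns hypothetical R(t) and κ²(t) trajectories into a time-averaged transfer efficiency. The Förster radius value and the Gaussian/uniform trajectories are invented stand-ins, not toolkit output; the κ²/(2/3) factor rescales a tabulated R0 that assumes the isotropic average κ² = 2/3.

```python
import numpy as np

def fret_efficiency(r_nm, kappa2, r0_nm=5.4):
    """Instantaneous FRET efficiency from dye distance and orientation factor.
    r0_nm is a Foerster radius tabulated for kappa^2 = 2/3 (assumed value);
    the kappa2/(2/3) term rescales the transfer rate accordingly."""
    rate = (r0_nm / r_nm) ** 6 * (kappa2 / (2.0 / 3.0))  # k_T in units of 1/tau_D
    return rate / (1.0 + rate)

# toy trajectories standing in for values extracted from an MD ensemble
rng = np.random.default_rng(0)
r = rng.normal(5.0, 0.3, size=1000)       # distances in nm
k2 = rng.uniform(0.0, 4.0, size=1000)     # kappa^2 lies in [0, 4]
print("time-averaged efficiency:", fret_efficiency(r, k2).mean().round(3))
```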
MPT Prediction of Aircraft-Engine Fan Noise
NASA Technical Reports Server (NTRS)
Connell, Stuart D.
2004-01-01
A collection of computer programs has been developed that implements a procedure for predicting multiple-pure-tone (MPT) noise generated by the fan blades of an aircraft engine (e.g., a turbofan engine). MPT noise arises when the fan is operating with a supersonic relative tip Mach number. Under this flow condition, there is a strong upstream-running shock. The strength and position of this shock are very sensitive to blade geometry variations. For a fan where all the blades are identical, the primary tone observed upstream of the fan will be at the blade passing frequency. If there are small variations in geometry between blades, tones below the blade passing frequency arise: these are the MPTs. Stagger-angle differences as small as 0.1° can give rise to significant MPT noise. It is also noted that MPT noise is more pronounced when the fan is operating in an unstarted mode. Computational results using a three-dimensional flow solver to compute the complete annulus flow with non-uniform fans indicate that MPT noise can be estimated in a relatively simple way: once the effect of a typical geometry variation of one blade in an otherwise uniform blade row is known, the effect of all the blades being different can be quickly computed via superposition. Two computer programs that were developed as part of this work are used in conjunction with a user's computational fluid dynamics (CFD) code to predict MPT spectra for a fan with a specified set of geometric variations: (1) The first program, ROTBLD, reads the user's CFD solution files for a single blade passage via an API (Application Program Interface). There are options to replicate and perturb the geometry with typical variations in stagger, camber, thickness, and pitch. The multi-passage CFD solution files are then written in the user's file format using the API. (2) The second program, SUPERPOSE, requires two input files: the first is the circumferential upstream pressure distribution extracted from the CFD solution on the multi-passage mesh; the second defines the geometry variations of each blade in a complete fan. Superposition is used to predict the spectra resulting from the geometric variations.
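The superposition step lends itself to a compact illustration. In the hedged Python sketch below, p_uniform and dp_one are fabricated stand-ins for the two CFD-derived inputs (the uniform-fan pressure signature and the pressure change per one-degree stagger error of a single blade); the loop rotates and scales the one-blade effect for each blade, and a circumferential FFT then exposes engine orders below the blade passing frequency, i.e. the MPTs.

```python
import numpy as np

B, N = 22, 4096                         # blade count, circumferential samples
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
p_uniform = np.cos(B * theta)           # stand-in: pure blade-passing signature
dp_one = 0.05 * np.exp(-((theta - np.pi) / 0.1) ** 2)  # stand-in: one-blade effect per degree

stagger_err = np.random.default_rng(1).normal(0.0, 0.1, B)  # degrees, per blade
p_total = p_uniform.copy()
for b in range(B):
    shift = int(round(b * N / B))       # rotate the one-blade effect to blade b's slot
    p_total += stagger_err[b] * np.roll(dp_one, shift)      # linear superposition

orders = np.abs(np.fft.rfft(p_total)) / N
print("BPF (order 22):", orders[B].round(4), "order 1 (MPT):", orders[1].round(4))
```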
Mohammed, Emad A; Far, Behrouz H; Naugler, Christopher
2014-01-01
The emergence of massive datasets in a clinical setting presents both challenges and opportunities in data storage and analysis. This so-called "big data" challenges traditional analytic tools and will increasingly require novel solutions adapted from other fields. Advances in information and communication technology present the most viable solutions to big data analysis in terms of efficiency and scalability. It is vital that big data solutions be multithreaded and that data access approaches be precisely tailored to large volumes of semi-structured/unstructured data. The MapReduce programming framework uses two tasks common in functional programming: Map and Reduce. MapReduce is a new parallel processing framework and Hadoop is its open-source implementation on a single computing node or on clusters. Compared with existing parallel processing paradigms (e.g. grid computing and graphical processing unit (GPU)), MapReduce and Hadoop have two advantages: 1) fault-tolerant storage resulting in reliable data processing by replicating the computing tasks and cloning the data chunks on different computing nodes across the computing cluster; 2) high-throughput data processing via a batch processing framework and the Hadoop distributed file system (HDFS). Data are stored in the HDFS and made available to the slave nodes for computation. In this paper, we review the existing applications of the MapReduce programming framework and its implementation platform Hadoop in clinical big data and related medical health informatics fields. The usage of MapReduce and Hadoop on a distributed system represents a significant advance in clinical big data processing and utilization, and opens up new opportunities in the emerging era of big data analytics. The objective of this paper is to summarize the state-of-the-art efforts in clinical big data analytics and highlight what might be needed to enhance the outcomes of clinical big data analytics tools. This paper is concluded by summarizing the potential usage of the MapReduce programming framework and Hadoop platform to process huge volumes of clinical data in medical health informatics related fields. PMID:25383096
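As a toy illustration of the two tasks named above, the following pure-Python sketch imitates the Map, shuffle, and Reduce phases on a hypothetical clinical example (counting diagnosis codes across records); a real Hadoop job would express the same two functions through the framework's Java or streaming APIs, with HDFS providing the storage layer.

```python
from collections import defaultdict

def map_phase(record):
    # Map: emit (key, value) pairs; here, one count per diagnosis code
    for code in record["codes"]:
        yield code, 1

def reduce_phase(key, values):
    # Reduce: aggregate all values that share a key
    return key, sum(values)

records = [{"codes": ["E11", "I10"]}, {"codes": ["I10"]}]   # hypothetical data
shuffled = defaultdict(list)
for rec in records:
    for k, v in map_phase(rec):
        shuffled[k].append(v)          # the framework's shuffle/sort step
print([reduce_phase(k, vs) for k, vs in shuffled.items()])
```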
Anderson, Jeri L.; Apostoaei, A. Iulian; Thomas, Brian A.
2015-01-01
The National Institute for Occupational Safety and Health (NIOSH) is currently studying mortality in a cohort of 6409 workers at a former uranium processing facility. As part of this study, over 220 000 urine samples were used to reconstruct organ doses due to internal exposure to uranium. Most of the available computational programs designed for analysis of bioassay data handle a single case at a time and thus require a significant outlay of time and resources for the exposure assessment of a large cohort. NIOSH is currently supporting the development of a computer program, InDEP (Internal Dose Evaluation Program), to facilitate internal radiation exposure assessment as part of epidemiological studies of both uranium- and plutonium-exposed cohorts. A novel feature of InDEP is its batch-processing capability, which allows for the evaluation of multiple study subjects simultaneously. InDEP analyses bioassay data and derives intakes and organ doses with uncertainty estimates using least-squares regression techniques or Bayes' theorem as applied to internal dosimetry (the Bayesian method). This paper describes the application of the current version of InDEP to formulate assumptions about the characteristics of exposure at the study facility that were used in a detailed retrospective intake and organ dose assessment of the cohort. PMID:22683620
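Although InDEP fits intakes with full regression or Bayesian machinery, the core least-squares idea can be shown in a few lines. The sketch below fits a single acute intake from urine measurements, assuming each measurement m_j is approximately I times e(t_j) for a given excretion function; the two-exponential excretion curve is a made-up stand-in, not an ICRP biokinetic model.

```python
import numpy as np

def fit_intake(times_d, measured, excretion_fn):
    """Least-squares fit of a single acute intake I from bioassay data,
    assuming measurement m_j ~ I * e(t_j); e(t) is the fractional daily
    urinary excretion predicted by a biokinetic model (assumed given)."""
    e = np.array([excretion_fn(t) for t in times_d])
    m = np.asarray(measured, dtype=float)
    intake = (m @ e) / (e @ e)                 # closed-form one-parameter LSQ
    resid = m - intake * e
    sigma = np.sqrt(resid @ resid / max(len(m) - 1, 1)) / np.sqrt(e @ e)
    return intake, sigma

# hypothetical two-exponential excretion curve (illustrative only)
toy_e = lambda t: 0.1 * np.exp(-0.8 * t) + 0.01 * np.exp(-0.02 * t)
print(fit_intake([1, 5, 30], [0.08, 0.01, 0.004], toy_e))
```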
NASA Astrophysics Data System (ADS)
Sharqawy, Mostafa H.
2016-12-01
Pore network models (PNMs) of Berea and Fontainebleau sandstones were constructed using nonlinear programming (NLP) and optimization methods. The constructed PNMs are digital representations of the rock samples, built by matching the macroscopic properties of the porous media, and were used to conduct fluid transport simulations including single- and two-phase flow. The PNMs consisted of cubic networks of randomly distributed pore and throat sizes with various connectivity levels. The networks were optimized such that the upper and lower bounds of the pore sizes are determined using the capillary tube bundle model and the Nelder-Mead method instead of being guessed, which reduces the optimization time significantly. An open-source PNM framework was employed to conduct transport and percolation simulations such as invasion percolation and Darcian flow. The PNM was subsequently used to compute the macroscopic properties: porosity, absolute permeability, specific surface area, breakthrough capillary pressure, and the primary drainage curve. The pore networks were optimized so that the simulated macroscopic properties are in excellent agreement with the experimental measurements. This study demonstrates that nonlinear programming and optimization methods provide a promising route to pore network modeling when computed tomography imaging is not readily available.
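A minimal sketch of the optimization loop, assuming scipy's Nelder-Mead implementation: the network_properties function below is a fabricated response surface standing in for the actual pore-network simulation, and the objective matches porosity and permeability targets in a relative least-squares sense.

```python
import numpy as np
from scipy.optimize import minimize

TARGET = {"porosity": 0.20, "permeability_mD": 290.0}   # Berea-like targets

def network_properties(params):
    """Stand-in for the PNM simulation: macroscopic properties for given
    (mean pore radius, mean throat radius) in micrometres; hypothetical."""
    r_pore, r_throat = params
    porosity = 1.2e-3 * r_pore ** 1.5
    permeability = 0.9 * r_throat ** 2.2
    return porosity, permeability

def objective(params):
    phi, k = network_properties(params)
    return ((phi - TARGET["porosity"]) / TARGET["porosity"]) ** 2 + \
           ((k - TARGET["permeability_mD"]) / TARGET["permeability_mD"]) ** 2

res = minimize(objective, x0=[20.0, 10.0], method="Nelder-Mead")
print("fitted radii (um):", res.x.round(2), "mismatch:", round(res.fun, 6))
```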
THERMINATOR: THERMal heavy-IoN generATOR
NASA Astrophysics Data System (ADS)
Kisiel, Adam; Tałuć, Tomasz; Broniowski, Wojciech; Florkowski, Wojciech
2006-04-01
THERMINATOR is a Monte Carlo event generator designed for studying particle production in relativistic heavy-ion collisions performed at such experimental facilities as the SPS, RHIC, or LHC. The program implements thermal models of particle production with single freeze-out. It performs the following tasks: (1) generation of stable particles and unstable resonances at the chosen freeze-out hypersurface with the local phase-space density of particles given by the statistical distribution factors, (2) subsequent space-time evolution and decays of hadronic resonances in cascades, (3) calculation of the transverse-momentum spectra and numerous other observables related to the space-time evolution. The geometry of the freeze-out hypersurface and the collective velocity of expansion may be chosen from two successful models: the Cracow single-freeze-out model and the Blast-Wave model. All particles from the Particle Data Tables are used. The code is written in the object-oriented C++ language and complies with the standards of the ROOT environment. Program summary Program title: THERMINATOR Catalogue identifier: ADXL_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXL_v1_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland RAM required to execute with typical data: 50 Mbytes Number of processors used: 1 Computer(s) for which the program has been designed: PC, Pentium III, IV, or Athlon, 512 MB RAM; not hardware dependent (any computer with a C++ compiler and the ROOT environment [R. Brun, F. Rademakers, Nucl. Instrum. Methods A 389 (1997) 81, http://root.cern.ch]) Operating system(s) for which the program has been designed: Linux (Mandrake 9.0, Debian 3.0, SuSE 9.0, Red Hat FEDORA 3, etc.), Windows XP with Cygwin ver. 1.5.13-1 and gcc ver. 3.3.3 (cygwin special); not system dependent External routines/libraries used: ROOT ver. 4.02.00 Programming language: C++ Size of the package: 324 KB directory, 40 KB compressed distribution archive, without the ROOT libraries (see http://root.cern.ch for details on the ROOT requirements). The output files created by the code need 1.1 GB for each 500 events. Distribution format: tar gzip file Number of lines in distributed program, including test data, etc.: 6534 Number of bytes in distributed program, including test data, etc.: 41 828 Nature of the physical problem: Statistical models have proved to be very useful in the description of soft physics in relativistic heavy-ion collisions [P. Braun-Munzinger, K. Redlich, J. Stachel, 2003, nucl-th/0304013].
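The statistical-factor sampling in step (1) can be illustrated with a toy rejection sampler. The Python sketch below draws momentum magnitudes from a classical Boltzmann weight p² exp(−E/T); it ignores quantum statistics, resonance widths, and the hypersurface geometry that THERMINATOR actually handles, so treat it purely as a sketch of the Monte Carlo idea.

```python
import numpy as np

def sample_momenta(mass_gev, temp_gev, n, rng=np.random.default_rng(2)):
    """Rejection-sample |p| from the weight p^2 exp(-E/T), E = sqrt(p^2+m^2):
    a toy stand-in for the statistical distribution factors at freeze-out."""
    weight = lambda p: p**2 * np.exp(-np.sqrt(p**2 + mass_gev**2) / temp_gev)
    p_max = 10 * temp_gev + mass_gev          # envelope support (assumed ample)
    w_max = weight(np.linspace(1e-6, p_max, 2000)).max()
    out = []
    while len(out) < n:
        p = rng.uniform(0.0, p_max)
        if rng.uniform(0.0, w_max) < weight(p):
            out.append(p)
    return np.array(out)

pions = sample_momenta(0.1396, 0.165, 5000)   # pion mass (GeV), T ~ 165 MeV
print("mean |p| (GeV):", pions.mean().round(3))
```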
NASA Technical Reports Server (NTRS)
Cockrell, Charles E., Jr.
2003-01-01
The Next Generation Launch Technology (NGLT) program, Vehicle Systems Research and Technology (VSR&T) project is pursuing technology advancements in aerothermodynamics, aeropropulsion and flight mechanics to enable development of future reusable launch vehicle (RLV) systems. The current design trade space includes rocket-propelled, hypersonic airbreathing and hybrid systems in two-stage and single-stage configurations. Aerothermodynamics technologies include experimental and computational databases to evaluate stage separation of two-stage vehicles as well as computational and trajectory simulation tools for this problem. Additionally, advancements in high-fidelity computational tools and measurement techniques are being pursued along with the study of flow physics phenomena, such as boundary-layer transition. Aero-propulsion technology development includes scramjet flowpath development and integration, with a current emphasis on hypervelocity (Mach 10 and above) operation, as well as the study of aero-propulsive interactions and the impact on overall vehicle performance. Flight mechanics technology development is focused on advanced guidance, navigation and control (GN&C) algorithms and adaptive flight control systems for both rocket-propelled and airbreathing vehicles.
Hydrologic data-verification management program plan
Alexander, C.W.
1982-01-01
Data verification refers to the performance of quality control on hydrologic data that have been retrieved from the field and are being prepared for dissemination to water-data users. Water-data users now have access to computerized data files containing unpublished, unverified hydrologic data. Therefore, it is necessary to develop techniques and systems whereby the computer can perform some data-verification functions before the data are stored in user-accessible files. Computerized data-verification routines can be developed for this purpose. A single, unified concept, describing a master data-verification program that uses multiple special-purpose subroutines and a screen file containing verification criteria, can probably be adapted to any type and size of computer-processing system. Some traditional manual-verification procedures can be adapted for computerized verification, but new procedures can also be developed that would take advantage of the powerful statistical tools and data-handling procedures available to the computer. Prototype data-verification systems should be developed for all three data-processing environments as soon as possible. The WATSTORE system probably affords the greatest opportunity for long-range research and testing of new verification subroutines. (USGS)
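A minimal sketch of such a master verification routine, with the screen file reduced to a dictionary of criteria: the range and rate-of-change checks below are generic examples of the special-purpose subroutines the plan describes, not WATSTORE code.

```python
def verify_records(records, screen):
    """Flag records violating screen-file criteria: one special-purpose
    check per criterion, driven by a single master routine."""
    flags = []
    for i, rec in enumerate(records):
        if not (screen["min"] <= rec["value"] <= screen["max"]):
            flags.append((i, "range", rec["value"]))
        if i > 0 and abs(rec["value"] - records[i - 1]["value"]) > screen["max_jump"]:
            flags.append((i, "rate-of-change", rec["value"]))
    return flags

# hypothetical daily streamflow series with a spike that should be flagged
data = [{"value": v} for v in (110, 112, 980, 115)]
print(verify_records(data, {"min": 0, "max": 500, "max_jump": 300}))
```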
Cytobank: providing an analytics platform for community cytometry data analysis and collaboration.
Chen, Tiffany J; Kotecha, Nikesh
2014-01-01
Cytometry is used extensively in clinical and laboratory settings to diagnose and track cell subsets in blood and tissue. High-throughput, single-cell approaches leveraging cytometry are developed and applied in the computational and systems biology communities by researchers who seek to improve the diagnosis of human diseases, map the structures of cell signaling networks, and identify new cell types. Data analysis and management present a bottleneck in the flow of knowledge from bench to clinic. Multi-parameter flow and mass cytometry enable identification of signaling profiles of patient cell samples. Currently, this process is manual, requiring hours of work to summarize multi-dimensional data and translate these data for input into other analysis programs. In addition, the increase in the number and size of collaborative cytometry studies, as well as the computational complexity of analytical tools, requires the ability to assemble sufficient and appropriately configured computing capacity on demand. There is a critical need for platforms that can be used by both clinical and basic researchers who routinely rely on cytometry. Recent advances provide a unique opportunity to facilitate collaboration and the analysis and management of cytometry data. Specifically, advances in cloud computing and virtualization are enabling efficient use of large computing resources for analysis and backup. An example is Cytobank, a platform that allows researchers to annotate, analyze, and share results along with the underlying single-cell data.
A Novel Implementation of Efficient Algorithms for Quantum Circuit Synthesis
NASA Astrophysics Data System (ADS)
Zeller, Luke
In this project, we design and develop a computer program to effectively approximate arbitrary quantum gates using the discrete set of Clifford gates together with the T gate (π/8 gate). Employing recent results from Mosca et al. and from Giles and Selinger, we implement a decomposition scheme that outputs a sequence of Clifford, T, and T† gates approximating the input to within a specified error range ɛ. Specifically, the given gate is first rounded to an element of Z[1/√2, i] with a precision determined by ɛ, and exact synthesis is then employed to produce the resulting gate sequence. This procedure is known to be optimal for approximating an arbitrary single-qubit gate. Our program, written in Matlab and Python, can perform both approximate and exact synthesis of single-qubit gates. It can be used to assist in the experimental implementation of an arbitrary fault-tolerant single-qubit gate, for which direct implementation isn't feasible.
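The error target ɛ can be made concrete with a few lines of numpy: the sketch below measures the operator-norm distance, minimized over global phase, between a target rotation and the product of a Clifford+T gate word. The example words are arbitrary illustrations; an actual synthesizer searches for words meeting ɛ.

```python
import numpy as np

# Standard single-qubit building blocks (global phase ignored throughout)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
Tdg = T.conj().T

def approx_error(target, word):
    """Operator-norm distance, minimized over global phase, between a target
    gate and the product of a gate word: the epsilon a synthesis must beat."""
    U = np.eye(2, dtype=complex)
    for g in word:
        U = g @ U                      # apply gates left-to-right
    tr = np.trace(U.conj().T @ target)
    phase = np.exp(1j * np.angle(tr))  # best global-phase alignment
    return np.linalg.norm(target - phase * U, ord=2)

Rz = lambda th: np.diag([np.exp(-1j * th / 2), np.exp(1j * th / 2)])
print(approx_error(Rz(np.pi / 4), [T]))        # ~0: T equals Rz(pi/4) up to phase
print(approx_error(Rz(0.3), [H, T, H, Tdg]))   # an arbitrary, poor gate word
```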
Analysis and testing of high entrainment single nozzle jet pumps with variable mixing tubes
NASA Technical Reports Server (NTRS)
Hickman, K. E.; Hill, P. G.; Gilbert, G. B.
1972-01-01
An analytical model was developed to predict the performance characteristics of axisymmetric single-nozzle jet pumps with variable area mixing tubes. The primary flow may be subsonic or supersonic. The computer program uses integral techniques to calculate the velocity profiles and the wall static pressures that result from the mixing of the supersonic primary jet and the subsonic secondary flow. An experimental program was conducted to measure mixing tube wall static pressure variations, velocity profiles, and temperature profiles in a variable area mixing tube with a supersonic primary jet. Static pressure variations were measured at four different secondary flow rates. These test results were used to evaluate the analytical model. The analytical results compared well to the experimental data. Therefore, the analysis is believed to be ready for use to relate jet pump performance characteristics to mixing tube design.
Computer modeling of batteries from nonlinear circuit elements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waaben, S.; Dyer, C.K.; Federico, J.
1985-06-01
Circuit analogs for a single battery cell have previously been composed of resistors, capacitors, and inductors. This work introduces a nonlinear circuit model for cell behavior. The circuit is configured around the PIN junction diode, whose charge-storage behavior has features similar to those of electrochemical cells. A user-friendly integrated-circuit simulation computer program has reproduced a variety of complex cell responses, including electrical isolation effects causing capacity loss, as well as potentiodynamic peaks and discharge phenomena hitherto thought to be thermodynamic in origin. In this work, however, they are shown to be simply due to the spatial distribution of stored charge within a practical electrode.
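A toy lumped-element version of the idea, not the paper's PIN-diode circuit: stored charge plays the role of the diode's junction charge, the open-circuit voltage rises logarithmically with it, and a series resistance adds an ohmic drop. All constants are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

V0, Q0, RS = 0.6, 3600.0, 0.05         # voltage scale (V), ref charge (C), ohms

def terminal_voltage(q, i_load):
    # logarithmic charge-voltage relation plus ohmic series drop
    return V0 * np.log1p(q / Q0) - RS * i_load

def discharge(t, q, i_load=1.0):
    return [-i_load]                   # constant-current discharge of charge q

sol = solve_ivp(discharge, (0.0, 3000.0), [5 * Q0], max_step=10.0)
v = terminal_voltage(sol.y[0], 1.0)
print("terminal voltage start/end (V):", round(float(v[0]), 3), round(float(v[-1]), 3))
```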
Parallelized Seeded Region Growing Using CUDA
Park, Seongjin; Lee, Hyunna; Seo, Jinwook; Lee, Kyoung Ho; Shin, Yeong-Gil; Kim, Bohyoung
2014-01-01
This paper presents a novel method for parallelizing the seeded region growing (SRG) algorithm using Compute Unified Device Architecture (CUDA) technology, with the intention of overcoming a theoretical weakness of the SRG algorithm: its computation time is directly proportional to the size of the segmented region. The segmentation performance of the proposed CUDA-based SRG is compared with SRG implementations on single-core CPUs, quad-core CPUs, and shader-language programming, using synthetic datasets and 20 body CT scans. Based on the experimental results, the CUDA-based SRG outperforms the other three implementations, suggesting that it can substantially assist segmentation during massive CT screening tests. PMID:25309619
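For reference, a serial SRG sketch in Python makes the weakness explicit: each iteration absorbs the boundary pixel closest to the running region mean, so the work grows with the region size. This is a generic SRG, not the paper's CUDA kernel.

```python
import numpy as np
from heapq import heappush, heappop

def srg(image, seed, tol):
    """Serial seeded region growing on a 2D image: absorb boundary pixels
    whose intensity stays within tol of the evolving region mean."""
    grown = np.zeros(image.shape, dtype=bool)
    grown[seed] = True
    mean, count = float(image[seed]), 1
    frontier = []
    def push_neighbors(y, x):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < image.shape[0] and 0 <= nx < image.shape[1] and not grown[ny, nx]:
                heappush(frontier, (abs(float(image[ny, nx]) - mean), ny, nx))
    push_neighbors(*seed)
    while frontier:
        _, y, x = heappop(frontier)
        if grown[y, x] or abs(float(image[y, x]) - mean) > tol:
            continue                    # re-check against the current mean
        grown[y, x] = True
        mean = (mean * count + float(image[y, x])) / (count + 1); count += 1
        push_neighbors(y, x)
    return grown

img = np.array([[10, 10, 10, 80], [10, 12, 11, 80], [80, 80, 80, 80]], dtype=float)
print(srg(img, (0, 0), tol=5).astype(int))
```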
A computer-aided approach to nonlinear control synthesis
NASA Technical Reports Server (NTRS)
Wie, Bong; Anthony, Tobin
1988-01-01
The major objective of this project is to develop a computer-aided approach to nonlinear stability analysis and nonlinear control system design. This goal is to be attained by refining the describing function method as a synthesis tool for nonlinear control design. The interim report outlines the approach taken by this study to meet these goals, including an introduction to the INteractive Controls Analysis (INCA) program, which was instrumental in meeting the study objectives. A single-input describing function (SIDF) design methodology was developed in this study; coupled with the software constructed in this study, the results of this project provide a comprehensive tool for the design and integration of nonlinear control systems.
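As an example of the SIDF idea, the sketch below evaluates the classical describing function of a unity-slope saturation with limit ±δ, the textbook formula N(A) = (2/π)[arcsin(δ/A) + (δ/A)√(1 − (δ/A)²)] for A > δ; this is standard describing-function material, not code from INCA.

```python
import numpy as np

def sidf_saturation(A, delta=1.0):
    """Single-input describing function of a unity-slope saturation with
    limit +/-delta: the equivalent gain seen by a sinusoid of amplitude A."""
    A = np.asarray(A, dtype=float)
    r = np.clip(delta / A, 0.0, 1.0)
    n = (2.0 / np.pi) * (np.arcsin(r) + r * np.sqrt(1.0 - r**2))
    return np.where(A <= delta, 1.0, n)

amps = np.array([0.5, 1.0, 2.0, 5.0])
print(sidf_saturation(amps))   # gain falls toward 0 as amplitude grows
# A limit cycle is predicted where G(jw) intersects -1/N(A) in the Nyquist plane.
```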
NASA Astrophysics Data System (ADS)
Francisco, E.; Pendás, A. Martín; Blanco, M. A.
2008-04-01
Given an N-electron molecule and an exhaustive partition of real space (R³) into m arbitrary regions Ω_1, Ω_2, …, Ω_m (⋃_{i=1..m} Ω_i = R³), the edf program computes all the probabilities P(n_1, n_2, …, n_m) of having exactly n_1 electrons in Ω_1, n_2 electrons in Ω_2, …, and n_m electrons (n_1 + n_2 + ⋯ + n_m = N) in Ω_m. Each Ω_i may correspond to a single basin (atomic domain) or to several such basins (a functional group). In the latter case, each atomic domain must belong to a single Ω_i. The program can manage both single- and multi-determinant wave functions, which are read in from an aimpac-like wave function description (.wfn) file (T.A. Keith et al., The AIMPAC95 programs, http://www.chemistry.mcmaster.ca/aimpac, 1995). For multi-determinant wave functions a generalization of the original .wfn file has been introduced. The new format is completely backwards compatible, adding to the previous structure a description of the configuration interaction (CI) coefficients and the determinants of correlated wave functions. Besides the .wfn file, edf only needs the overlap integrals over all the atomic domains between the molecular orbitals (MOs). After the P(n_1, n_2, …, n_m) probabilities are computed, edf obtains from them several magnitudes relevant to chemical bonding theory, such as average electronic populations and localization/delocalization indices. Regarding spin, edf may be used in two ways: with or without a splitting of the P(n_1, n_2, …, n_m) probabilities into α and β spin components. Program summary Program title: edf Catalogue identifier: AEAJ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5387 No. of bytes in distributed program, including test data, etc.: 52 381 Distribution format: tar.gz Programming language: Fortran 77 Computer: 2.80 GHz Intel Pentium IV CPU Operating system: GNU/Linux RAM: 55 992 KB Word size: 32 bits Classification: 2.7 External routines: Netlib Nature of problem: Let us have an N-electron molecule and define an exhaustive partition of physical space into m three-dimensional regions. The edf program computes the probabilities P(n_1, n_2, …, n_m) ≡ P({n}) of all possible allocations of n_1 electrons to Ω_1, n_2 electrons to Ω_2, …, and n_m electrons to Ω_m, the n_i being integers. Solution method: Let us assume that the N-electron molecular wave function, Ψ(1, …, N), is a linear combination of M Slater determinants, Ψ(1, …, N) = Σ_{r=1..M} C_r ψ_r(1, …, N). Calling S_{Ω_k}^{rs} the overlap matrix over the 3D region Ω_k between the (real) molecular spin-orbitals (MSOs) in ψ_r(χ_1^r, …, χ_N^r) and the MSOs in ψ_s(χ_1^s, …, χ_N^s), edf finds all the P({n})'s by solving the linear system Σ_{{n}} (Π_{k=1..m} t_k^{n_k}) P({n}) = Σ_{r,s=1..M} C_r C_s det[Σ_{k=1..m} t_k S_{Ω_k}^{rs}], (1) where t_1 = 1 and t_2, …, t_m are arbitrary real numbers. Restrictions: The number of {n} sets grows very fast with m and N, so that the dimension of the linear system (1) soon becomes very large. Moreover, the computer time required to obtain the determinants on the right-hand side of Eq. (1) scales quadratically with M. These two facts limit the applicability of the method to relatively small molecules. Unusual features: Most of the real variables are of precision real*16. Running time: 0.030, 2.010, and 0.620 seconds for Test examples 1, 2, and 3, respectively. References: [1] A. Martín Pendás, E. Francisco, M.A. Blanco, Faraday Discuss. 135 (2007) 423-438. [2] A. Martín Pendás, E. Francisco, M.A. Blanco, J. Phys. Chem.
A 111 (2007) 1084-1090. [3] A. Martín Pendás, E. Francisco, M.A. Blanco, Phys. Chem. Chem. Phys. 9 (2007) 1087-1092. [4] E. Francisco, A. Martín Pendás, M.A. Blanco, J. Chem. Phys. 126 (2007) 094102. [5] A. Martín Pendás, E. Francisco, M.A. Blanco, C. Gatti, Chemistry: A European Journal 13 (2007) 9362-9371.
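For the special case of a single-determinant wave function (M = 1), the probabilities for one region collapse to a Poisson-binomial over the eigenvalues of the domain overlap matrix, which makes a compact illustration; the overlap matrix below is hypothetical, and the general multi-determinant linear system quoted in the summary reduces to this form.

```python
import numpy as np

def electron_count_probabilities(S_domain):
    """P(n electrons in a domain) for a single-determinant wave function:
    with eigenvalues l_i of the domain overlap matrix between occupied
    spin-orbitals, P(n) is the coefficient of t^n in prod_i ((1-l_i) + l_i t)."""
    lam = np.linalg.eigvalsh(S_domain)
    poly = np.array([1.0])
    for l in lam:
        poly = np.convolve(poly, [1.0 - l, l])   # multiply out the binomials
    return poly                                   # index n holds P(n)

# hypothetical 2-spin-orbital overlap matrix over one region
S = np.array([[0.7, 0.1], [0.1, 0.4]])
P = electron_count_probabilities(S)
print(P.round(3), "sum:", P.sum().round(6),
      "mean population:", (np.arange(len(P)) * P).sum().round(3))
```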
MOLAR: Modular Linux and Adaptive Runtime Support for HEC OS/R Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frank Mueller
2009-02-05
MOLAR is a multi-institution research effort that concentrates on adaptive, reliable, and efficient operating and runtime system solutions for ultra-scale high-end scientific computing on the next generation of supercomputers. This research addresses the challenges outlined by the FAST-OS (forum to address scalable technology for runtime and operating systems) and HECRTF (high-end computing revitalization task force) activities by providing a modular Linux and adaptable runtime support for high-end computing operating and runtime systems. The MOLAR research has the following goals to address these issues. (1) Create a modular and configurable Linux system that allows customized changes based on the requirements of the applications, runtime systems, and cluster management software. (2) Build runtime systems that leverage the OS modularity and configurability to improve efficiency, reliability, scalability, ease-of-use, and provide support to legacy and promising programming models. (3) Advance computer reliability, availability and serviceability (RAS) management systems to work cooperatively with the OS/R to identify and preemptively resolve system issues. (4) Explore the use of advanced monitoring and adaptation to improve application performance and predictability of system interruptions. The overall goal of the research conducted at NCSU is to develop scalable algorithms for high-availability without single points of failure and without single points of control.
High Strain Rate Material Behavior
1985-12-01
data. Mr. Dennis Paisely conducted the single plate impact test. Mr. Danny Yaziv is responsible for developing the double flyer plate technique and ... neck developed. The sharp rise in the flow stress is due to the increased strain-rates during necking. The maximum observed value of effective stress ... for the material modeling. Computer programs and special purpose subroutines were developed to use the Bodner-Partom model in the STEALTH finite
Communication-Gateway Software For NETEX, DECnet, And TCP/IP
NASA Technical Reports Server (NTRS)
Keith, B.; Ferry, D.; Fendler, E.
1990-01-01
Communications gateway software, GATEWAY, provides process-to-process communication between remote application programs in different protocol domains. Communicating peer processes may be resident on any paired combination of NETEX, DECnet, or TCP/IP hosts. GATEWAY provides the necessary mapping from one protocol to another and facilitates practical intermachine communications in a cost-effective manner by eliminating the need to standardize on a single protocol or to implement multiple protocols in host computers. Written in Ada.
A Data-Based Financial Management Information System (FMIS) for Administrative Sciences Department
1990-12-01
Financial Management Information System that would result in improved management of financial assets, better use of clerical skills, and more detailed ... develops and implements a personal computer-based Management Information System for the management of the many funding accounts controlled by the ... different software programs, into a single all-encompassing Management Information System. The system was written using dBASE IV and is currently operational.
Microprocessor-based cardiotachometer
NASA Technical Reports Server (NTRS)
Crosier, W. G.; Donaldson, J. A.
1981-01-01
The instrument operates reliably even with stress-test electrocardiogram (ECG) signals subject to noise, baseline wander, and amplitude change. It records heart rate from a preamplified, single-lead ECG input signal and produces digital and analog heart-rate outputs for use by other equipment. Analog hardware processes the ECG input signal, producing a 10-ms pulse for each heartbeat. A microprocessor analyzes the resulting pulse train, identifying irregular heartbeats and maintaining stable output during lead switching. An easily modified computer program provides the analysis.
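The pulse-train analysis can be sketched in a few lines: estimate the median inter-beat interval and reject intervals that deviate from it, which is one simple way to discard irregular beats and lead-switch glitches. This is an illustrative filter, not the instrument's actual firmware logic.

```python
import numpy as np

def heart_rate(beat_times_s, rel_tol=0.25):
    """Heart rate from beat timestamps, rejecting inter-beat intervals that
    deviate from the median by more than rel_tol (irregular beats, glitches)."""
    ibi = np.diff(np.asarray(beat_times_s, dtype=float))
    med = np.median(ibi)
    kept = ibi[np.abs(ibi - med) <= rel_tol * med]
    return 60.0 / kept.mean()

beats = [0.0, 0.8, 1.6, 2.4, 2.7, 3.5, 4.3]   # one premature beat at 2.7 s
print(round(heart_rate(beats), 1), "bpm")      # the 0.3 s interval is rejected
```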
A Multiple Period Problem in Distributed Energy Management Systems Considering CO2 Emissions
NASA Astrophysics Data System (ADS)
Muroda, Yuki; Miyamoto, Toshiyuki; Mori, Kazuyuki; Kitamura, Shoichi; Yamamoto, Takaya
Consider a special district (group) composed of multiple companies (agents), where each agent must meet an energy demand and has a CO2 emission allowance imposed on it. A distributed energy management system (DEMS) optimizes the energy consumption of the group through energy trading within the group. In this paper, we extend the energy distribution decision and optimal planning problem in DEMSs from a single-period problem to a multiple-period one. The extension enables us to consider more realistic constraints such as demand patterns, start-up costs, and minimum running/outage times of equipment. First, we extend the market-oriented programming (MOP) method for deciding energy distribution to the multiple-period problem. The bidding strategy of each agent is formulated as a 0-1 mixed non-linear programming problem. Second, we propose decomposing the problem into a set of single-period problems in order to solve it faster. To decompose the problem, we propose a CO2 emission allowance distribution method, called the EP method. Computational experiments confirmed that the proposed method produces solutions whose group costs are close to lower-bound group costs. In addition, we verified that the EP method reduces computational time without losing solution quality.
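To give the decomposition a concrete shape, the sketch below solves one hypothetical single-period dispatch as a plain LP (cost minimization under a demand balance and a group CO2 cap). The real bidding problem is a 0-1 mixed non-linear program because of start-up costs and minimum running/outage times, which this sketch deliberately omits; all numbers are invented.

```python
from scipy.optimize import linprog

cost = [3.0, 5.0, 4.0]        # $/kWh per agent's generator
cap = [40.0, 60.0, 50.0]      # kWh capacity per agent
co2 = [0.9, 0.2, 0.5]         # kg CO2 per kWh per agent
demand, cap_co2 = 100.0, 55.0 # group demand and group emission allowance

res = linprog(c=cost,
              A_ub=[co2], b_ub=[cap_co2],        # group CO2 allowance
              A_eq=[[1, 1, 1]], b_eq=[demand],   # demand balance
              bounds=[(0, u) for u in cap])
print("dispatch (kWh):", res.x.round(1), "group cost ($):", round(res.fun, 1))
```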
Lancioni, Giulio E.; Bosco, Andrea; Olivetti Belardinelli, Marta; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Oliva, Doretta
2013-01-01
Post-coma persons in a minimally conscious state and with extensive motor impairment or emerging/emerged from such a state, but affected by lack of speech and motor impairment, tend to be passive and isolated. A way to help them develop functional responding to control environmental events and communication involves the use of intervention programs relying on assistive technology. This paper provides an overview of technology-based intervention programs for enabling the participants to (a) access brief periods of stimulation through one or two microswitches, (b) pursue stimulation and social contact through the combination of a microswitch and a sensor connected to a speech generating device (SGD) or through two SGD-related sensors, (c) control stimulation options through computer or radio systems and a microswitch, (d) communicate through modified messaging or telephone systems operated via microswitch, and (e) control combinations of leisure and communication options through computer systems operated via microswitch. Twenty-six studies, involving a total of 52 participants, were included in this paper. The intervention programs were carried out using single-subject methodology, and their outcomes were generally considered positive from the standpoint of the participants and their context. Practical implications of the programs are discussed. PMID:24574992
Giessner-Prettre, C; Ribas Prado, F; Pullman, B; Kan, L; Kast, J R; Ts'o, P O
1981-01-01
A FORTRAN computer program called SHIFTS is described. With SHIFTS, one can calculate the NMR chemical shifts of the proton resonances of single- and double-stranded nucleic acids of known sequence and predetermined conformation. The program can handle RNA and DNA for an arbitrary sequence drawn from a set of four of the six base types A, U, G, C, I, and T. Data files for the geometrical parameters are available for the A-, A'-, B-, D- and S-conformations. The positions of all the atoms are calculated using a modified version of the SEQ program [1]. Then, based on this defined geometry, three chemical-shift effects exerted by the atoms of the neighboring nucleotides on the protons of each monomeric unit are calculated separately: the ring-current shielding effect; the local atomic magnetic susceptibility effect (including both diamagnetic and paramagnetic terms); and the polarization, or electric field, effect. Results of the program are compared with experimental results for an (ApApGpCpUpU)₂ helical duplex and with calculated results on this same helix based on model building of the A'-form and B-form and on a graphical procedure for evaluating the ring-current effects.
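To indicate the flavor of such geometry-based shift terms, here is a crude point-dipole estimate of a single ring-current contribution, Δδ ∝ i(1 − 3cos²θ)/r³; SHIFTS itself evaluates tabulated ring-current, susceptibility, and electric-field terms for every neighboring base, so this sketch is illustrative only and all values are invented.

```python
import numpy as np

def ring_current_shift(proton_xyz, ring_center, ring_normal, intensity=1.0):
    """Point-dipole estimate of one ring-current shielding contribution:
    delta ~ i * (1 - 3 cos^2 theta) / r^3 (arbitrary units and sign)."""
    d = np.asarray(proton_xyz, dtype=float) - np.asarray(ring_center, dtype=float)
    r = np.linalg.norm(d)
    cos_t = d @ np.asarray(ring_normal) / (r * np.linalg.norm(ring_normal))
    return intensity * (1.0 - 3.0 * cos_t**2) / r**3

# a proton 3.4 A above a ring centre sits in the strongly shielded cone
print(round(ring_current_shift([0, 0, 3.4], [0, 0, 0], [0, 0, 1.0]), 4))
```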
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R
Methods, apparatuses, and computer program products for endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface (`PAMI`) of a parallel computer are provided. Embodiments include establishing by a parallel application a data communications geometry, the geometry specifying a set of endpoints that are used in collective operations of the PAMI, including associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry. Embodiments also include registering in each endpoint in the geometry a dispatch callback function for a collective operation and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.
Interactive design and analysis of future large spacecraft concepts
NASA Technical Reports Server (NTRS)
Garrett, L. B.
1981-01-01
An interactive computer-aided design program used to perform systems-level design and analysis of large spacecraft concepts is presented. Emphasis is on rapid design, analysis of integrated spacecraft, and automatic spacecraft modeling for lattice structures. Capabilities and performance of the multidiscipline application modules, the executive and data management software, and graphics display features are reviewed. A single user at an interactive terminal can create, design, analyze, and conduct parametric studies of Earth-orbiting spacecraft with relative ease. Data generated in the design, analysis, and performance evaluation of an Earth-orbiting large-diameter antenna satellite are used to illustrate current capabilities. Computer run-time statistics for the individual modules quantify the speed at which modeling, analysis, and design evaluation of integrated spacecraft concepts are accomplished in a user-interactive computing environment.
Human-Centered Design of Human-Computer-Human Dialogs in Aerospace Systems
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.
1998-01-01
A series of ongoing research programs at Georgia Tech established a need for a simulation support tool for aircraft computer-based aids. This led to the design and development of the Georgia Tech Electronic Flight Instrument Research Tool (GT-EFIRT). GT-EFIRT is a part-task flight simulator specifically designed to study aircraft display design and single-pilot interaction. The simulator, using commercially available graphics and Unix workstations, replicates to a high level of fidelity the Electronic Flight Instrument System (EFIS), Flight Management Computer (FMC) and Autoflight Director System (AFDS) of the Boeing 757/767 aircraft. The simulator can be configured to present information using conventional-looking B757/767 displays or next-generation Primary Flight Displays (PFD) such as found on the Beech Starship and MD-11.
Parallelization and checkpointing of GPU applications through program transformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solano-Quinde, Lizandro Damian
2012-01-01
GPUs have emerged as a powerful tool for accelerating general-purpose applications. The availability of programming languages that make writing general-purpose applications for GPUs tractable has consolidated GPUs as an alternative for accelerating general-purpose applications. Among the areas that have benefited from GPU acceleration are: signal and image processing, computational fluid dynamics, quantum chemistry, and, in general, the High Performance Computing (HPC) industry. In order to continue to exploit higher levels of parallelism with GPUs, multi-GPU systems are gaining popularity. In this context, single-GPU applications are parallelized for running on multi-GPU systems. Furthermore, multi-GPU systems help to overcome the GPU memory limitation for applications with a large memory footprint. Parallelizing single-GPU applications has been approached with libraries that distribute the workload at runtime; however, these impose execution overhead and are not portable. On traditional CPU systems, by contrast, parallelization has been approached through application transformation at pre-compile time, which enhances the application to distribute the workload at the application level and does not have the issues of library-based approaches. Hence, a parallelization scheme for GPU systems based on application transformation is needed. Like any computing engine of today, reliability is also a concern in GPUs. GPUs are vulnerable to transient and permanent failures. Current checkpoint/restart techniques are not suitable for systems with GPUs. Checkpointing for GPU systems presents new and interesting challenges, primarily due to the natural differences imposed by the hardware design, the memory subsystem architecture, the massive number of threads, and the limited amount of synchronization among threads. Therefore, a checkpoint/restart technique suitable for GPU systems is needed. The goal of this work is to exploit higher levels of parallelism and to develop support for application-level fault tolerance in applications using multiple GPUs. Our techniques reduce the burden of enhancing single-GPU applications to support these features. To achieve our goal, this work designs and implements a framework for enhancing a single-GPU OpenCL application through application transformation.
Scheduling double round-robin tournaments with divisional play using constraint programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey
We study a tournament format that extends a traditional double round-robin format with divisional single round-robin tournaments. Elitserien, the top Swedish handball league, uses such a format for its league schedule. We present a constraint programming model that characterizes the general double round-robin plus divisional single round-robin format. This integrated model allows scheduling to be performed in a single step, as opposed to common multistep approaches that decompose scheduling into smaller problems and possibly miss optimal solutions. In addition to general constraints, we introduce Elitserien-specific requirements for its tournament. These general and league-specific constraints allow us to identify implicit and symmetry-breaking properties that reduce the time to solution from hours to seconds. A scalability study of the number of teams shows that our approach is reasonably fast for even larger league sizes. The experimental evaluation shows that the integrated approach takes considerably less computational effort to schedule Elitserien than does the previous decomposed approach.
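For readers unfamiliar with the building block, the sketch below generates a single round-robin by the classic circle method; it illustrates the tournament structure only and is not the paper's constraint programming model, which additionally handles the divisional phase and league-specific constraints.

```python
def single_round_robin(teams):
    """Circle method: every team meets every other exactly once; the
    building block that divisional and double round-robin phases repeat."""
    t = list(teams) + ["bye"] * (len(teams) % 2)   # pad odd team counts
    n, rounds = len(t), []
    for _ in range(n - 1):
        rounds.append([(t[i], t[n - 1 - i]) for i in range(n // 2)
                       if "bye" not in (t[i], t[n - 1 - i])])
        t.insert(1, t.pop())                       # rotate all but the first
    return rounds

for r, games in enumerate(single_round_robin(["A", "B", "C", "D", "E", "F"]), 1):
    print(f"round {r}: {games}")
```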
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Srivastava, R.
1996-01-01
This guide describes the input data required to use the MSAP2D (Multi Stage Aeroelastic analysis Program - Two Dimensional) computer code. MSAP2D can be used for steady and unsteady aerodynamic and aeroelastic (flutter and forced response) analysis of bladed disks arranged in multiple blade rows, such as those found in compressors, turbines, counter-rotating propellers, or propfans. The code can also be run for a single blade row. MSAP2D is an extension of the original NPHASE code for multi-blade-row aerodynamic and aeroelastic analysis. Euler equations are used to obtain the aerodynamic forces. The structural dynamic equations are written for a rigid typical section undergoing pitching (torsion) and plunging (bending) motion. The aeroelastic equations are solved in the time domain. For single-blade-row analysis, frequency domain analysis is also provided to obtain the unsteady aerodynamic coefficients required in an eigenanalysis for flutter. In this manual, sample input and output are provided for a single-blade-row example and a two-blade-row example with equal and unequal numbers of blades in the blade rows.
Genetic aspect of Alzheimer disease: Results of complex segregation analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sadonvick, A.D.; Lee, I.M.L.; Bailey-Wilson, J.E.
1994-09-01
The study was designed to evaluate the possibility that a single major locus explains the segregation of Alzheimer disease (AD). The data were from the population-based AD Genetic Database and consisted of 402 consecutive, unrelated probands diagnosed with either 'probable' or 'autopsy-confirmed' AD, and their 2,245 first-degree relatives. In this analysis, a relative was considered affected with AD only when there were sufficient medical/autopsy data to support a diagnosis of AD being the most likely cause of the dementia. Transmission probability models allowing for a genotype-dependent, logistically distributed age of onset were used. The program REGTL in the S.A.G.E. computer program package was used for a complex segregation analysis. The models included correction for single ascertainment. Regressive familial effects were not estimated. The data were analyzed to test the single major locus (SML), random transmission, and no transmission (environmental) hypotheses. The results of the complex segregation analysis showed that (1) the SML model gave the best fit, and (2) the non-genetic models could be rejected.
A coded tracking telemetry system
Howey, P.W.; Seegar, W.S.; Fuller, M.R.; Titus, K.; Amlaner, Charles J.
1989-01-01
We describe the general characteristics of an automated radio telemetry system designed to operate for prolonged periods on a single frequency. Each transmitter sends a unique coded signal to a receiving system that decodes and records only the appropriate, pre-programmed codes. A record of the time of each reception is stored on diskettes in a microcomputer. This system enables continuous monitoring of infrequent signals (e.g., one per minute or one per hour), thus extending operational life or allowing size reduction of the transmitter compared to conventional wildlife telemetry. Furthermore, when using unique codes transmitted on a single frequency, biologists can monitor many individuals without exceeding the radio-frequency allocations for wildlife.
ORBIT: an integrated environment for user-customized bioinformatics tools.
Bellgard, M I; Hiew, H L; Hunter, A; Wiebrands, M
1999-10-01
There are a large number of computational programs freely available to bioinformaticians via a client/server, web-based environment. However, the client interface to these tools (typically an html form page) cannot be customized from the client side as it is created by the service provider. The form page is usually generic enough to cater for a wide range of users. However, this implies that a user cannot set advanced program parameters as defaults on the form or even customize the interface to his/her specific requirements or preferences. Currently, there is a lack of end-user interface environments that can be modified by the user when accessing computer programs available on a remote server running on an intranet or over the Internet. We have implemented a client/server system called ORBIT (Online Researcher's Bioinformatics Interface Tools) in which individual clients can have interfaces created and customized to command-line-driven, server-side programs. Thus, Internet-based interfaces can be tailored to a user's specific bioinformatic needs. As interfaces are created on the client machine independently of the server, there can be different interfaces to the same server-side program to cater for different parameter settings. The interface customization is relatively quick (between 10 and 60 min) and all client interfaces are integrated into a single modular environment which will run on any computer platform supporting Java. The system has been developed to allow for a number of future enhancements and features. ORBIT represents an important advance in the way researchers gain access to bioinformatics tools on the Internet.
Archer, Charles J.; Blocksome, Michael A.
2012-12-11
Methods, parallel computers, and computer program products are disclosed for remote direct memory access. Embodiments include transmitting, from an origin DMA engine on an origin compute node to a plurality of target DMA engines on target compute nodes, a request to send message, the request to send message specifying data to be transferred from the origin DMA engine to data storage on each target compute node; receiving, by each target DMA engine on each target compute node, the request to send message; preparing, by each target DMA engine, to store data according to the data storage reference and the data length, including assigning a base storage address for the data storage reference; sending, by one or more of the target DMA engines, an acknowledgment message acknowledging that all the target DMA engines are prepared to receive a data transmission from the origin DMA engine; receiving, by the origin DMA engine, the acknowledgement message from the one or more of the target DMA engines; and transferring, by the origin DMA engine, data to data storage on each of the target compute nodes according to the data storage reference using a single direct put operation.
Development of a New System for Transport Simulation and Analysis at General Atomics
NASA Astrophysics Data System (ADS)
St. John, H. E.; Peng, Q.; Freeman, J.; Crotinger, J.
1997-11-01
General Atomics has begun a long-term program to improve all aspects of experimental data analysis related to DIII-D. The objective is to make local and visiting physicists as productive as possible, with only a small investment in training, by developing intuitive, sophisticated interfaces to existing and newly created computer programs. Here we describe our initial work and the results of a pilot project in this program. The pilot project is a collaborative effort between LLNL and GA which will ultimately result in the merger of Corsica and ONETWO (and selected modules from other codes) into a new advanced transport code system. The initial goal is to produce a graphical user interface to the transport code ONETWO which will couple to a programmable (steerable) front end designed for the transport system. This will be an object-oriented scheme written primarily in Python. The programmable application will integrate existing C, C++, and Fortran methods in a single computational paradigm. Its most important feature is the use of plug-in physics modules which will allow a high degree of customization.
NASA Technical Reports Server (NTRS)
Siegel, P. H.; Kerr, A. R.
1979-01-01
A user-oriented computer program for analyzing microwave and millimeter-wave mixers with a single Schottky-barrier diode of known I-V and C-V characteristics is described. The program first performs a nonlinear analysis to determine the diode conductance and capacitance waveforms produced by the local oscillator. A small-signal linear analysis is then used to find the conversion loss, port impedances, and input noise temperature of the mixer. Thermal noise from the series resistance of the diode and shot noise from the periodically pumped current in the diode conductance are considered. The effects of the series inductance and diode capacitance on the performance of some simple mixer circuits using a conventional Schottky diode, a Schottky diode in which there is no capacitance variation, and a Mott diode are studied. It is shown that the parametric effects of the voltage-dependent capacitance of a conventional Schottky diode may be either detrimental or beneficial, depending on the diode and circuit parameters.
Overview of Aro Program on Network Science for Human Decision Making
NASA Astrophysics Data System (ADS)
West, Bruce J.
This program brings together researchers from disparate disciplines to work on a complex research problem that defies confinement within any single discipline. Consequently, not only are new and rewarding solutions sought and obtained for a problem of importance to society and the Army, that is, the human dimension of complex networks, but, in addition, collaborations are established that would not otherwise have formed given the traditional disciplinary compartmentalization of research. This program develops the basic research foundation of a science of networks supporting the linkage between the physical and human (cognitive and social) domains as they relate to human decision making. The strategy is to extend the recent methods of non-equilibrium statistical physics to non-stationary, renewal stochastic processes that appear to be characteristic of the interactions among nodes in complex networks. We also pursue understanding of the phenomenon of synchronization, whose mathematical formulation has recently provided insight into how complex networks reach accommodation and cooperation. The theoretical analyses of complex networks, although mathematically rigorous, often elude analytic solutions and require computer simulation and computation to analyze the underlying dynamic process.
System life and reliability modeling for helicopter transmissions
NASA Technical Reports Server (NTRS)
Savage, M.; Brikmanis, C. K.
1986-01-01
A computer program which simulates the life and reliability of helicopter transmissions is presented. The helicopter transmissions may be composed of spiral-bevel gear units and planetary gear units, alone, in series, or in parallel. The spiral-bevel gear units may have either single or dual input pinions, which are identical. The planetary gear units may be stepped or unstepped, and the number of planet gears carried by the planet arm may be varied. The reliability analysis used in the program is based on the Weibull-distributed lives of the transmission components. The computer calculates the system lives and dynamic capacities of the transmission components and of the transmission. The system life is defined as the life of the component or transmission at an output torque at which the probability of survival is 90 percent. The dynamic capacity of a component or transmission is defined as the output torque which can be applied for one million output-shaft cycles with a probability of survival of 90 percent. A complete summary of the life and dynamic capacity results is produced by the program.
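The series-system reliability calculation can be sketched compactly: with two-parameter Weibull component lives, R_sys(t) = Π_i exp(−(t/η_i)^β_i), and the system life at 90 percent survival is the root of R_sys(t) = 0.9. The component parameters below are hypothetical; the program's actual analysis further ties component lives to the applied output torque.

```python
import numpy as np
from scipy.optimize import brentq

def system_life_L10(components):
    """Life at 90% survival for a series system of Weibull components,
    given (eta, beta) scale/shape pairs (assumed units: megacycles)."""
    def r_sys(t):
        return np.exp(-sum((t / eta) ** beta for eta, beta in components))
    return brentq(lambda t: r_sys(t) - 0.90, 1e-9, 1e6)

# hypothetical components: bevel gear set, planet bearings, planetary gear set
parts = [(90.0, 2.5), (120.0, 1.5), (200.0, 2.0)]
print("system life at 90% survival (megacycles):", round(system_life_L10(parts), 2))
```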