Sample records for independent computer programs

  1. DIALOG: An executive computer program for linking independent programs

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hague, D. S.; Watson, D. A.

    1973-01-01

    A very large scale computer programming procedure called the DIALOG executive system was developed for the CDC 6000 series computers. The executive computer program, DIALOG, controls the sequence of execution and the data management functions for a library of independent computer programs. Communication of common information is accomplished by DIALOG through a dynamically constructed and maintained data base. Each computer program maintains its individual identity and is unaware of its contribution to the large scale program. This feature makes any computer program a candidate for use with the DIALOG executive system. The installation and use of the DIALOG executive system are described.

  2. DIALOG: An executive computer program for linking independent programs

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hague, D. S.; Watson, D. A.

    1973-01-01

    A very large scale computer programming procedure called the DIALOG Executive System has been developed for the Univac 1100 series computers. The executive computer program, DIALOG, controls the sequence of execution and the data management functions for a library of independent computer programs. Communication of common information is accomplished by DIALOG through a dynamically constructed and maintained data base. The unique feature of the DIALOG Executive System is the manner in which computer programs are linked. Each program maintains its individual identity and as such is unaware of its contribution to the large scale program. This feature makes any computer program a candidate for use with the DIALOG Executive System. The installation and use of the DIALOG Executive System at the Johnson Space Center are described.

  3. FIT: Computer Program that Interactively Determines Polynomial Equations for Data which are a Function of Two Independent Variables

    NASA Technical Reports Server (NTRS)

    Arbuckle, P. D.; Sliwa, S. M.; Roy, M. L.; Tiffany, S. H.

    1985-01-01

    A computer program for interactively developing least-squares polynomial equations to fit user-supplied data is described. The program is characterized by the ability to compute the polynomial equations of a surface fit through data that are a function of two independent variables. The program utilizes the Langley Research Center graphics packages to display polynomial equation curves and data points, facilitating a qualitative evaluation of the effectiveness of the fit. An explanation of the fundamental principles and features of the program is included, along with sample input and corresponding output.
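
    As a hedged sketch of the kind of computation described above (illustrative only; FIT itself is interactive and uses the Langley graphics packages), a least-squares polynomial surface in two independent variables can be fitted with standard linear algebra. The bilinear model and data below are invented:

        import numpy as np

        # Invented sample data: z is a function of two independent variables x and y.
        rng = np.random.default_rng(0)
        x = rng.uniform(0.0, 1.0, 200)
        y = rng.uniform(0.0, 1.0, 200)
        z = 1.0 + 2.0 * x - 3.0 * y + 0.5 * x * y + rng.normal(0.0, 0.01, 200)

        # Design matrix for the surface z ~ c0 + c1*x + c2*y + c3*x*y.
        A = np.column_stack([np.ones_like(x), x, y, x * y])

        # Least-squares solution for the polynomial coefficients.
        coeffs = np.linalg.lstsq(A, z, rcond=None)[0]
        print("fitted coefficients:", coeffs)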

  4. NASA Ames potential flow analysis (POTFAN) geometry program (POTGEM), version 1

    NASA Technical Reports Server (NTRS)

    Medan, R. T.; Bullock, R. B.

    1976-01-01

    A computer program known as POTGEM, developed as an independent segment of a three-dimensional, linearized potential flow analysis system, is reported. The program generates a panel point description of arbitrary, three-dimensional bodies from convenient engineering descriptions consisting of equations and/or tables. Due to the independent, modular nature of the program, it may also be used to generate corner points for other computer programs.

  5. DoE Early Career Research Program: Final Report: Model-Independent Dark-Matter Searches at the ATLAS Experiment and Applications of Many-core Computing to High Energy Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farbin, Amir

    2015-07-15

    This is the final report for the DoE Early Career Research Program grant titled "Model-Independent Dark-Matter Searches at the ATLAS Experiment and Applications of Many-core Computing to High Energy Physics".

  6. The non-independence discussion about cycle structure in the computer language: the final simplification of computer language in the structural design

    NASA Astrophysics Data System (ADS)

    Yang, Peilu

    2013-03-01

    The article first discusses the theory, content, development, and open questions of structured programming design. Building on this foundation, it examines the claim that the three control structures of computer languages, the sequence structure, the branch structure, and the cycle structure, are mutually independent. Through further research, the author finds that the cycle structure is not independent and reaches a final simplification of computer language design, proposing the linear structure (I structure) and the curvilinear structure (Y structure). This gives computer languages high efficiency through simplification during program development. The research in this article is consistent with the dualistic structure widely used in the computer field and greatly promotes the evolution of computer languages.

  7. Computer program determines chemical equilibria in complex systems

    NASA Technical Reports Server (NTRS)

    Gordon, S.; Zeleznik, F. J.

    1966-01-01

    Computer program numerically solves nonlinear algebraic equations for chemical equilibrium based on iteration equations independent of choice of components. The program calculates theoretical performance for frozen and equilibrium composition during expansion and Chapman-Jouguet flame properties, and supports combustion studies and hardware design.

  8. Space Ultrareliable Modular Computer (SUMC) instruction simulator

    NASA Technical Reports Server (NTRS)

    Curran, R. T.

    1972-01-01

    The design principles, description, functional operation, and recommended expansion and enhancements are presented for the Space Ultrareliable Modular Computer interpretive simulator. Included as appendices are the user's manual, program module descriptions, target instruction descriptions, simulator source program listing, and a sample program printout. In discussing the design and operation of the simulator, the key problems involving host computer independence and target computer architectural scope are brought into focus.

  9. A computer graphics display and data compression technique

    NASA Technical Reports Server (NTRS)

    Teague, M. J.; Meyer, H. G.; Levenson, L. (Editor)

    1974-01-01

    The computer program discussed is intended for the graphical presentation of a general dependent variable X that is a function of two independent variables, U and V. The required input to the program is the variation of the dependent variable with one of the independent variables for various fixed values of the other. The computer program is named CRP, and the output is provided by the SD 4060 plotter. Program CRP is an extremely flexible program that offers the user a wide variety of options. The dependent variable may be presented in either a linear or a logarithmic manner. Automatic centering of the plot is provided in the ordinate direction, and the abscissa is scaled automatically for a logarithmic plot. A description of the carpet plot technique is given along with the coordinate system used in the program. Various aspects of the program logic are discussed, and detailed documentation of the data card format is presented.

  10. MIX: a computer program to evaluate interaction between chemicals

    Treesearch

    Jacqueline L. Robertson; Kimberly C. Smith

    1989-01-01

    A computer program, MIX, was designed to identify pairs of chemicals whose interaction results in a response that departs significantly from the model predicated on the assumption of independent, uncorrelated joint action. This report describes the MIX program, its statistical basis, and instructions for its use.
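
    The statistical basis of MIX is in the report itself; purely as a hedged sketch of the underlying idea, the response expected under independent, uncorrelated joint action of two chemicals follows from the single-chemical responses, and the observed mixture response can be compared against it (all numbers below are invented):

        # Expected response under independent joint action: P12 = P1 + P2 - P1*P2.
        def independent_joint_action(p1, p2):
            return p1 + p2 - p1 * p2

        p1, p2 = 0.30, 0.40      # invented single-chemical response proportions
        observed = 0.75          # invented observed response to the mixture
        expected = independent_joint_action(p1, p2)   # 0.58
        print(f"expected {expected:.2f}, observed {observed:.2f}, "
              f"departure {observed - expected:+.2f}")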

  11. Architecture independent environment for developing engineering software on MIMD computers

    NASA Technical Reports Server (NTRS)

    Valimohamed, Karim A.; Lopez, L. A.

    1990-01-01

    Engineers are constantly faced with solving problems of increasing complexity and detail. Multiple Instruction stream Multiple Data stream (MIMD) computers have been developed to overcome the performance limitations of serial computers. The hardware architectures of MIMD computers vary considerably and are much more sophisticated than serial computers. Developing large scale software for a variety of MIMD computers is difficult and expensive. There is a need to provide tools that facilitate programming these machines. First, the issues that must be considered to develop those tools are examined. The two main areas of concern were architecture independence and data management. Architecture independent software facilitates software portability and improves the longevity and utility of the software product. It provides some form of insurance for the investment of time and effort that goes into developing the software. The management of data is a crucial aspect of solving large engineering problems. It must be considered in light of the new hardware organizations that are available. Second, the functional design and implementation of a software environment that facilitates developing architecture independent software for large engineering applications are described. The topics of discussion include: a description of the model that supports the development of architecture independent software; identifying and exploiting concurrency within the application program; data coherence; engineering data base and memory management.

  12. The Use of Reverse Engineering to Analyse Student Computer Programs.

    ERIC Educational Resources Information Center

    Vanneste, Philip; And Others

    1996-01-01

    Discusses how the reverse engineering approach can generate feedback on computer programs without the user having any prior knowledge of what the program was designed to do. This approach uses the cognitive model of programming knowledge to interpret both context-independent and context-dependent errors in the same words and concepts as human programmers.…

  13. Independent Study in 1983. A Research Report of the NUCEA Independent Study Division. Final Report.

    ERIC Educational Resources Information Center

    Feasley, Charles E.

    Information on institutional programs offering independent study by correspondence was studied in 1983, with attention to enrollments, staff size, fees, services, the use of computer grading, and compensation paid to staff for grading and course development in college, high school, and noncredit programs. The survey population consisted of 73…

  14. Technique to eliminate computational instability in multibody simulations employing the Lagrange multiplier

    NASA Technical Reports Server (NTRS)

    Watts, G.

    1992-01-01

    A programming technique to eliminate computational instability in multibody simulations that use the Lagrange multiplier is presented. The computational instability occurs when the attached bodies drift apart and violate the constraints. The programming technique uses the constraint equation, instead of integration, to determine the coordinates that are not independent. Although the equations of motion are unchanged, a complete derivation of the incorporation of the Lagrange multiplier into the equation of motion for two bodies is presented. A listing of a digital computer program which uses the programming technique to eliminate computational instability is also presented. The computer program simulates a solid rocket booster and parachute connected by a frictionless swivel.
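
    A minimal sketch of the general idea, assuming a simple fixed-distance joint (the listed program is a FORTRAN booster/parachute simulation; nothing below is its code): the coordinates that are not independent are recomputed from the constraint equation after each integration step instead of being trusted from integration, so the bodies cannot drift apart.

        import numpy as np

        # Two bodies constrained to stay a fixed distance L apart (swivel analogue).
        L = 1.0

        def enforce_constraint(r1, r2):
            """Recompute r2 from the constraint |r2 - r1| = L instead of using
            the integrated value, whose drift causes the instability."""
            d = r2 - r1
            return r1 + d * (L / np.linalg.norm(d))

        # After an integration step the bodies have drifted slightly apart:
        r1 = np.array([0.0, 0.0])
        r2 = np.array([1.02, 0.01])          # violates the constraint
        r2 = enforce_constraint(r1, r2)
        print(np.linalg.norm(r2 - r1))       # back to 1.0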

  15. The 20 kW battery study program

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Six battery configurations were selected for detailed study and these are described. A computer program was modified for use in estimation of the weights, costs, and reliabilities of each of the configurations, as a function of several important independent variables, such as system voltage, battery voltage ratio (battery voltage/bus voltage), and the number of parallel units into which each of the components of the power subsystem was divided. The computer program was used to develop the relationship between the independent variables alone and in combination, and the dependent variables: weight, cost, and availability. Parametric data, including power loss curves, are given.

  16. Refinement Of Hexahedral Cells In Euler Flow Computations

    NASA Technical Reports Server (NTRS)

    Melton, John E.; Cappuccio, Gelsomina; Thomas, Scott D.

    1996-01-01

    Topologically Independent Grid, Euler Refinement (TIGER) computer program solves Euler equations of three-dimensional, unsteady flow of inviscid, compressible fluid by numerical integration on unstructured hexahedral coordinate grid refined where necessary to resolve shocks and other details. Hexahedral cells subdivided, each into eight smaller cells, as needed to refine computational grid in regions of high flow gradients. Grid Interactive Refinement and Flow-Field Examination (GIRAFFE) computer program written in conjunction with TIGER program to display computed flow-field data and to assist researcher in verifying specified boundary conditions and refining grid.

  17. A modular finite-element model (MODFE) for areal and axisymmetric ground-water-flow problems, Part 3: Design philosophy and programming details

    USGS Publications Warehouse

    Torak, L.J.

    1993-01-01

    A MODular Finite-Element, digital-computer program (MODFE) was developed to simulate steady- or unsteady-state, two-dimensional or axisymmetric ground-water flow. The modular structure of MODFE places the computationally independent tasks that are performed routinely by digital-computer programs simulating ground-water flow into separate subroutines, which are executed from the main program by control statements. Each subroutine consists of complete sets of computations, or modules, which are identified by comment statements, and can be modified by the user without affecting unrelated computations elsewhere in the program. Simulation capabilities can be added or modified by either adding or modifying subroutines that perform specific computational tasks, and the modular-program structure allows the user to create versions of MODFE that contain only the simulation capabilities that pertain to the ground-water problem of interest. MODFE is written in a Fortran programming language that makes it virtually device independent and compatible with desk-top personal computers and large mainframes. MODFE uses computer storage and execution time efficiently by taking advantage of symmetry and sparseness within the coefficient matrices of the finite-element equations. Parts of the matrix coefficients are computed and stored as single-subscripted variables, which are assembled into a complete coefficient just prior to solution. Computer storage is reused during simulation to decrease storage requirements. Descriptions of subroutines that execute the computational steps of the modular-program structure are given in tables that cross reference the subroutines with particular versions of MODFE. Programming details of linear and nonlinear hydrologic terms are provided. Structure diagrams for the main programs show the order in which subroutines are executed for each version and illustrate some of the linear and nonlinear versions of MODFE that are possible. Computational aspects of changing stresses and boundary conditions with time and of mass-balance and error terms are given for each hydrologic feature. Program variables are listed and defined according to their occurrence in the main programs and in subroutines. Listings of the main programs and subroutines are given.

  18. CONTOUR; a modification of G.I. Evenden's general purpose contouring program

    USGS Publications Warehouse

    Godson, R.H.; Webring, M.W.

    1982-01-01

    A contouring program written for the DEC-10 computer (Evenden, 1975) has been modified and enhanced to operate on a Honeywell Multics 68/80 computer. The program uses a device independent plotting system (Wahl, 1977) so that output can be directed to any of several plotting devices by simply specifying one input variable.

  19. Experience with an Independent Study Program in Pathophysiology for Doctor of Pharmacy Students.

    ERIC Educational Resources Information Center

    Nahata, Milap C.

    1986-01-01

    A pharmacy doctoral program's independent-study component in pathophysiology, supported by computer-assisted instruction and self-evaluation, has the advantages of self-pacing, reduced faculty time commitment, and increased ability to work effectively with physicians. Disadvantages include student feelings of isolation, imbalanced content, and…

  20. Representation-Independent Iteration of Sparse Data Arrays

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    An approach is defined for iterating over massively large arrays containing sparse data in a manner that is independent of how the contents of the sparse arrays are laid out in memory. What is unique and important here is the decoupling of the iteration over the sparse set of array elements from how they are internally represented in memory, which makes the approach backward compatible with existing schemes for representing sparse arrays as well as with new ones. A functional interface is defined for implementing sparse arrays in any modern programming language, with a particular focus on the Chapel programming language. Examples are provided that show the translation of a loop that computes a matrix-vector product into this representation for both the distributed and non-distributed cases. This work is directly applicable to NASA and its High Productivity Computing Systems (HPCS) program, in which JPL is engaged. The goal of that program is to create powerful, scalable, and economically viable high-performance computer systems suitable for use in national security and industry by 2010. This is important to NASA because of its computationally intensive requirements for analyzing and understanding the volumes of science data from returned missions.
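
    A hedged Python analogue of the decoupling described above (the paper targets Chapel; every name below is invented for illustration): the matrix-vector loop is written once against a small functional interface, so it runs unchanged over two different sparse layouts.

        # Representation-independent iteration: the loop sees only (i, j, value)
        # triples, never the storage scheme behind them.

        class DictSparse:
            """Sparse matrix stored as a {(i, j): value} dictionary."""
            def __init__(self, entries, shape):
                self.entries, self.shape = entries, shape
            def nonzeros(self):
                yield from ((i, j, v) for (i, j), v in self.entries.items())

        class CSRSparse:
            """Sparse matrix stored in compressed-sparse-row arrays."""
            def __init__(self, indptr, indices, data, shape):
                self.indptr, self.indices, self.data = indptr, indices, data
                self.shape = shape
            def nonzeros(self):
                for i in range(self.shape[0]):
                    for k in range(self.indptr[i], self.indptr[i + 1]):
                        yield i, self.indices[k], self.data[k]

        def matvec(a, x):
            """Matrix-vector product written once, independent of layout."""
            y = [0.0] * a.shape[0]
            for i, j, v in a.nonzeros():
                y[i] += v * x[j]
            return y

        a1 = DictSparse({(0, 0): 2.0, (1, 1): 3.0}, (2, 2))
        a2 = CSRSparse([0, 1, 2], [0, 1], [2.0, 3.0], (2, 2))
        print(matvec(a1, [1.0, 1.0]), matvec(a2, [1.0, 1.0]))  # same result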

  21. Ridge: a computer program for calculating ridge regression estimates

    Treesearch

    Donald E. Hilt; Donald W. Seegrist

    1977-01-01

    Least-squares coefficients for multiple-regression models may be unstable when the independent variables are highly correlated. Ridge regression is a biased estimation procedure that produces stable estimates of the coefficients. Ridge regression is discussed, and a computer program for calculating the ridge coefficients is presented.
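
    As a hedged sketch of the estimator the report describes (not the Ridge program itself, which is not reproduced here), the ridge coefficients solve (X'X + kI)b = X'y for a bias parameter k >= 0; the nearly collinear data are invented:

        import numpy as np

        def ridge_coefficients(X, y, k):
            """Ridge estimates: b = (X'X + kI)^(-1) X'y."""
            p = X.shape[1]
            return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

        # Invented, highly correlated independent variables:
        rng = np.random.default_rng(1)
        x1 = rng.normal(size=50)
        x2 = x1 + rng.normal(scale=0.01, size=50)   # nearly collinear with x1
        X = np.column_stack([x1, x2])
        y = x1 + x2 + rng.normal(scale=0.1, size=50)

        print(ridge_coefficients(X, y, k=0.0))   # unstable least-squares estimates
        print(ridge_coefficients(X, y, k=1.0))   # stabilized ridge estimates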

  22. A Computer Program for Preliminary Data Analysis

    Treesearch

    Dennis L. Schweitzer

    1967-01-01

    A computer program written in FORTRAN has been designed to summarize data. Class frequencies, means, and standard deviations are printed for as many as 100 independent variables. Cross-classifications of an observed dependent variable and of a dependent variable predicted by a multiple regression equation can also be generated.

  23. Computer-Assisted Learning in Elementary Reading: A Randomized Control Trial

    ERIC Educational Resources Information Center

    Shannon, Lisa Cassidy; Styers, Mary Koenig; Wilkerson, Stephanie Baird; Peery, Elizabeth

    2015-01-01

    This study evaluated the efficacy of Accelerated Reader, a computer-based learning program, at improving student reading. Accelerated Reader is a progress-monitoring, assessment, and practice tool that supports classroom instruction and guides independent reading. Researchers used a randomized controlled trial to evaluate the program with 344…

  24. OPDOT: A computer program for the optimum preliminary design of a transport airplane

    NASA Technical Reports Server (NTRS)

    Sliwa, S. M.; Arbuckle, P. D.

    1980-01-01

    A description of a computer program, OPDOT, for the optimal preliminary design of transport aircraft is given. OPDOT utilizes constrained parameter optimization to minimize a performance index (e.g., direct operating cost per block hour) while satisfying operating constraints. The approach in OPDOT uses geometric descriptors as independent design variables, which are systematically iterated to find the optimum design. The technical development of the program is provided, and a program listing with sample input and output illustrates its use in preliminary design. The document is not meant to be a user's guide, but rather a description of a useful design tool developed for studying the application of new technologies to transport airplanes.
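
    As a loose sketch of constrained parameter optimization in this spirit (the cost index, constraint, and design variables below are invented stand-ins, not OPDOT's actual model), a performance index can be minimized over geometric descriptors subject to an operating constraint:

        from scipy.optimize import minimize

        # Invented stand-in: minimize a toy "operating cost" over two geometric
        # descriptors, wing area S and aspect ratio AR, with a toy constraint.
        def cost(v):
            S, AR = v
            return 0.02 * S + 50.0 / AR

        def operating_constraint(v):          # require S >= 120 (invented)
            S, AR = v
            return S - 120.0

        result = minimize(cost, x0=[150.0, 8.0],
                          bounds=[(100.0, 300.0), (6.0, 12.0)],
                          constraints=[{"type": "ineq", "fun": operating_constraint}])
        print(result.x, result.fun)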

  25. The engineering design integration (EDIN) system. [digital computer program complex]

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Reiners, S. J.

    1974-01-01

    A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.

  26. A Program for Computing Steady Inviscid Three-Dimensional Supersonic Flow on Reentry Vehicles. Volume I: Analysis and Programming

    DTIC Science & Technology

    1977-02-11

    A comprehensive computational procedure is presented for predicting the ...Aeroballistic Reentry Technology (ART) program with some of the fundamental analytical and numerical work supported by NSWC Independent Research Funds. Most of ...the Aerospace Corporation. The authors gratefully acknowledge the efforts of Mr. R. Feldhuhn, NSWC coordinator for the ART program, who was responsible

  27. A Randomized Field Trial of the Fast ForWord Language Computer-Based Training Program

    ERIC Educational Resources Information Center

    Borman, Geoffrey D.; Benson, James G.; Overman, Laura

    2009-01-01

    This article describes an independent assessment of the Fast ForWord Language computer-based training program developed by Scientific Learning Corporation. Previous laboratory research involving children with language-based learning impairments showed strong effects on their abilities to recognize brief and fast sequences of nonspeech and speech…

  28. Research in mathematical theory of computation. [computer programming applications]

    NASA Technical Reports Server (NTRS)

    Mccarthy, J.

    1973-01-01

    Research progress in the following areas is reviewed: (1) a new version of the computer program LCF (logic for computable functions), including a facility to search for proofs automatically; (2) the description of the language PASCAL in terms of both LCF and first order logic; (3) discussion of LISP semantics in LCF and an attempt to prove the correctness of the London compilers in a formal way; (4) design of both special purpose and domain independent proving procedures with program correctness specifically in mind; (5) design of languages for describing such proof procedures; and (6) the embedding of these ideas in the first order checker.

  29. Cumulative Poisson Distribution Program

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert

    1990-01-01

    Overflow and underflow in sums prevented. Cumulative Poisson Distribution Program, CUMPOIS, one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), used independently of one another. CUMPOIS determines cumulative Poisson distribution, used to evaluate cumulative distribution function (cdf) for gamma distributions with integer shape parameters and cdf for chi-square (χ²) distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C.
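
    A hedged sketch of the computation CUMPOIS performs (the program itself is written in C; this is not its listing): the cumulative Poisson probability is accumulated in log space to avoid the overflow and underflow noted above, and the gamma-cdf identity mentioned in the abstract can be checked against it.

        import math

        def cumulative_poisson(k, lam):
            """P(X <= k) for X ~ Poisson(lam), with each term built in log space
            to avoid overflow/underflow in the factorials and powers."""
            total = 0.0
            for i in range(k + 1):
                total += math.exp(i * math.log(lam) - lam - math.lgamma(i + 1))
            return total

        # Identity for gamma cdfs with integer shape n:
        # P(Gamma(n, 1) > lam) = P(Poisson(lam) <= n - 1)
        lam, n = 3.0, 5
        print(cumulative_poisson(n - 1, lam))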

  30. ODIN system technology module library, 1972 - 1973

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Watson, D. A.; Glatt, C. R.; Jones, R. T.; Galipeau, J.; Phoa, Y. T.; White, R. J.

    1978-01-01

    ODIN/RLV is a digital computing system for the synthesis and optimization of reusable launch vehicle preliminary designs. The system consists of a library of technology modules in the form of independent computer programs and an executive program, ODINEX, which operates on the technology modules. The technology module library contains programs for estimating all major military flight vehicle system characteristics, for example, geometry, aerodynamics, economics, propulsion, inertia and volumetric properties, trajectories and missions, steady state aeroelasticity and flutter, and stability and control. A general system optimization module, a computer graphics module, and a program precompiler are available as user aids in the ODIN/RLV program technology module library.

  31. Kruskal-Wallis test: BASIC computer program to perform nonparametric one-way analysis of variance and multiple comparisons on ranks of several independent samples.

    PubMed

    Theodorsson-Norheim, E

    1986-08-01

    Multiple t tests at a fixed p level are frequently used to analyse biomedical data where analysis of variance followed by multiple comparisons, or the adjustment of the p values according to Bonferroni, would be more appropriate. The Kruskal-Wallis test is a nonparametric 'analysis of variance' which may be used to compare several independent samples. The present program is written in an elementary subset of BASIC and will perform the Kruskal-Wallis test followed by multiple comparisons between the groups on practically any computer programmable in BASIC.
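
    The program itself is in BASIC; as a hedged modern equivalent of the computation it performs, the Kruskal-Wallis statistic for several independent samples is available in scipy (the sample data are invented):

        from scipy.stats import kruskal

        # Three invented independent samples:
        g1 = [2.9, 3.0, 2.5, 2.6, 3.2]
        g2 = [3.8, 2.7, 4.0, 2.4]
        g3 = [2.8, 3.4, 3.7, 2.2, 2.0]

        # Nonparametric one-way 'analysis of variance' on ranks:
        h, p = kruskal(g1, g2, g3)
        print(f"H = {h:.3f}, p = {p:.3f}")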

  32. HOPI: on-line injection optimization program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LeMaire, J L

    1977-10-26

    A method of matching the beam from the 200 MeV linac to the AGS without the necessity of making emittance measurements is presented. An on-line computer program written on the PDP10 computer performs the matching by independently modifying the horizontal and vertical emittances. Experimental results show success with this method, which can be applied to any matching section.

  33. The Simultaneous Production Model; A Model for the Construction, Testing, Implementation and Revision of Educational Computer Simulation Environments.

    ERIC Educational Resources Information Center

    Zillesen, Pieter G. van Schaick

    This paper introduces a hardware- and software-independent model for producing educational computer simulation environments. The model, which is based on the results of 32 studies of educational computer simulation program production, implies that educational computer simulation environments are specified, constructed, tested, implemented, and…

  34. Computation of transonic potential flow about 3 dimensional inlets, ducts, and bodies

    NASA Technical Reports Server (NTRS)

    Reyhner, T. A.

    1982-01-01

    An analysis was developed and a computer code, P465 Version A, written for the prediction of transonic potential flow about three dimensional objects including inlet, duct, and body geometries. Finite differences and line relaxation are used to solve the complete potential flow equation. The coordinate system used for the calculations is independent of body geometry. Cylindrical coordinates are used for the computer code. The analysis is programmed in extended FORTRAN 4 for the CYBER 203 vector computer. The programming of the analysis is oriented toward taking advantage of the vector processing capabilities of this computer. Comparisons of computed results with experimental measurements are presented to verify the analysis. Descriptions of program input and output formats are also presented.

  35. Newton/Poisson-Distribution Program

    NASA Technical Reports Server (NTRS)

    Bowerman, Paul N.; Scheuer, Ernest M.

    1990-01-01

    NEWTPOIS, one of two computer programs making calculations involving cumulative Poisson distributions. NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714) used independently of one another. NEWTPOIS determines Poisson parameter for given cumulative probability, from which one obtains percentiles for gamma distributions with integer shape parameters and percentiles for chi-square (χ²) distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Program written in C.
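
    A hedged sketch of the inversion NEWTPOIS performs (the program itself is written in C; this is not its listing): Newton's method finds the Poisson parameter lam at which the cumulative probability P(X <= k) reaches a given target, using dF/dlam = -pmf(k, lam).

        import math

        def poisson_pmf(i, lam):
            return math.exp(i * math.log(lam) - lam - math.lgamma(i + 1))

        def poisson_cdf(k, lam):
            return sum(poisson_pmf(i, lam) for i in range(k + 1))

        def poisson_parameter(k, target, lam=1.0):
            """Newton iteration for lam such that P(X <= k) = target."""
            for _ in range(50):
                f = poisson_cdf(k, lam) - target
                if abs(f) < 1e-12:
                    break
                # dF/dlam = -pmf(k, lam), so the Newton step adds f/pmf.
                lam += f / poisson_pmf(k, lam)
            return lam

        print(poisson_parameter(k=4, target=0.95))   # lam with P(X <= 4) = 0.95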

  36. PISCES 2 users manual

    NASA Technical Reports Server (NTRS)

    Pratt, Terrence W.

    1987-01-01

    PISCES 2 is a programming environment and set of extensions to Fortran 77 for parallel programming. It is intended to provide a basis for writing programs for scientific and engineering applications on parallel computers in a way that is relatively independent of the particular details of the underlying computer architecture. This user's manual provides a complete description of the PISCES 2 system as it is currently implemented on the 20 processor Flexible FLEX/32 at NASA Langley Research Center.

  37. Consistent and efficient processing of ADCP streamflow measurements

    USGS Publications Warehouse

    Mueller, David S.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan

    2016-01-01

    The use of Acoustic Doppler Current Profilers (ADCPs) from a moving boat is a commonly used method for measuring streamflow. Currently, the algorithms used to compute the average depth, compute edge discharge, identify invalid data, and estimate velocity and discharge for invalid data vary among manufacturers. These differences could result in different discharges being computed from identical data. A consistent computational algorithm, automated filtering, and quality assessment of ADCP streamflow measurements that are independent of the ADCP manufacturer are being developed in a software program that can process ADCP moving-boat discharge measurements independent of the ADCP used to collect the data.

  38. SnapAnatomy, a computer-based interactive tool for independent learning of human anatomy.

    PubMed

    Yip, George W; Rajendran, Kanagasuntheram

    2008-06-01

    Computer-aided instruction materials are becoming increasingly popular in medical education and particularly in the teaching of human anatomy. This paper describes SnapAnatomy, a new interactive program that the authors designed for independent learning of anatomy. SnapAnatomy is primarily tailored for the beginner student to encourage the learning of anatomy by developing a three-dimensional visualization of human structure that is essential to applications in clinical practice and the understanding of function. The program allows the student to take apart and to accurately put together body components in an interactive, self-paced and variable manner to achieve the learning outcome.

  39. Instructional Support Software System. Final Report.

    ERIC Educational Resources Information Center

    McDonnell Douglas Astronautics Co. - East, St. Louis, MO.

    This report describes the development of the Instructional Support System (ISS), a large-scale, computer-based training system that supports both computer-assisted instruction and computer-managed instruction. Written in the Ada programming language, the ISS software package is designed to be machine independent. It is also grouped into functional…

  40. Implications of Windowing Techniques for CAI.

    ERIC Educational Resources Information Center

    Heines, Jesse M.; Grinstein, Georges G.

    This paper discusses the use of a technique called windowing in computer assisted instruction to allow independent control of functional areas in complex CAI displays and simultaneous display of output from a running computer program and coordinated instructional material. Two obstacles to widespread use of CAI in computer science courses are…

  41. Gifted Students and Logo: Teacher's Role.

    ERIC Educational Resources Information Center

    Flickinger, Gayle Glidden

    1987-01-01

    The Logo computer program is well-suited to gifted students' learning style characteristics (independence, fluency, persistence); learning style preferences (learning alone, use of tactile and kinesthetic senses, and sound in the learning environment); and teaching method preferences (independent projects, discussion, flexibility, and traditional…

  42. Computer program to simulate Raman scattering

    NASA Technical Reports Server (NTRS)

    Zilles, B.; Carter, R.

    1977-01-01

    A computer program is described for simulating the vibration-rotation and pure rotational spectrum of a combustion system consisting of various diatomic molecules and CO2 as a function of temperature and number density. Two kinds of spectra are generated: a pure rotational spectrum for any mixture of diatomic and linear triatomic molecules, and a vibrational spectrum for diatomic molecules. The program is designed to accept independent rotational and vibrational temperatures for each molecule, as well as number densities.

  43. BASIC Instructional Program: System Documentation.

    ERIC Educational Resources Information Center

    Dageforde, Mary L.

    This report documents the BASIC Instructional Program (BIP), a "hands-on laboratory" that teaches elementary programming in the BASIC language, as implemented in the MAINSAIL language, a machine-independent revision of SAIL which should facilitate implementation of BIP on other computing systems. Eight instructional modules which make up…

  44. A FORTRAN technique for correlating a circular environmental variable with a linear physiological variable in the sugar maple.

    PubMed

    Pease, J M; Morselli, M F

    1987-01-01

    This paper deals with a computer program adapted to a statistical method for analyzing an unlimited quantity of binary recorded data of an independent circular variable (e.g. wind direction) and a linear variable (e.g. maple sap flow volume). Circular variables cannot be statistically analyzed with linear methods unless they have been transformed. The program calculates a critical quantity, the acrophase angle (PHI, phi o). The technique is adapted from original mathematics [1] and is written in Fortran 77 for easier conversion between computer networks. Following the program, correlation analysis can be performed, or regression analysis, which, because of the circular nature of the independent variable, becomes periodic regression. The technique was tested on a file of approximately 4050 data pairs.
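
    The acrophase computation follows the cited mathematics and the program is in Fortran 77; purely as a generic hedged sketch of periodic regression with a circular independent variable, the model y = M + a*cos(theta) + b*sin(theta) can be fitted linearly and the amplitude and phase angle recovered from a and b (the data are invented):

        import numpy as np

        # Invented data: a linear response y driven by a circular variable theta.
        rng = np.random.default_rng(2)
        theta = rng.uniform(0.0, 2.0 * np.pi, 100)    # e.g. wind direction, radians
        y = 5.0 + 2.0 * np.cos(theta - 1.0) + rng.normal(0.0, 0.1, 100)

        # Periodic regression: y ~ M + a*cos(theta) + b*sin(theta).
        A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
        M, a, b = np.linalg.lstsq(A, y, rcond=None)[0]

        amplitude = np.hypot(a, b)
        phase = np.arctan2(b, a)                      # phase (acrophase-like) angle
        print(f"mesor {M:.2f}, amplitude {amplitude:.2f}, phase {phase:.2f} rad")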

  45. The Impact of Socioeconomic Status on Achievement of High School Students Participating in a One-to-One Laptop Computer Program

    ERIC Educational Resources Information Center

    Weers, Anthony J.

    2012-01-01

    The purpose of this study was to determine the impact of socioeconomic status on the achievement of high school students participating in a one-to-one laptop computer program. Students living in poverty struggle to achieve in schools across the country, and educators must address this issue. The independent variable in this study is socioeconomic…

  46. An improved multiple linear regression and data analysis computer program package

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, together with CREDUC and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum-seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.

  47. SU-E-T-49: A Multi-Institutional Study of Independent Dose Verification for IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baba, H; Tachibana, H; Kamima, T

    2015-06-15

    Purpose: AAPM TG114 does not cover independent verification for IMRT. We conducted a study of independent dose verification for IMRT in seven institutes to show its feasibility. Methods: 384 IMRT plans for the prostate and head and neck (HN) sites were collected from the institutes, where the planning was performed using Eclipse and Pinnacle3 with the two techniques of step and shoot (S&S) and sliding window (SW). All of the institutes used the same independent dose verification software program (Simple MU Analysis: SMU, Triangle Product, Ishikawa, JP), which is Clarkson-based; CT images were used to compute radiological path length. An ion-chamber measurement in a water-equivalent slab phantom was performed to compare the doses computed using the TPS and the independent dose verification program. Additionally, the agreement between the dose computed by the TPS and by the SMU in patient CT images was assessed. The dose of the composite beams in each plan was evaluated. Results: The agreements between the measurement and the SMU were −2.3±1.9% and −5.6±3.6% for the prostate and HN sites, respectively. The agreements between the TPSs and the SMU were −2.1±1.9% and −3.0±3.7% for the prostate and HN sites, respectively. There was a negative systematic difference with a similar standard deviation, and the difference was larger for the HN site. The S&S technique showed a statistically significant difference from the SW technique, because the Clarkson-based method in the independent program cannot consider the dose under the MLC and therefore underestimates it. Conclusion: The accuracy would improve if the Clarkson-based algorithm were modified for IMRT, and the tolerance level would then be within 5%.

  48. Programmable Pulse Generator

    NASA Technical Reports Server (NTRS)

    Rhim, W. K.; Dart, J. A.

    1982-01-01

    New pulse generator programmed to produce pulses from several ports at different pulse lengths and intervals and virtually any combination and sequence. Unit contains a 256-word-by-16-bit memory loaded with instructions either manually or by computer. Once loaded, unit operates independently of computer.

  49. Pseudo-random number generator for the Sigma 5 computer

    NASA Technical Reports Server (NTRS)

    Carroll, S. N.

    1983-01-01

    A technique is presented for developing a pseudo-random number generator based on the linear congruential form. The two numbers used for the generator are a prime number and a corresponding primitive root, where the prime is the largest prime number that can be accurately represented on a particular computer. The primitive root is selected by applying Marsaglia's lattice test. The technique presented was applied to write a random number program for the Sigma 5 computer. The new program, named S:RANDOM1, is judged to be superior to the older program named S:RANDOM. For applications requiring several independent random number generators, a table is included showing several acceptable primitive roots. The technique and programs described can be applied to any computer having word length different from that of the Sigma 5.
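
    The description (largest representable prime as modulus, a primitive root as multiplier, lattice-tested) matches the classic Lehmer-style generator; a hedged sketch with the well-known Park-Miller constants, not the S:RANDOM1 listing:

        # Lehmer-style generator: x_{n+1} = (a * x_n) mod m, with m the Mersenne
        # prime 2**31 - 1 and a = 16807, a primitive root modulo m.
        M = 2**31 - 1
        A = 16807

        def lcg(seed):
            x = seed
            while True:
                x = (A * x) % M
                yield x / M          # uniform deviate in (0, 1)

        gen = lcg(seed=12345)
        print([next(gen) for _ in range(3)])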

  50. GASPRNG: GPU accelerated scalable parallel random number generator library

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications.
    Catalogue identifier: AEOI_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: UTK license.
    No. of lines in distributed program, including test data, etc.: 167900
    No. of bytes in distributed program, including test data, etc.: 1422058
    Distribution format: tar.gz
    Programming language: C and CUDA.
    Computer: Any PC or workstation with NVIDIA GPU (tested on Fermi GTX480, Tesla C1060, Tesla M2070).
    Operating system: Linux with CUDA version 4.0 or later. Should also run on MacOS, Windows, or UNIX.
    Has the code been vectorized or parallelized?: Yes. Parallelized using MPI directives.
    RAM: 512 MB to 732 MB (main memory on host CPU, depending on the data type of random numbers) / 512 MB (GPU global memory)
    Classification: 4.13, 6.5.
    Nature of problem: Many computational science applications are able to consume large numbers of random numbers. For example, Monte Carlo simulations are able to consume limitless random numbers for the computation as long as resources for the computing are supported. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generators of independent streams of random numbers using graphical processing units (GPUs).
    Solution method: Multiple copies of random number generators in GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generators library to allow a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs.
Running time: The tests provided take a few minutes to run.

  51. HPCCP/CAS Workshop Proceedings 1998

    NASA Technical Reports Server (NTRS)

    Schulbach, Catherine; Mata, Ellen (Editor); Schulbach, Catherine (Editor)

    1999-01-01

    This publication is a collection of extended abstracts of presentations given at the HPCCP/CAS (High Performance Computing and Communications Program/Computational Aerosciences Project) Workshop held on August 24-26, 1998, at NASA Ames Research Center, Moffett Field, California. The objective of the Workshop was to bring together the aerospace high performance computing community, consisting of airframe and propulsion companies, independent software vendors, university researchers, and government scientists and engineers. The Workshop was sponsored by the HPCCP Office at NASA Ames Research Center. The Workshop consisted of over 40 presentations, including an overview of NASA's High Performance Computing and Communications Program and the Computational Aerosciences Project; ten sessions of papers representative of the high performance computing research conducted within the Program by the aerospace industry, academia, NASA, and other government laboratories; two panel sessions; and a special presentation by Mr. James Bailey.

  52. Avionic Data Bus Integration Technology

    DTIC Science & Technology

    1991-12-01

    address the hardware-software interaction between a digital data bus and an avionic system. Very Large Scale Integration (VLSI) ICs and multiversion ...the SCP. In 1984, the Sperry Corporation developed a fault tolerant system which employed multiversion programming, voting, and monitoring for error... MULTIVERSION PROGRAMMING: N-version programming. N-VERSION PROGRAMMING: The independent coding of a number, N, of redundant computer programs that

  53. A survey of compiler optimization techniques

    NASA Technical Reports Server (NTRS)

    Schneck, P. B.

    1972-01-01

    Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code by using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at the source-code level is also presented.

  54. The Implementation of Blended Learning Using Android-Based Tutorial Video in Computer Programming Course II

    NASA Astrophysics Data System (ADS)

    Huda, C.; Hudha, M. N.; Ain, N.; Nandiyanto, A. B. D.; Abdullah, A. G.; Widiaty, I.

    2018-01-01

    The computer programming course is theoretical, and sufficient practice is necessary to facilitate conceptual understanding and to encourage creativity in designing computer programs/animations. The development of a tutorial video for Android-based blended learning is needed to guide students. Using Android-based instructional material, students can learn independently anywhere and anytime. The tutorial video can facilitate students’ understanding of the concepts, materials, and procedures of programming/animation making in detail. This study employed a Research and Development method adapting Thiagarajan’s 4D model. The developed Android-based instructional material and tutorial video were validated by experts in instructional media and experts in physics education. The expert validation results showed that the Android-based material was comprehensive and very feasible. The tutorial video was deemed feasible as it received an average score of 92.9%. It was also revealed that students’ conceptual understanding, skills, and creativity in designing computer programs/animations improved significantly.

  55. Generalized fish life-cycle population model and computer program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeAngelis, D. L.; Van Winkle, W.; Christensen, S. W.

    1978-03-01

    A generalized fish life-cycle population model and computer program have been prepared to evaluate the long-term effect of changes in mortality in age class 0. The general question concerns what happens to a fishery when density-independent sources of mortality are introduced that act on age class 0, particularly entrainment and impingement at power plants. This paper discusses the model formulation and computer program, including sample results. The population model consists of a system of difference equations involving age-dependent fecundity and survival. The fecundity for each age class is assumed to be a function of both the fraction of females sexually mature and the weight of females as they enter each age class. Natural mortality for age classes 1 and older is assumed to be independent of population size. Fishing mortality is assumed to vary with the number and weight of fish available to the fishery. Age class 0 is divided into six life stages. The probability of survival for age class 0 is estimated considering both density-independent mortality (natural and power plant) and density-dependent mortality for each life stage. Two types of density-dependent mortality are included. These are cannibalism of each life stage by older age classes and intra-life-stage competition.
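
    A hedged, heavily simplified sketch of the model structure described above (all parameters are invented, and the real program splits age class 0 into six life stages): difference equations step the age classes forward with density-independent survival for ages 1 and older and combined density-independent and density-dependent mortality for age class 0.

        import numpy as np

        # Invented parameters for an age-structured difference-equation sketch:
        survival = np.array([0.5, 0.7, 0.8])      # ages 1..3, density-independent
        fecundity = np.array([0.0, 200.0, 400.0]) # eggs per fish entering each class

        def age0_survival(eggs, p0=0.01, beta=1e-5):
            """Age-0 survival: density-independent factor p0 (natural plus
            power-plant mortality) times a density-dependent reduction."""
            return p0 * np.exp(-beta * eggs)

        n = np.array([100.0, 50.0, 20.0])         # abundances for ages 1..3
        for year in range(20):
            eggs = float(fecundity @ n)
            recruits = eggs * age0_survival(eggs)              # survivors of age 0
            n = np.concatenate([[recruits], survival[:-1] * n[:-1]])
        print(n)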

  56. Applications of automatic differentiation in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Carle, A.; Bischof, C.; Haigler, Kara J.; Newman, Perry A.

    1994-01-01

    Automatic differentiation (AD) is a powerful computational method that provides for computing exact sensitivity derivatives (SD) from existing computer programs for multidisciplinary design optimization (MDO) or sensitivity analysis. A pre-compiler AD tool for FORTRAN programs called ADIFOR has been developed. The ADIFOR tool has been easily and quickly applied by NASA Langley researchers to assess the feasibility and computational impact of AD in MDO with several different FORTRAN programs. These include a state-of-the-art three-dimensional multigrid Navier-Stokes flow solver for wings or aircraft configurations in transonic turbulent flow. With ADIFOR the user specifies sets of independent and dependent variables in an existing computer code. ADIFOR then traces the dependency path throughout the code, applies the chain rule to formulate derivative expressions, and generates new code to compute the required SD matrix. The resulting codes have been verified to compute exact non-geometric and geometric SD for a variety of cases, in less time than is required to compute the SD matrix using centered divided differences.
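
    ADIFOR is a source-to-source tool for FORTRAN programs; as a hedged illustration of the chain-rule mechanics it automates, a few lines of forward-mode AD with dual numbers compute an exact derivative alongside the function value:

        class Dual:
            """Forward-mode AD value carrying f and df/dx; each operator
            applies the chain (sum/product) rule to the derivative part."""
            def __init__(self, val, der=0.0):
                self.val, self.der = val, der
            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val + o.val, self.der + o.der)
            __radd__ = __add__
            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val * o.val,
                            self.der * o.val + self.val * o.der)
            __rmul__ = __mul__

        def f(x):
            return 3.0 * x * x + 2.0 * x + 1.0    # f'(x) = 6x + 2

        y = f(Dual(2.0, 1.0))                     # seed dx/dx = 1
        print(y.val, y.der)                       # 17.0 and the exact 14.0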

  57. Java: A New Brew for Educators, Administrators and Students.

    ERIC Educational Resources Information Center

    Gordon, Barbara

    1996-01-01

    Java is an object-oriented programming language developed by Sun Microsystems; its benefits include platform independence, security, and interactivity. Within the college community, Java is being used in programming courses, collaborative technology research projects, computer graphics instruction, and distance education. (AEF)

  58. Analytical evaluation of ILM sensors. Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Kirk, R. J.

    1975-01-01

    The applicability of various sensing concepts to independent landing monitor systems was analyzed. Microwave landing system (MLS) accuracy requirements are presented along with a description of MLS airborne equipment. Computer programs developed during the analysis are described and include: a mathematical computer model for use in the performance assessment of reconnaissance sensor systems; a theoretical formulation of electromagnetic scattering to generate data at high incidence angles; atmospheric attenuation of microwaves; and microwave radiometry.

  59. NASA-Ames three-dimensional potential flow analysis system (POTFAN) equation solver code (SOLN) version 1

    NASA Technical Reports Server (NTRS)

    Davis, J. E.; Bonnett, W. S.; Medan, R. T.

    1976-01-01

    A computer program known as SOLN was developed as an independent segment of the NASA-Ames three-dimensional potential flow analysis system (POTFAN) to solve systems of linear algebraic equations. Methods used include LU decomposition, Householder's method, a partitioning scheme, and a block successive relaxation method. Due to the independent, modular nature of the program, it may be used by itself and not necessarily in conjunction with other segments of the POTFAN system.

  60. Integrating computer programs for engineering analysis and design

    NASA Technical Reports Server (NTRS)

    Wilhite, A. W.; Crisp, V. K.; Johnson, S. C.

    1983-01-01

    The design of a third-generation system for integrating computer programs for engineering analysis and design, the Aerospace Vehicle Interactive Design (AVID) system, has been developed. This system consists of an engineering data management system, program interface software, a user interface, and a geometry system. A relational information system (ARIS) was developed specifically for the computer-aided engineering system. It is used as a repository for design data that are communicated between analysis programs, as a dictionary that describes these design data, as a directory that describes the analysis programs, and for other system functions. A method is described for interfacing independent analysis programs into a loosely-coupled design system. This method emphasizes an interactive extension of analysis techniques and manipulation of design data. Also, integrity mechanisms exist to maintain database correctness for multidisciplinary design tasks performed by an individual or a team of specialists. Finally, a prototype user interface program has been developed to aid in system utilization.

  61. Programming for physicians: A free online course.

    PubMed

    Kubben, Pieter L

    2016-01-01

    This article is an introduction for clinical readers to programming and computational thinking using the programming language Python. Exercises can be done completely online without any need for installation of software. Participants will be taught the fundamentals of programming, which are necessarily independent of the sort of application (stand-alone, web, mobile, engineering, or statistical/machine learning) that is to be developed afterward.

  62. SU-E-T-455: Impact of Different Independent Dose Verification Software Programs for Secondary Check

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Itano, M; Yamazaki, T; Kosaka, M

    2015-06-15

    Purpose: There have been many reports on different dose calculation algorithms for treatment planning systems (TPS). An independent dose verification program (IndpPro) is essential to verify clinical plans from the TPS. However, the accuracy of different independent dose verification programs has not been evident. We conducted a multi-institutional study to reveal the impact of different IndpPros used with different TPSs. Methods: Three institutes participated in this study. They used two different IndpPros, RADCALC and Simple MU Analysis (SMU), both of which implement the Clarkson algorithm. RADCALC required the input of the radiological path length (RPL) computed by the TPS (Eclipse or Pinnacle3), whereas SMU used CT images to compute the RPL independently of the TPS. An ion-chamber measurement in a water-equivalent phantom was performed to evaluate the accuracy of the two IndpPros and the TPS in each institute. Next, the accuracy of dose calculation using the two IndpPros compared to the TPS was assessed for clinical plans. Results: The accuracy of the IndpPros and the TPSs in the homogeneous phantom was within ±1% of the measurement. 1543 treatment fields were collected from the patients treated in the institutes. RADCALC showed better accuracy (0.9 ± 2.2%) than SMU (1.7 ± 2.1%). However, the accuracy was dependent on the TPS (Eclipse: 0.5%, Pinnacle3: 1.0%); the accuracy of RADCALC with Eclipse was similar to that of SMU in one of the institutes. Conclusion: Depending on the independent dose verification program, the accuracy shows systematic variation even though the measurement comparison showed similar variation. The variation was affected by the radiological path length calculation. An IndpPro paired with Pinnacle3 shows a different variation because Pinnacle3 computes the RPL using physical density, whereas Eclipse and SMU use electron density.

  3. Is the GUI approach to Computer Development (For Example, Mac, and Windows Technology) a Threat to Computer Users Who Are Blind?

    ERIC Educational Resources Information Center

    Melrose, S.; And Others

    1995-01-01

    In this point/counterpoint feature, S. Melrose contends that complex graphical user interfaces (GUIs) threaten the independence and equal employment of individuals with blindness. D. Wakefield then points out that access to the Windows software program for blind computer users is extremely unpredictable, and J. Gill describes a major European…

  4. Learners' Field Dependence and the Effects of Personalized Narration on Learners' Computer Perceptions and Task-Related Attitudes in Multimedia Learning

    ERIC Educational Resources Information Center

    Liew, Tze Wei; Tan, Su-Mae; Seydali, Rouzbeh

    2014-01-01

    In this article, the effects of personalized narration in multimedia learning on learners' computer perceptions and task-related attitudes were examined. Twenty-six field independent and 22 field dependent participants studied the computer-based multimedia lessons on C-Programming, either with personalized narration or non-personalized narration.…

  5. Research and development program for the development of advanced time-temperature dependent constitutive relationships. Volume 1: Theoretical discussion

    NASA Technical Reports Server (NTRS)

    Cassenti, B. N.

    1983-01-01

    The results of a 10-month research and development program for the development of advanced time-temperature constitutive relationships are presented. The program included (1) the effect of rate of change of temperature, (2) the development of a term to include time-independent effects, and (3) improvements in computational efficiency. It was shown that the rate of change of temperature could have a substantial effect on the predicted material response. A modification to include time-independent effects, applicable to many viscoplastic constitutive theories, was shown to reduce to classical plasticity. The computation time can be reduced by a factor of two if self-adaptive integration is used instead of an integration using ordinary forward differences. During the course of the investigation, it was demonstrated that the most important single factor affecting the theoretical accuracy was the choice of material parameters.

  6. A Modular Three-Dimensional Finite-Difference Ground-Water Flow Model

    USGS Publications Warehouse

    McDonald, Michael G.; Harbaugh, Arlen W.; Guo, Weixing; Lu, Guoping

    1988-01-01

    This report presents a finite-difference model and its associated modular computer program. The model simulates flow in three dimensions. The report includes detailed explanations of physical and mathematical concepts on which the model is based and an explanation of how those concepts are incorporated in the modular structure of the computer program. The modular structure consists of a Main Program and a series of highly independent subroutines called 'modules.' The modules are grouped into 'packages.' Each package deals with a specific feature of the hydrologic system which is to be simulated, such as flow from rivers or flow into drains, or with a specific method of solving linear equations which describe the flow system, such as the Strongly Implicit Procedure or Slice-Successive Overrelaxation. The division of the program into modules permits the user to examine specific hydrologic features of the model independently. This also facilitates development of additional capabilities because new packages can be added to the program without modifying the existing packages. The input and output systems of the computer program are also designed to permit maximum flexibility. Ground-water flow within the aquifer is simulated using a block-centered finite-difference approach. Layers can be simulated as confined, unconfined, or a combination of confined and unconfined. Flow associated with external stresses, such as wells, areal recharge, evapotranspiration, drains, and streams, can also be simulated. The finite-difference equations can be solved using either the Strongly Implicit Procedure or Slice-Successive Overrelaxation. The program is written in FORTRAN 77 and will run without modification on most computers that have a FORTRAN 77 compiler. For each program module, this report includes a narrative description, a flow chart, a list of variables, and a module listing.
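
    The package idea lends itself to a schematic sketch (Python pseudocode purely for illustration; the actual program is FORTRAN 77 and the package names here are hypothetical): each package contributes its terms to the flow equations independently, so the Main Program needs no changes when packages are added or removed.

        class WellPackage:
            """Adds specified-flux terms for pumping wells."""
            def __init__(self, pumping_rate):
                self.q = pumping_rate

            def formulate(self, rhs):
                return rhs + self.q        # wells enter the right-hand side directly

        class RechargePackage:
            """Adds areal recharge as a flux over the cell area."""
            def __init__(self, recharge_flux, cell_area):
                self.q = recharge_flux * cell_area

            def formulate(self, rhs):
                return rhs + self.q

        packages = [WellPackage(-500.0), RechargePackage(0.002, 1.0e4)]
        rhs = 0.0
        for pkg in packages:               # the Main Program just loops over packages
            rhs = pkg.formulate(rhs)
        print(rhs)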

  7. Distributed Computing with Centralized Support Works at Brigham Young.

    ERIC Educational Resources Information Center

    McDonald, Kelly; Stone, Brad

    1992-01-01

    Brigham Young University (Utah) has addressed the need for maintenance and support of distributed computing systems on campus by implementing a program patterned after a national business franchise, providing the support and training of a centralized administration but allowing each unit to operate much as an independent small business.…

  8. CASKS (Computer Analysis of Storage casKS): A microcomputer based analysis system for storage cask design review. User's manual to Version 1b (including program reference)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, T.F.; Gerhard, M.A.; Trummer, D.J.

    CASKS (Computer Analysis of Storage casKS) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for evaluating safety analysis reports on spent-fuel storage casks. The bulk of the complete program and this user's manual are based upon the SCANS (Shipping Cask ANalysis System) program previously developed at LLNL. A number of enhancements and improvements were added to the original SCANS program to meet requirements unique to storage casks. CASKS is an easy-to-use system that calculates global response of storage casks to impact loads, pressure loads and thermal conditions. This provides reviewers with a tool for an independent check on analyses submitted by licensees. CASKS is based on microcomputers compatible with the IBM-PC family of computers. The system is composed of a series of menus, input programs, cask analysis programs, and output display programs. All data are entered through fill-in-the-blank input screens that contain descriptive data requests.

  9. F-Nets and Software Cabling: Deriving a Formal Model and Language for Portable Parallel Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Parallel programming is still being based upon antiquated sequence-based definitions of the terms "algorithm" and "computation", resulting in programs which are architecture dependent and difficult to design and analyze. By focusing on obstacles inherent in existing practice, a more portable model is derived here, which is then formalized into a model called Soviets which utilizes a combination of imperative and functional styles. This formalization suggests more general notions of algorithm and computation, as well as insights into the meaning of structured programming in a parallel setting. To illustrate how these principles can be applied, a very-high-level graphical architecture-independent parallel language, called Software Cabling, is described, with many of the features normally expected from today's computer languages (e.g. data abstraction, data parallelism, and object-based programming constructs).

  10. An Analysis of the Assignment of the Responsible Test Organization in Simulator Testing.

    DTIC Science & Technology

    1981-09-01

    tie the tasks to the funds. This agreement would prevent the program manager from redirecting funds without knowing the implications of deleting the...of take-over can be regulated by the expertise advances made by AFFTC on each simulator program. This would prevent an overload on AFFTC and possible...support?) Response: independent contractor & 3 er E. Computer support not required for my program (Omit #3 S. DOD personnel used for computer support

  11. Programming for physicians: A free online course

    PubMed Central

    Kubben, Pieter L.

    2016-01-01

    This article is an introduction for clinical readers into programming and computational thinking using the programming language Python. Exercises can be done completely online without any need for installation of software. Participants will be taught the fundamentals of programming, which are necessarily independent of the sort of application (stand-alone, web, mobile, engineering, and statistical/machine learning) that is to be developed afterward. PMID:27127694

  12. Image-Processing Software For A Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    Concurrent Image Processing Executive (CIPE) is a software system intended for developing and using image-processing application programs in a concurrent computing environment. Designed to shield the programmer from the complexities of concurrent-system architecture, it provides an interactive image-processing environment for the end user. CIPE utilizes the architectural characteristics of a particular concurrent system to maximize efficiency while preserving architectural independence from the user and programmer. CIPE runs on a Mark-IIIfp 8-node hypercube computer and an associated SUN-4 host computer.

  13. MODFLOW-2005 : the U.S. Geological Survey modular ground-water model--the ground-water flow process

    USGS Publications Warehouse

    Harbaugh, Arlen W.

    2005-01-01

    This report presents MODFLOW-2005, which is a new version of the finite-difference ground-water model commonly called MODFLOW. Ground-water flow is simulated using a block-centered finite-difference approach. Layers can be simulated as confined or unconfined. Flow associated with external stresses, such as wells, areal recharge, evapotranspiration, drains, and rivers, also can be simulated. The report includes detailed explanations of physical and mathematical concepts on which the model is based, an explanation of how those concepts are incorporated in the modular structure of the computer program, instructions for using the model, and details of the computer code. The modular structure consists of a MAIN Program and a series of highly independent subroutines. The subroutines are grouped into 'packages.' Each package deals with a specific feature of the hydrologic system that is to be simulated, such as flow from rivers or flow into drains, or with a specific method of solving the set of simultaneous equations resulting from the finite-difference method. Several solution methods are incorporated, including the Preconditioned Conjugate-Gradient method. The division of the program into packages permits the user to examine specific hydrologic features of the model independently. This also facilitates development of additional capabilities because new packages can be added to the program without modifying the existing packages. The input and output systems of the computer program also are designed to permit maximum flexibility. The program is designed to allow other capabilities, such as transport and optimization, to be incorporated, but this report is limited to describing the ground-water flow capability. The program is written in Fortran 90 and will run without modification on most computers that have a Fortran 90 compiler.

  14. Eyewitness to history: Landmarks in the development of computerized electrocardiography.

    PubMed

    Rautaharju, Pentti M

    2016-01-01

    The use of digital computers for ECG processing was pioneered in the early 1960s by two immigrants to the US: Hubert Pipberger, who initiated a collaborative VA project to collect an ECG-independent Frank lead data base, and Cesar Caceres at NIH, who selected for his ECAN program standard 12-lead ECGs processed as single leads. Ray Bonner in the early 1970s placed his IBM 5880 program in a cart to print ECGs with interpretation, and computer-ECG programs were developed by Telemed, Marquette, HP-Philips and Mortara. The "Common Standards for quantitative Electrocardiography (CSE)" project directed by Jos Willems evaluated nine ECG programs and eight cardiologists in clinically defined categories. The total accuracy of a representative "average" cardiologist (75.5%) was 5.8% higher than that of the average program (69.7%, p<0.001). Future comparisons of computer-based and expert-reader performance are likely to show evolving results with continuing improvement of computer-ECG algorithms and changing expertise of ECG interpreters. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Senior Computational Scientist | Center for Cancer Research

    Cancer.gov

    The Basic Science Program (BSP) pursues independent, multidisciplinary research in basic and applied molecular biology, immunology, retrovirology, cancer biology, and human genetics. Research efforts and support are an integral part of the Center for Cancer Research (CCR) at the Frederick National Laboratory for Cancer Research (FNLCR). The Cancer & Inflammation Program (CIP),

  16. Building Software Development Capacity to Advance the State of Educational Technology

    ERIC Educational Resources Information Center

    Luterbach, Kenneth J.

    2013-01-01

    Educational technologists may advance the state of the field by increasing capacity to develop software tools and instructional applications. Presently, few academic programs in educational technology require even a single computer programming course. Further, the educational technologists who develop software generally work independently or in…

  17. Constructing Contracts: Making Discrete Mathematics Relevant to Beginning Programmers

    ERIC Educational Resources Information Center

    Gegg-Harrison, Timothy S.

    2005-01-01

    Although computer scientists understand the importance of discrete mathematics to the foundations of their field, computer science (CS) students do not always see the relevance. Thus, it is important to find a way to show students its relevance. The concept of program correctness is generally taught as an activity independent of the programming…

  18. Computer Assisted Mathematics Prescription Learning Pull-out Program in an Elementary School.

    ERIC Educational Resources Information Center

    Swarm, Christine C.

    Summaries of recent research have found computer-assisted instruction to be a highly motivating method of instruction that fosters independent study and provides for the immediate feedback necessary for the encouragement of individualized learning. A nonexperimental study was conducted with fourth, fifth, and sixth grade students (n=88) in a…

  19. Using Storytelling to Hone Language Skills

    ERIC Educational Resources Information Center

    Snider, Michelle

    2008-01-01

    A first glance into the classroom where Phillip Tillery teaches may leave visitors overwhelmed by the array of high-tech equipment wired and ready for access by his students. Some students are working independently at computers while others are immersed in teams at a green screen and motion-capture setup. Various computer programs with myriad…

  20. Integrating a Music Curriculum into an External Degree Program Using Computer Assisted Instruction.

    ERIC Educational Resources Information Center

    Brinkley, Robert C.

    This paper outlines the method and theoretical basis for establishing and implementing an independent study music curriculum. The curriculum combines practical and theoretical paradigms and leads to an external degree. The computer, in direct interaction with the student, is the primary instructional tool, and the teacher is involved in indirect…

  1. System support software for the Space Ultrareliable Modular Computer (SUMC)

    NASA Technical Reports Server (NTRS)

    Hill, T. E.; Hintze, G. C.; Hodges, B. C.; Austin, F. A.; Buckles, B. P.; Curran, R. T.; Lackey, J. D.; Payne, R. E.

    1974-01-01

    The highly transportable programming system designed and implemented to support the development of software for the Space Ultrareliable Modular Computer (SUMC) is described. The SUMC system support software consists of program modules called processors. The initial set of processors consists of the supervisor, the general-purpose assembler for SUMC instruction and microcode input, linkage editors, an instruction-level simulator, a microcode grid print processor, and user-oriented utility programs. A FORTRAN 4 compiler is under development. The design facilitates the addition of new processors with minimum effort and provides the user quasi-host independence on the ground-based operational software development computer. Additional capability is provided to accommodate variations in the SUMC architecture without consequent major modifications of the initial processors.

  2. After-effects of human-computer interaction indicated by P300 of the event-related brain potential.

    PubMed

    Trimmel, M; Huber, R

    1998-05-01

    After-effects of human-computer interaction (HCI) were investigated by using the P300 component of the event-related brain potential (ERP). Forty-nine subjects (naive non-users, beginners, experienced users, programmers) completed three paper/pencil tasks (text editing, solving intelligence test items, filling out a questionnaire on sensation seeking) and three HCI tasks (text editing, executing a tutor program or programming, playing Tetris). The sequence of 7-min tasks was randomized between subjects and balanced between groups. After each experimental condition ERPs were recorded during an acoustic discrimination task at F3, F4, Cz, P3 and P4. Data indicate that: (1) mental after-effects of HCI can be detected by P300 of the ERP; (2) HCI showed in general a reduced amplitude; (3) P300 amplitude varied also with type of task, mainly at F4 where it was smaller after cognitive tasks (intelligence test/programming) and larger after emotion-based tasks (sensation seeking/Tetris); (4) cognitive tasks showed shorter latencies; (5) latencies were widely location-independent (within the range of 356-358 ms at F3, F4, P3 and P4) after executing the tutor program or programming; and (6) all observed after-effects were independent of the user's experience in operating computers and may therefore reflect short-term after-effects only and no structural changes of information processing caused by HCI.

  3. PyPele Rewritten To Use MPI

    NASA Technical Reports Server (NTRS)

    Hockney, George; Lee, Seungwon

    2008-01-01

    A computer program known as PyPele, originally written as a Pythonlanguage extension module of a C++ language program, has been rewritten in pure Python language. The original version of PyPele dispatches and coordinates parallel-processing tasks on cluster computers and provides a conceptual framework for spacecraft-mission- design and -analysis software tools to run in an embarrassingly parallel mode. The original version of PyPele uses SSH (Secure Shell a set of standards and an associated network protocol for establishing a secure channel between a local and a remote computer) to coordinate parallel processing. Instead of SSH, the present Python version of PyPele uses Message Passing Interface (MPI) [an unofficial de-facto standard language-independent application programming interface for message- passing on a parallel computer] while keeping the same user interface. The use of MPI instead of SSH and the preservation of the original PyPele user interface make it possible for parallel application programs written previously for the original version of PyPele to run on MPI-based cluster computers. As a result, engineers using the previously written application programs can take advantage of embarrassing parallelism without need to rewrite those programs.
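
    A minimal sketch of the embarrassingly parallel dispatch pattern described above, using mpi4py (the task list and squaring function are hypothetical stand-ins, not the PyPele interface):

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        tasks = list(range(100))                         # e.g., independent design cases
        my_results = [t * t for t in tasks[rank::size]]  # each rank takes a stride of tasks

        all_results = comm.gather(my_results, root=0)    # collect everything on rank 0
        if rank == 0:
            merged = [r for chunk in all_results for r in chunk]
            print(len(merged), "results")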

  4. Design and assessment of an interactive physics tutoring environment

    NASA Astrophysics Data System (ADS)

    Scott, Lisa Ann

    2001-07-01

    The application of scientific principles is an extremely important skill taught in undergraduate introductory science courses, yet many students emerge from such courses unable to reliably apply the scientific principles they have ostensibly learned. In an attempt to address this problem, the knowledge and thought processes needed to apply an important principle in introductory physics (Newton's law) were carefully analyzed. Reliable performance requires not only declarative knowledge but also corresponding procedural knowledge and the basic cognitive functions of deciding, implementing and assessing. Computer programs called guided-practice PALs (Personal Assistants for Learning) were developed to teach explicitly the knowledge and thought processes needed to apply Newton's law to solve problems. These programs employ a modified form of Palincsar and Brown's reciprocal-teaching strategy (1984) in which students and computers alternately coach each other, taking turns making decisions, implementing and assessing them. The computer programs make it practically feasible to provide students with individual guidance and feedback ordinarily unavailable in most courses. In a pilot study, the guided-practice PALs were found to be nearly as effective as individual tutoring by expert teachers and significantly more effective than the instruction provided in a well-taught physics course. This guided practice, however, is not sufficient to ensure that students develop the ability to perform independently. Accordingly, independent-performance PALs were developed which require students to work independently, receiving only the minimal feedback necessary to successfully complete the task. These independent-performance PALs are interspersed with guided-practice PALs to create an instructional environment which facilitates a gradual transition to independent performance. In a study designed to assess the efficacy of the PAL instruction, students in the PAL group used only guided-practice PALs and students in the PAL+ group used both guided-practice and independent-performance PALs. The performance of the PAL and PAL+ groups was compared to the performance of a Control group which received traditional instruction. The addition of the independent-performance PALs proved to be at least as effective as the guided-practice PALs alone, and both forms of PAL instruction were significantly more effective than traditional instruction.

  5. Web Program for Development of GUIs for Cluster Computers

    NASA Technical Reports Server (NTRS)

    Czikmantory, Akos; Cwik, Thomas; Klimeck, Gerhard; Hua, Hook; Oyafuso, Fabiano; Vinyard, Edward

    2003-01-01

    WIGLAF (a Web Interface Generator and Legacy Application Facade) is a computer program that provides a Web-based, distributed, graphical-user-interface (GUI) framework that can be adapted to any of a broad range of application programs, written in any programming language, that are executed remotely on any cluster computer system. WIGLAF enables the rapid development of a GUI for controlling and monitoring a specific application program running on the cluster and for transferring data to and from the application program. The only prerequisite for the execution of WIGLAF is a Web-browser program on a user's personal computer connected with the cluster via the Internet. WIGLAF has a client/server architecture: The server component is executed on the cluster system, where it controls the application program and serves data to the client component. The client component is an applet that runs in the Web browser. WIGLAF utilizes the Extensible Markup Language to hold all data associated with the application software, Java to enable platform-independent execution on the cluster system and the display of a GUI generator through the browser, and the Java Remote Method Invocation software package to provide simple, effective client/server networking.

  6. Discovery of replicating circular RNAs by RNA-seq and computational algorithms.

    PubMed

    Zhang, Zhixiang; Qi, Shuishui; Tang, Nan; Zhang, Xinxin; Chen, Shanshan; Zhu, Pengfei; Ma, Lin; Cheng, Jinping; Xu, Yun; Lu, Meiguang; Wang, Hongqing; Ding, Shou-Wei; Li, Shifang; Wu, Qingfa

    2014-12-01

    Replicating circular RNAs are independent plant pathogens known as viroids, or act to modulate the pathogenesis of plant and animal viruses as their satellite RNAs. The rate of discovery of these subviral pathogens was low over the past 40 years because the classical approaches are technically demanding and time-consuming. We previously described an approach for homology-independent discovery of replicating circular RNAs by analysing the total small RNA populations from samples of diseased tissues with a computational program known as progressive filtering of overlapping small RNAs (PFOR). However, PFOR, written in the PERL language, is extremely slow and is unable to discover those subviral pathogens that do not trigger in vivo accumulation of extensively overlapping small RNAs. Moreover, PFOR is yet to identify a new viroid capable of initiating independent infection. Here we report the development of PFOR2, which adopted parallel programming in the C++ language and was 3 to 8 times faster than PFOR. A new computational program was further developed and incorporated into PFOR2 to allow the identification of circular RNAs by deep sequencing of long RNAs instead of small RNAs. PFOR2 analysis of the small RNA libraries from grapevine and apple plants led to the discovery of Grapevine latent viroid (GLVd) and Apple hammerhead viroid-like RNA (AHVd-like RNA), respectively. GLVd was proposed as a new species in the genus Apscaviroid, because it contained the typical structural elements found in this group of viroids and initiated independent infection in grapevine seedlings. AHVd-like RNA encoded a biologically active hammerhead ribozyme in both polarities, and was not specifically associated with any of the viruses found in apple plants. We propose that these computational algorithms have the potential to discover novel circular RNAs in plants, invertebrates and vertebrates regardless of whether they replicate and/or induce the in vivo accumulation of small RNAs.
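
    A toy sketch of the overlap-extension idea behind PFOR/PFOR2 (Python, greatly simplified, with made-up reads; the real algorithm progressively filters entire small-RNA populations): a contig grows whenever its suffix matches a read's prefix, and circularity is suggested when the contig's end re-overlaps its own start.

        MIN_OVERLAP = 4

        def extend(contig, reads):
            grew = True
            while grew:
                grew = False
                for r in reads:
                    # try the longest suffix/prefix overlap first
                    for k in range(len(r) - 1, MIN_OVERLAP - 1, -1):
                        if contig.endswith(r[:k]):
                            contig += r[k:]
                            grew = True
                            break
            return contig

        def looks_circular(contig):
            # the end of the assembly overlaps its own start
            return any(contig.endswith(contig[:k])
                       for k in range(MIN_OVERLAP, len(contig) // 2))

        reads = ["ATGCCG", "GCCGTAA", "GTAATGC"]    # hypothetical small RNAs
        contig = extend(reads[0], reads[1:])
        print(contig, looks_circular(contig))       # ATGCCGTAATGC True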

  7. Discovery of Replicating Circular RNAs by RNA-Seq and Computational Algorithms

    PubMed Central

    Tang, Nan; Zhang, Xinxin; Chen, Shanshan; Zhu, Pengfei; Ma, Lin; Cheng, Jinping; Xu, Yun; Lu, Meiguang; Wang, Hongqing; Ding, Shou-Wei; Li, Shifang; Wu, Qingfa

    2014-01-01

    Replicating circular RNAs are independent plant pathogens known as viroids, or act to modulate the pathogenesis of plant and animal viruses as their satellite RNAs. The rate of discovery of these subviral pathogens was low over the past 40 years because the classical approaches are technically demanding and time-consuming. We previously described an approach for homology-independent discovery of replicating circular RNAs by analysing the total small RNA populations from samples of diseased tissues with a computational program known as progressive filtering of overlapping small RNAs (PFOR). However, PFOR, written in the PERL language, is extremely slow and is unable to discover those subviral pathogens that do not trigger in vivo accumulation of extensively overlapping small RNAs. Moreover, PFOR is yet to identify a new viroid capable of initiating independent infection. Here we report the development of PFOR2, which adopted parallel programming in the C++ language and was 3 to 8 times faster than PFOR. A new computational program was further developed and incorporated into PFOR2 to allow the identification of circular RNAs by deep sequencing of long RNAs instead of small RNAs. PFOR2 analysis of the small RNA libraries from grapevine and apple plants led to the discovery of Grapevine latent viroid (GLVd) and Apple hammerhead viroid-like RNA (AHVd-like RNA), respectively. GLVd was proposed as a new species in the genus Apscaviroid, because it contained the typical structural elements found in this group of viroids and initiated independent infection in grapevine seedlings. AHVd-like RNA encoded a biologically active hammerhead ribozyme in both polarities, and was not specifically associated with any of the viruses found in apple plants. We propose that these computational algorithms have the potential to discover novel circular RNAs in plants, invertebrates and vertebrates regardless of whether they replicate and/or induce the in vivo accumulation of small RNAs. PMID:25503469

  8. Enabling On-Demand Database Computing with MIT SuperCloud Database Management System

    DTIC Science & Technology

    2015-09-15

    arc.liv.ac.uk/trac/SGE) provides these services and is independent of programming language (C, Fortran, Java, Matlab, etc.) or parallel programming...a MySQL database to store DNS records. The DNS records are controlled via a simple web service interface that allows records to be created

  9. DIALIGN P: fast pair-wise and multiple sequence alignment using parallel processors.

    PubMed

    Schmollinger, Martin; Nieselt, Kay; Kaufmann, Michael; Morgenstern, Burkhard

    2004-09-09

    Parallel computing is frequently used to speed up computationally expensive tasks in bioinformatics. Herein, a parallel version of the multi-alignment program DIALIGN is introduced. We propose two ways of dividing the program into independent sub-routines that can be run on different processors: (a) pair-wise sequence alignments that are used as a first step to multiple alignment account for most of the CPU time in DIALIGN. Since alignments of different sequence pairs are completely independent of each other, they can be distributed to multiple processors without any effect on the resulting output alignments. (b) For alignments of large genomic sequences, we use a heuristic that splits up sequences into sub-sequences based on a previously introduced anchored alignment procedure. For our test sequences, this combined approach reduces the program running time of DIALIGN by up to 97%. By distributing sub-routines to multiple processors, the running time of DIALIGN can be crucially improved. With these improvements, it is possible to apply the program in large-scale genomics and proteomics projects that were previously beyond its scope.
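
    A minimal sketch of strategy (a) above: because alignments of different sequence pairs are independent, they can be farmed out to worker processes (Python multiprocessing with a toy match-count score, not DIALIGN's actual segment-based scoring):

        from itertools import combinations
        from multiprocessing import Pool

        def align_pair(pair):
            a, b = pair
            # stand-in for a real pairwise alignment: count matching positions
            return (a, b, sum(x == y for x, y in zip(a, b)))

        if __name__ == "__main__":
            seqs = ["GATTACA", "GACTATA", "GATTATA"]   # hypothetical input sequences
            with Pool() as pool:
                results = pool.map(align_pair, combinations(seqs, 2))
            for a, b, score in results:
                print(a, b, score)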

  10. The nature and use of prediction skills in a biological computer simulation

    NASA Astrophysics Data System (ADS)

    Lavoie, Derrick R.; Good, Ron

    The primary goal of this study was to examine the science process skill of prediction using qualitative research methodology. The think-aloud interview, modeled after Ericsson and Simon (1984), led to the identification of 63 program exploration and prediction behaviors. The performance of seven formal and seven concrete operational high-school biology students was videotaped during a three-phase learning sequence on water pollution. Subjects explored the effects of five independent variables on two dependent variables over time using a computer-simulation program. Predictions were made concerning the effect of the independent variables upon dependent variables through time. Subjects were identified according to initial knowledge of the subject matter and success at solving three selected prediction problems. Successful predictors generally had high initial knowledge of the subject matter and were formal operational. Unsuccessful predictors generally had low initial knowledge and were concrete operational. High initial knowledge seemed to be more important to predictive success than stage of Piagetian cognitive development. Successful prediction behaviors involved systematic manipulation of the independent variables, note taking, identification and use of appropriate independent-dependent variable relationships, high interest and motivation, and in general, higher-level thinking skills. Behaviors characteristic of unsuccessful predictors were nonsystematic manipulation of independent variables, lack of motivation and persistence, misconceptions, and the identification and use of inappropriate independent-dependent variable relationships.

  11. 75 FR 4088 - Medicare Program; Approval of Independent Accrediting Organizations To Participate in the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-26

    ...) (excluding x-ray, ultrasound, and fluoroscopy), as specified by the Secretary in consultation with physician... ``imaging and computer-assisted imaging services, including x-ray, ultrasound (including echocardiography...

  12. User's manual for EZPLOT version 5.5: A FORTRAN program for 2-dimensional graphic display of data

    NASA Technical Reports Server (NTRS)

    Garbinski, Charles; Redin, Paul C.; Budd, Gerald D.

    1988-01-01

    EZPLOT is a computer applications program that converts data resident on a file into a plot displayed on the screen of a graphics terminal. This program generates either time history or x-y plots in response to commands entered interactively from a terminal keyboard. Plot parameters consist of a single independent parameter and from one to eight dependent parameters. Various line patterns, symbol shapes, axis scales, text labels, and data modification techniques are available. This user's manual describes EZPLOT as it is implemented on the Ames Research Center, Dryden Research Facility ELXSI computer using DI-3000 graphics software tools.
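
    A minimal sketch of the kind of display EZPLOT produces, one independent parameter against several dependent parameters (Python/matplotlib here purely for illustration; the original program used DI-3000 graphics tools):

        import numpy as np
        import matplotlib.pyplot as plt

        t = np.linspace(0.0, 10.0, 200)              # the single independent parameter
        plt.plot(t, np.sin(t), label="parameter 1")  # one of up to eight dependent parameters
        plt.plot(t, np.cos(t), "--", label="parameter 2")
        plt.xlabel("time")
        plt.legend()
        plt.show()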

  13. Assess/Mitigate Risk through the Use of Computer-Aided Software Engineering (CASE) Tools

    NASA Technical Reports Server (NTRS)

    Aguilar, Michael L.

    2013-01-01

    The NASA Engineering and Safety Center (NESC) was requested to perform an independent assessment of the mitigation of the Constellation Program (CxP) Risk 4421 through the use of computer-aided software engineering (CASE) tools. With the cancellation of the CxP, the assessment goals were modified to capture lessons learned and best practices in the use of CASE tools. The assessment goal was to prepare the next program for the use of these CASE tools. The outcome of the assessment is contained in this document.

  14. Simulation procedure for modeling transient water table and artesian stress and response

    USGS Publications Warehouse

    Reed, J.E.; Bedinger, M.S.; Terry, J.E.

    1976-01-01

    The series of computer programs described in this report was designed specifically to model the ground-water regime in sufficient detail to determine the effects of the imposition of various types of stress upon the system, and to display the results in a convenient manner during calibration and when presenting projected data. SUPERMOCK simulates the ground-water system and DATE and HYDROG aid in the display of computed data. During calibration, DATE is especially useful because it has the optional feature of comparing computed data with observed data. Although the programs can be run independently, experience dictates that for best results the three should be run as steps in the same job. English units of inches, feet, and days are used in each of the programs. The units for any parameters not given in the text are clearly specified in the instructions for input to the individual programs. (Woodard-USGS)

  15. Lifetime Reliability Evaluation of Structural Ceramic Parts with the CARES/LIFE Computer Program

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.

    1993-01-01

    The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing the power law, Paris law, or Walker equation. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), Weibull's normal stress averaging method (NSA), or Batdorf's theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating cyclic fatigue parameter estimation and component reliability analysis with proof testing are included.
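
    A minimal sketch of the two-parameter Weibull model combined with the principle of independent action (PIA) named above (Python, with hypothetical material parameters; CARES/LIFE itself evaluates these element by element from finite element output):

        import numpy as np

        m, sigma0 = 10.0, 350.0      # Weibull modulus and characteristic strength (MPa)
        principal = np.array([200.0, 120.0, 0.0])   # principal stresses (MPa)

        # PIA: each tensile principal stress contributes independently to the risk
        tensile = np.clip(principal, 0.0, None)
        reliability = np.exp(-np.sum((tensile / sigma0) ** m))
        print(f"survival probability = {reliability:.4f}")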

  16. 77 FR 12848 - Medicare Program; Solicitation of Independent Accrediting Organizations To Participate in the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-02

    ...)(4)(B) (excluding x-ray, ultrasound, and fluoroscopy), as specified by the Secretary in consultation... imaging services as ``imaging and computer-assisted imaging services, including x-ray, ultrasound...

  17. Cognitive Style Factors and Learning from Micro-Computer Based and Programmed Instructional Materials: A Preliminary Analysis.

    ERIC Educational Resources Information Center

    Canelos, James; And Others

    This study examined the effects of two cognitive styles--field dependents-independents and reflectivity-impulsivity--on learning from microcomputer-based instruction. In the first of three experimental designs, a programmed instruction text on the human heart was used which contained both visual and verbal information in an instructional display,…

  18. Solution of the equation of heat conduction with time dependent sources: Programmed application to planetary thermal history

    NASA Technical Reports Server (NTRS)

    Conel, J. E.

    1975-01-01

    A computer program (Program SPHERE) solving the inhomogeneous equation of heat conduction with a radiation boundary condition on a thermally homogeneous sphere is described. The source terms are taken to be exponential functions of time. Thermal properties are independent of temperature. The solutions are appropriate for studying certain classes of planetary thermal history. Special application to the moon is discussed.
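
    For orientation, the problem solved has the general form below (a reconstruction from the abstract's description, not an excerpt from the report; the amplitudes A_i and decay constants lambda_i of the exponential source terms are illustrative):

        \rho c \,\frac{\partial T}{\partial t} = k \nabla^2 T + \sum_i A_i\, e^{-\lambda_i t},
        \qquad
        -k \left.\frac{\partial T}{\partial r}\right|_{r=R} = \epsilon \sigma \left( T^4 - T_{\mathrm{env}}^4 \right)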

  19. GTOOLS: an Interactive Computer Program to Process Gravity Data for High-Resolution Applications

    NASA Astrophysics Data System (ADS)

    Battaglia, M.; Poland, M. P.; Kauahikaua, J. P.

    2012-12-01

    An interactive computer program, GTOOLS, has been developed to process gravity data acquired by the Scintrex CG-5 and LaCoste & Romberg EG, G and D gravity meters. The aim of GTOOLS is to provide a validated methodology for computing relative gravity values in a consistent way accounting for as many environmental factors as possible (e.g., tides, ocean loading, solar constraints, etc.), as well as instrument drift. The program has a modular architecture. Each processing step is implemented in a tool (function) that can be either run independently or within an automated task. The tools allow the user to (a) read the gravity data acquired during field surveys completed using different types of gravity meters; (b) compute Earth tides using an improved version of Longman's (1959) model; (c) compute ocean loading using the HARDISP code by Petit and Luzum (2010) and ocean loading harmonics from the TPXO7.2 ocean tide model; (d) estimate the instrument drift using linear functions as appropriate; and (e) compute the weighted least-square-adjusted gravity values and their errors. The corrections are performed to microgal (μGal) precision, in accordance with the specifications of high-resolution surveys. The program has the ability to incorporate calibration factors that allow for surveys done using different gravimeters to be compared. Two additional tools (functions) allow the user to (1) estimate the instrument calibration factor by processing data collected by a gravimeter on a calibration range; (2) plot gravity time-series at a chosen benchmark. The interactive procedures and the program output (jpeg plots and text files) have been designed to ease data handling and archiving, to provide useful information for future data interpretation or modeling, and facilitate comparison of gravity surveys conducted at different times. All formulas have been checked for typographical errors in the original reference. GTOOLS, developed using Matlab, is open source and machine independent. We will demonstrate program use and utility with data from multiple microgravity surveys at Kilauea volcano, Hawai'i.
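
    A minimal sketch of the linear drift estimation step (d) (Python with hypothetical base-station readings; GTOOLS itself is a Matlab program and applies the tide, ocean-loading, and calibration corrections first):

        import numpy as np

        t = np.array([0.0, 2.5, 5.1])                  # hours since the first base reading
        g = np.array([1234.010, 1234.018, 1234.031])   # corrected relative gravity (mGal)

        # fit g = drift * t + g0 by least squares
        A = np.column_stack([t, np.ones_like(t)])
        (drift, g0), *_ = np.linalg.lstsq(A, g, rcond=None)
        print(f"instrument drift = {1000.0 * drift:.1f} uGal/hr")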

  20. The X-ray system of crystallographic programs for any computer having a PIDGIN FORTRAN compiler

    NASA Technical Reports Server (NTRS)

    Stewart, J. M.; Kruger, G. J.; Ammon, H. L.; Dickinson, C.; Hall, S. R.

    1972-01-01

    A manual is presented for the use of a library of crystallographic programs. This library, called the X-ray system, is designed to carry out the calculations required to solve the structure of crystals by diffraction techniques. It has been implemented at the University of Maryland on the Univac 1108. It has, however, been developed and run on a variety of machines under various operating systems. It is considered to be an essentially machine independent library of applications programs. The report includes definition of crystallographic computing terms, program descriptions, with some text to show their application to specific crystal problems, detailed card input descriptions, mass storage file structure and some example run streams.

  1. Automated inverse computer modeling of borehole flow data in heterogeneous aquifers

    NASA Astrophysics Data System (ADS)

    Sawdey, J. R.; Reeve, A. S.

    2012-09-01

    A computer model has been developed to simulate borehole flow in heterogeneous aquifers where the vertical distribution of permeability may vary significantly. In crystalline fractured aquifers, flow into or out of a borehole occurs at discrete locations of fracture intersection. Under these circumstances, flow simulations are defined by independent variables of transmissivity and far-field heads for each flow contributing fracture intersecting the borehole. The computer program, ADUCK (A Downhole Underwater Computational Kit), was developed to automatically calibrate model simulations to collected flowmeter data providing an inverse solution to fracture transmissivity and far-field head. ADUCK has been tested in variable borehole flow scenarios, and converges to reasonable solutions in each scenario. The computer program has been created using open-source software to make the ADUCK model widely available to anyone who could benefit from its utility.
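
    A minimal sketch of the inverse-calibration idea (Python/SciPy with a toy linear inflow model and hypothetical numbers; ADUCK couples a real borehole-flow simulation to the fit, and also inverts for the far-field heads, which are fixed here for brevity):

        import numpy as np
        from scipy.optimize import least_squares

        q_obs = np.array([2.1, -0.7, 1.3])        # measured inflow at each fracture (L/min)
        h_bore = 10.0                             # borehole head (m)
        h_far = np.array([12.0, 9.5, 11.1])       # far-field head at each fracture (m)

        def residuals(T):
            # toy model: inflow proportional to transmissivity times head difference
            return T * (h_far - h_bore) - q_obs

        fit = least_squares(residuals, x0=np.ones(3), bounds=(0.0, np.inf))
        print(fit.x)                               # fitted fracture transmissivities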

  2. Optimal design of structures with multiple design variables per group and multiple loading conditions on the personal computer

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Rogers, J. L., Jr.

    1986-01-01

    A finite element based programming system for minimum weight design of a truss-type structure subjected to displacement, stress, and lower and upper bounds on design variables is presented. The programming system consists of a number of independent processors, each performing a specific task. These processors, however, are interfaced through a well-organized data base, thus making the tasks of modifying, updating, or expanding the programming system much easier in a friendly environment provided by many inexpensive personal computers. The proposed software can be viewed as an important step in achieving a 'dummy' finite element for optimization. The programming system has been implemented on both large and small computers (such as VAX, CYBER, IBM-PC, and APPLE) although the focus is on the latter. Examples are presented to demonstrate the capabilities of the code. The present programming system can be used stand-alone or as part of the multilevel decomposition procedure to obtain optimum design for very large scale structural systems. Furthermore, other related research areas such as developing optimization algorithms (or in the larger level: a structural synthesis program) for future trends in using parallel computers may also benefit from this study.

  3. Separation of left and right lungs using 3-dimensional information of sequential computed tomography images and a guided dynamic programming algorithm.

    PubMed

    Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin

    2011-01-01

    This article presents a new computerized scheme that aims to accurately and robustly separate the left and right lungs on computed tomography (CT) examinations. We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm with adaptively and automatically selected start and end points, designed to handle especially severe and multiple connections. The scheme successfully identified and separated all 827 connections on the 4034 CT images in an independent testing data set of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% of that of traditional dynamic programming while avoiding permeation of the separation boundary into normal lung tissue. The proposed method is able to robustly and accurately disconnect all connections between the left and right lungs, and the guided dynamic programming algorithm is able to remove redundant processing.
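
    A minimal sketch of the underlying dynamic programming step (Python, seam-style least-cost path on a toy cost grid; the paper's guided variant additionally selects the start and end points automatically and restricts the search region):

        import numpy as np

        cost = np.random.rand(6, 5)            # hypothetical per-pixel separation cost
        acc = cost.copy()
        for i in range(1, acc.shape[0]):       # accumulate minimum cost row by row
            for j in range(acc.shape[1]):
                lo, hi = max(j - 1, 0), min(j + 2, acc.shape[1])
                acc[i, j] += acc[i - 1, lo:hi].min()

        # backtrack from the cheapest end point up to the start row
        path = [int(np.argmin(acc[-1]))]
        for i in range(acc.shape[0] - 2, -1, -1):
            j = path[-1]
            lo, hi = max(j - 1, 0), min(j + 2, acc.shape[1])
            path.append(lo + int(np.argmin(acc[i, lo:hi])))
        path.reverse()
        print(path)                             # one column index per image row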

  4. Multi-scale computation methods: Their applications in lithium-ion battery research and development

    NASA Astrophysics Data System (ADS)

    Siqi, Shi; Jian, Gao; Yue, Liu; Yan, Zhao; Qu, Wu; Wangwei, Ju; Chuying, Ouyang; Ruijuan, Xiao

    2016-01-01

    Based upon advances in theoretical algorithms, modeling and simulations, and computer technologies, the rational design of materials, cells, devices, and packs in the field of lithium-ion batteries is being realized incrementally and will at some point trigger a paradigm revolution by combining calculations and experiments linked by a big shared database, enabling accelerated development of the whole industrial chain. Theory and multi-scale modeling and simulation, as supplements to experimental efforts, can help greatly to close some of the current experimental and technological gaps, as well as predict path-independent properties and help to fundamentally understand path-independent performance in multiple spatial and temporal scales. Project supported by the National Natural Science Foundation of China (Grant Nos. 51372228 and 11234013), the National High Technology Research and Development Program of China (Grant No. 2015AA034201), and Shanghai Pujiang Program, China (Grant No. 14PJ1403900).

  5. PLANS; a finite element program for nonlinear analysis of structures. Volume 2: User's manual

    NASA Technical Reports Server (NTRS)

    Pifko, A.; Armen, H., Jr.; Levy, A.; Levine, H.

    1977-01-01

    The PLANS system, rather than being one comprehensive computer program, is a collection of finite element programs used for the nonlinear analysis of structures. This collection of programs evolved and is based on the organizational philosophy in which classes of analyses are treated individually based on the physical problem class to be analyzed. Each of the independent finite element computer programs of PLANS, with an associated element library, can be individually loaded and used to solve the problem class of interest. A number of programs have been developed for material nonlinear behavior alone and for combined geometric and material nonlinear behavior. The usage, capabilities, and element libraries of the current programs include: (1) plastic analysis of built-up structures where bending and membrane effects are significant, (2) three dimensional elastic-plastic analysis, (3) plastic analysis of bodies of revolution, and (4) material and geometric nonlinear analysis of built-up structures.

  6. Study of the modifications needed for efficient operation of NASTRAN on the Control Data Corporation STAR-100 computer

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The NASA structural analysis (NASTRAN) computer program is operational on three series of third-generation computers. The problems and difficulties involved in adapting NASTRAN to a fourth-generation computer, namely the Control Data STAR-100, are discussed. The salient features which distinguish the Control Data STAR-100 from third-generation computers are hardware vector-processing capability and virtual memory. A feasible method is presented for transferring NASTRAN to the Control Data STAR-100 system while retaining much of the machine-independent code. Basic matrix operations are identified as candidates for optimization for vector processing.

  7. Grow--a computer subroutine that projects the growth of trees in the Lake States' forests.

    Treesearch

    Gary J. Brand

    1981-01-01

    A computer subroutine, Grow, has been written in 1977 Standard FORTRAN to implement a distance-independent, individual tree growth model for Lake States' forests. Grow is a small and easy-to-use version of the growth model. All the user has to do is write a calling program to read initial conditions, call Grow, and summarize the results.

  8. Field-Programmable Gate Array (FPGA) Emulation for Computer Architecture

    DTIC Science & Technology

    2009-08-01

    Execution with Versatile, Microarchitecture-Independent Snapshots, PhD thesis, MIT, Sep 2006. [10] Bienia, Christian, Kumar, Sanjeev, Singh, Jaswinder Pal...[2] Pixie: MIPS Computer Systems, Inc. Assembly Language Programmer's Guide, 1986. [3] Agarwal, Anant, Bianchini, Ricardo, Chaiken, David, David...pp. 68–79. [49] Woo, Steven Cameron, Ohara, Moriyoshi, Torrie, Evan, Singh, Jaswinder Pal, and Gupta, Anoop, "The SPLASH-2 programs

  9. QRev—Software for computation and quality assurance of acoustic doppler current profiler moving-boat streamflow measurements—Technical manual for version 2.8

    USGS Publications Warehouse

    Mueller, David S.

    2016-06-21

    The software program QRev applies common and consistent computational algorithms, combined with automated filtering and quality assessment of the data, to improve the quality and efficiency of streamflow measurements and to help ensure that U.S. Geological Survey streamflow measurements are consistent, accurate, and independent of the manufacturer of the instrument used to make the measurement. Software from different manufacturers uses different algorithms for various aspects of the data processing and discharge computation. The algorithms used by QRev to filter data, interpolate data, and compute discharge are documented and compared to the algorithms used in the manufacturers' software. QRev applies consistent algorithms and creates a data structure that is independent of the data source. QRev saves an extensible markup language (XML) file that can be imported into databases or electronic field notes software. This report is the technical manual for version 2.8 of QRev.

  10. Applications of Parallel Process HiMAP for Large Scale Multidisciplinary Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Potsdam, Mark; Rodriguez, David; Kwak, Dochay (Technical Monitor)

    2000-01-01

    HiMAP is a three-level parallel middleware that can be interfaced to a large-scale global design environment for code-independent, multidisciplinary analysis using high-fidelity equations. Aerospace technology needs are rapidly changing. Computational tools compatible with the requirements of national programs such as space transportation are needed. Conventional computational tools are inadequate for modern aerospace design needs. Advanced, modular computational tools are needed, such as those that incorporate the technology of massively parallel processors (MPP).

  11. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A modular process that can efficiently solve large-scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics while retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta-programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large-scale aerospace problems on several supercomputers. The scalability and portability of the approach are demonstrated on several parallel computers.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eperin, A.P.; Zakharzhevsky, Yu.O.; Arzhaev, A.I.

    A two-year Finnish-Russian cooperation program has been initiated in 1995 to demonstrate the applicability of the leak-before-break concept (LBB) to the primary circuit piping of the Leningrad NPP. The program includes J-R curve testing of authentic pipe materials at full operating temperature, screening and computational LBB analyses complying with the USNRC Standard Review Plan 3.6.3, and exchange of LBB-related information with emphasis on NDE. Domestic computer codes are mainly used, and all tests and analyses are independently carried out by each party. The results are believed to apply generally to RBMK type plants of the first generation.

  13. Data storage technology: Hardware and software, Appendix B

    NASA Technical Reports Server (NTRS)

    Sable, J. D.

    1972-01-01

    This project involves the development of more economical ways of integrating and interfacing new storage devices and data processing programs into a computer system. It involves developing interface standards and a software/hardware architecture which will make it possible to develop machine-independent devices and programs. These will interface with the machine-dependent operating systems of particular computers. The goal of the development project is not to develop the software which would ordinarily be the responsibility of the manufacturer to supply, but to develop the standards with which that software is expected to conform in providing an interface with the user or storage system.

  14. Monolithic ceramic analysis using the SCARE program

    NASA Technical Reports Server (NTRS)

    Manderscheid, Jane M.

    1988-01-01

    The Structural Ceramics Analysis and Reliability Evaluation (SCARE) computer program calculates the fast fracture reliability of monolithic ceramic components. The code is a post-processor to the MSC/NASTRAN general purpose finite element program. The SCARE program automatically accepts the MSC/NASTRAN output necessary to compute reliability. This includes element stresses, temperatures, volumes, and areas. The SCARE program computes two-parameter Weibull strength distributions from input fracture data for both volume and surface flaws. The distributions can then be used to calculate the reliability of geometrically complex components subjected to multiaxial stress states. Several fracture criteria and flaw types are available for selection by the user, including out-of-plane crack extension theories. The theoretical basis for the reliability calculations was proposed by Batdorf. These models combine linear elastic fracture mechanics (LEFM) with Weibull statistics to provide a mechanistic failure criterion. Other fracture theories included in SCARE are the normal stress averaging technique and the principle of independent action. The objective of this presentation is to summarize these theories, including their limitations and advantages, and to provide a general description of the SCARE program, along with example problems.

  15. Elevated temperature crack growth

    NASA Technical Reports Server (NTRS)

    Kim, K. S.; Vanstone, R. H.

    1992-01-01

    The purpose of this program was to extend the work performed in the base program (CR 182247) into the regime of time-dependent crack growth under isothermal and thermal mechanical fatigue (TMF) loading, where creep deformation also influences the crack growth behavior. The investigation was performed in a two-year, six-task, combined experimental and analytical program. The path-independent integrals for application to time-dependent crack growth were critically reviewed. The crack growth was simulated using a finite element method. The path-independent integrals were computed from the results of finite-element analyses. The ability of these integrals to correlate experimental crack growth data was evaluated under various loading and temperature conditions. The results indicate that some of these integrals are viable parameters for crack growth prediction at elevated temperatures.

  16. The Linear Parameters and the Decoupling Matrix for Linearly Coupled Motion in 6 Dimensional Phase Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parzen, George

    It will be shown that starting from a coordinate system where the 6 phase space coordinates are linearly coupled, one can go to a new coordinate system, where the motion is uncoupled, by means of a linear transformation. The original coupled coordinates and the new uncoupled coordinates are related by a 6 x 6 matrix, R, which will be called the decoupling matrix. It will be shown that of the 36 elements of the 6 x 6 decoupling matrix R, only 12 elements are independent. This may be contrasted with the results for motion in 4-dimensional phase space, where R has 4 independent elements. A set of equations is given from which the 12 elements of R can be computed from the one-period transfer matrix. This set of equations also allows the linear parameters β_i, α_i, i = 1, 3, for the uncoupled coordinates to be computed from the one-period transfer matrix. An alternative procedure for computing the linear parameters β_i, α_i, i = 1, 3, and the 12 independent elements of the decoupling matrix R is also given, which depends on computing the eigenvectors of the one-period transfer matrix. These results can be used in a tracking program, where the one-period transfer matrix can be computed by multiplying the transfer matrices of all the elements in a period, to compute the linear parameters α_i and β_i, i = 1, 3, and the elements of the decoupling matrix R. The procedure presented here for studying coupled motion in 6-dimensional phase space can also be applied to coupled motion in 4-dimensional phase space, where it may be a useful alternative to the procedure presented by Edwards and Teng. In particular, it gives a simpler programming procedure for computing the beta functions and the emittances for coupled motion in 4-dimensional phase space.
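
    As a toy check of the uncoupled one-degree-of-freedom special case (to which the 6-dimensional analysis reduces, mode by mode, once the decoupling matrix R is applied), the following Python sketch recovers β and α from a 2 x 2 one-period transfer matrix; the matrix entries are illustrative, not taken from the report.

      import numpy as np

      def twiss_from_one_period(M):
          """M: 2x2 one-period transfer matrix with det(M) = 1 (symplectic)."""
          cos_mu = 0.5 * (M[0, 0] + M[1, 1])
          assert abs(cos_mu) < 1.0, "motion must be stable"
          sin_mu = np.sign(M[0, 1]) * np.sqrt(1.0 - cos_mu**2)  # choose beta > 0
          beta = M[0, 1] / sin_mu
          alpha = (M[0, 0] - M[1, 1]) / (2.0 * sin_mu)
          return beta, alpha, np.arctan2(sin_mu, cos_mu)  # beta, alpha, phase advance

      # One-period matrix of a toy cell (illustrative numbers, det = 1)
      M = np.array([[0.8, 4.0], [-0.09, 0.8]])
      print(twiss_from_one_period(M))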

  17. The linear parameters and the decoupling matrix for linearly coupled motion in 6 dimensional phase space. Informal report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parzen, G.

    It will be shown that starting from a coordinate system where the 6 phase space coordinates are linearly coupled, one can go to a new coordinate system, where the motion is uncoupled, by means of a linear transformation. The original coupled coordinates and the new uncoupled coordinates are related by a 6 × 6 matrix, R, which will be called the decoupling matrix. It will be shown that of the 36 elements of the 6 × 6 decoupling matrix R, only 12 elements are independent. This may be contrasted with the results for motion in 4-dimensional phase space, where R has 4 independent elements. A set of equations is given from which the 12 elements of R can be computed from the one-period transfer matrix. This set of equations also allows the linear parameters β_i, α_i, i = 1, 3, for the uncoupled coordinates to be computed from the one-period transfer matrix. An alternative procedure for computing the linear parameters β_i, α_i, i = 1, 3, and the 12 independent elements of the decoupling matrix R is also given, which depends on computing the eigenvectors of the one-period transfer matrix. These results can be used in a tracking program, where the one-period transfer matrix can be computed by multiplying the transfer matrices of all the elements in a period, to compute the linear parameters α_i and β_i, i = 1, 3, and the elements of the decoupling matrix R. The procedure presented here for studying coupled motion in 6-dimensional phase space can also be applied to coupled motion in 4-dimensional phase space, where it may be a useful alternative to the procedure presented by Edwards and Teng. In particular, it gives a simpler programming procedure for computing the beta functions and the emittances for coupled motion in 4-dimensional phase space.

  18. Documentation of a multiple-technique computer program for plotting major-ion composition of natural waters

    USGS Publications Warehouse

    Briel, L.I.

    1993-01-01

    A computer program was written to produce 6 different types of water-quality diagrams--Piper, Stiff, pie, X-Y, boxplot, and Piper 3-D--from the same file of input data. The Piper 3-D diagram is a new method that projects values from the surface of a Piper plot into a triangular prism to show how variations in chemical composition can be related to variations in other water-quality variables. This program is an analytical tool to aid in the interpretation of data. The program is interactive, and the user can select from a menu the type of diagram to be produced and a large number of individual features. Alternatively, these choices can be specified in the data file, which provides a batch mode for running the program. The program does not display water-quality diagrams directly; plots are written to a file. Four different plot-file formats are available: device-independent metafiles, Adobe PostScript graphics files, and two Hewlett-Packard graphics language formats (7475 and 7586). An ASCII data-table file is also produced to document the computed values. This program is written in Fortran 77 and uses graphics subroutines from either the PRIOR AGTK or the DISSPLA graphics library. The program has been implemented on Prime series 50 and Data General Aviion computers within the USGS; portability to other computing systems depends on the availability of the graphics library.
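
    As an illustration of one building block such a plotting program needs, the short Python sketch below converts cation milliequivalent concentrations into x-y coordinates for the lower-left (cation) triangle of a Piper diagram. The unit-side, Ca-at-left-vertex convention and the sample values are assumptions for the example, not taken from the report.

      def cation_triangle_xy(ca, mg, na_k):
          """ca, mg, na_k: milliequivalent concentrations of Ca, Mg, and Na+K."""
          total = ca + mg + na_k
          f_mg, f_na = mg / total, na_k / total
          x = f_na + 0.5 * f_mg          # Ca at left vertex, Na+K at right vertex
          y = f_mg * (3 ** 0.5) / 2      # Mg at the apex of the unit triangle
          return x, y

      print(cation_triangle_xy(ca=2.5, mg=1.0, na_k=1.5))  # meq/L, illustrative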

  19. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2³ is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  20. 23 CFR 650.313 - Inspection procedures.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...) Quality control and quality assurance. Assure systematic quality control (QC) and quality assurance (QA... periodic field review of inspection teams, periodic bridge inspection refresher training for program managers and team leaders, and independent review of inspection reports and computations. (h) Follow-up on...

  1. Hardware-Independent Proofs of Numerical Programs

    NASA Technical Reports Server (NTRS)

    Boldo, Sylvie; Nguyen, Thi Minh Tuyen

    2010-01-01

    On recent architectures, a numerical program may give different answers depending on the execution hardware and the compilation. Our goal is to formally prove properties about numerical programs that are true for multiple architectures and compilers. We propose an approach that states the rounding error of each floating-point computation whatever the environment. This approach is implemented in the Frama-C platform for static analysis of C code. Small case studies using this approach have been entirely and automatically proved.
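
    To make the idea concrete, here is a small Python sketch (not Frama-C, and only a first-order model) that carries a worst-case rounding-error bound alongside each value, assuming only that every operation rounds with relative error at most EPS; a larger EPS would cover looser environments such as x87 extended precision with double rounding.

      EPS = 2.0 ** -53  # unit roundoff for IEEE-754 binary64, round-to-nearest

      class Bounded:
          """A value together with an accumulated worst-case absolute error bound."""
          def __init__(self, value, err=0.0):
              self.value, self.err = value, err

          def __add__(self, other):
              v = self.value + other.value
              # incoming errors add; the new rounding adds at most EPS*|result|
              return Bounded(v, self.err + other.err + EPS * abs(v))

          def __mul__(self, other):
              v = self.value * other.value
              # first-order propagation (the second-order err*err term is neglected)
              prop = abs(self.value) * other.err + abs(other.value) * self.err
              return Bounded(v, prop + EPS * abs(v))

      x = Bounded(0.1, EPS * 0.1)   # 0.1 is already rounded once when parsed
      y = (x + x) * x
      print(y.value, y.err)         # the value and a provable error bound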

  2. Application of Modular Building Block Databus to Air Force Systems

    DTIC Science & Technology

    1988-06-01

    …implement remote monitoring and control of the modules. Computer assistance is available for these processes. Cabinets are independent of the shelter… to the red databus. Located between the two databuses is the computer supporting the technical control position (figure 4) as well as…

  3. SpecPad: device-independent NMR data visualization and processing based on the novel DART programming language and Html5 Web technology.

    PubMed

    Guigas, Bruno

    2017-09-01

    SpecPad is a new device-independent software program for the visualization and processing of one-dimensional and two-dimensional nuclear magnetic resonance (NMR) time domain (FID) and frequency domain (spectrum) data. It is the result of a project to investigate whether the novel programming language DART, in combination with Html5 Web technology, forms a suitable base to write an NMR data evaluation software which runs on modern computing devices such as Android, iOS, and Windows tablets as well as on Windows, Linux, and Mac OS X desktop PCs and notebooks. Another topic of interest is whether this technique also effectively supports the required sophisticated graphical and computational algorithms. SpecPad is device-independent because DART's compiled executable code is JavaScript and can, therefore, be run by the browsers of PCs and tablets. Because of Html5 browser cache technology, SpecPad may be operated off-line. Network access is only required during data import or export, e.g. via a Cloud service, or for software updates. A professional and easy to use graphical user interface consistent across all hardware platforms supports touch screen features on mobile devices for zooming and panning and for NMR-related interactive operations such as phasing, integration, peak picking, or atom assignment. Copyright © 2017 John Wiley & Sons, Ltd.

  4. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, R.E.; Gustafson, J.L.; Montry, G.R.

    1999-08-10

    A parallel computing system and method are disclosed having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system. 15 figs.

  5. Methods for operating parallel computing systems employing sequenced communications

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1999-01-01

    A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

  6. Analysis of a Multiprocessor Guidance Computer. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Maltach, E. G.

    1969-01-01

    The design of the next generation of spaceborne digital computers is described, and a possible multiprocessor computer configuration is analyzed. For the analysis, a set of representative space computing tasks was abstracted from the Lunar Module Guidance Computer programs as executed during the Apollo lunar landing. At that time this computer performed about 24 concurrent functions, with iteration rates ranging from 10 times per second to once every two seconds. These jobs were tabulated in a machine-independent form, and statistics of the overall job set were obtained. It was concluded, based on a comparison of simulation and Markov results, that the Markov process analysis is accurate in predicting overall trends and in configuration comparisons, but does not provide useful detailed information in specific situations. Using both types of analysis, it was determined that the job scheduling function is critical to the efficiency of the multiprocessor. It is recommended that research into the area of automatic job scheduling be performed.

  7. Program For Evaluation Of Reliability Of Ceramic Parts

    NASA Technical Reports Server (NTRS)

    Nemeth, N.; Janosik, L. A.; Gyekenyesi, J. P.; Powers, Lynn M.

    1996-01-01

    CARES/LIFE predicts probability of failure of monolithic ceramic component as function of service time. Assesses risk that component fractures prematurely as result of subcritical crack growth (SCG). Effect of proof testing of components prior to service also considered. Coupled to such commercially available finite-element programs as ANSYS, ABAQUS, MARC, MSC/NASTRAN, and COSMOS/M. Also retains all capabilities of previous CARES code, which includes estimation of fast-fracture component reliability and Weibull parameters from inert strength (without SCG contributing to failure) specimen data. Estimates parameters that characterize SCG from specimen data as well. Written in ANSI FORTRAN 77 to be machine-independent. Program runs on any computer in which sufficient addressable memory (at least 8MB) and FORTRAN 77 compiler available. For IBM-compatible personal computer with minimum 640K memory, limited program available (CARES/PC, COSMIC number LEW-15248).

  8. Approaches in highly parameterized inversion - GENIE, a general model-independent TCP/IP run manager

    USGS Publications Warehouse

    Muffels, Christopher T.; Schreuder, Willem A.; Doherty, John E.; Karanovic, Marinko; Tonkin, Matthew J.; Hunt, Randall J.; Welter, David E.

    2012-01-01

    GENIE is a model-independent suite of programs that can be used to generally distribute, manage, and execute multiple model runs via the TCP/IP infrastructure. The suite consists of a file distribution interface, a run manager, a run executer, and a routine that can be compiled as part of a program and used to exchange model runs with the run manager. Because communication is via a standard protocol (TCP/IP), any computer connected to the Internet can serve in any of the capacities offered by this suite. Model independence is consistent with the existing template and instruction file protocols of the widely used PEST parameter estimation program. This report describes (1) the problem addressed; (2) the approach used by GENIE to queue, distribute, and retrieve model runs; and (3) user instructions, classes, and functions developed. It also includes (4) an example to illustrate the linking of GENIE with Parallel PEST using the interface routine.
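
    The run-manager idea can be sketched in a few lines of Python (a minimal stand-in, not GENIE's actual protocol or code): a manager thread serves queued run identifiers over TCP to any worker that asks, so any Internet-connected machine can pull work. The run names and port number are invented for the example.

      import queue, socketserver, threading

      runs = queue.Queue()
      for i in range(20):
          runs.put(f"model-run-{i}")          # hypothetical run identifiers

      class RunHandler(socketserver.StreamRequestHandler):
          def handle(self):
              # each connected worker sends "GET" lines until told "DONE"
              while self.rfile.readline().strip() == b"GET":
                  try:
                      self.wfile.write(runs.get_nowait().encode() + b"\n")
                  except queue.Empty:
                      self.wfile.write(b"DONE\n")
                      break

      server = socketserver.ThreadingTCPServer(("0.0.0.0", 4075), RunHandler)  # arbitrary port
      threading.Thread(target=server.serve_forever, daemon=True).start()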

  9. Multiple regression technique for Pth degree polynomials with and without linear cross products

    NASA Technical Reports Server (NTRS)

    Davis, J. W.

    1973-01-01

    A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products; these programs evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique; they show the output formats and typical plots comparing computer results to each set of input data.
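
    The core of such a fit is a linear least-squares solve over polynomial basis columns. A minimal Python sketch follows (NumPy stands in for the original programs; the degree, sample data, and noise level are invented for the example):

      import numpy as np

      def poly2_design(x1, x2, degree, cross=True):
          """Design matrix in two independent variables, optionally with x1*x2."""
          cols = [x1**p for p in range(degree + 1)]      # 1, x1, x1^2, ...
          cols += [x2**p for p in range(1, degree + 1)]  # x2, x2^2, ...
          if cross:
              cols.append(x1 * x2)                       # the linear cross product
          return np.column_stack(cols)

      rng = np.random.default_rng(1)
      x1, x2 = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)
      y = 1.0 + 2.0*x1 - 0.5*x2**2 + 3.0*x1*x2 + rng.normal(0, 0.01, 200)

      A = poly2_design(x1, x2, degree=2)
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)       # least-squares constants
      resid = y - A @ coef
      print(coef, resid.std())   # fitted constants and standard deviation of fit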

  10. YAMM - Yet Another Menu Manager

    NASA Technical Reports Server (NTRS)

    Mazer, Alan S.; Weidner, Richard J.

    1991-01-01

    The Yet Another Menu Manager (YAMM) computer program is an application-independent menuing software package designed to remove much of the difficulty, and save much of the time, inherent in implementing the front ends of large software packages. Provides complete menuing front end for wide variety of applications, with provisions for independence from specific types of terminals, configurations that meet specific needs of users, and dynamic creation of menu trees. Consists of two parts: description of menu configuration and body of application code. Written in C.

  11. Computerized content analysis of some adolescent writings of Napoleon Bonaparte: a test of the validity of the method.

    PubMed

    Gottschalk, Louis A; DeFrancisco, Don; Bechtel, Robert J

    2002-08-01

    The aim of this study was to test the validity of a computer software program previously demonstrated to be capable of making DSM-IV neuropsychiatric diagnoses from the content analysis of speech or verbal texts. In this report, the computer program was applied to three personal writings of Napoleon Bonaparte when he was 12 to 16 years of age. The accuracy of the neuropsychiatric evaluations derived from the computerized content analysis of these writings of Napoleon was independently corroborated by two biographers who have described pertinent details concerning his life situations, moods, and other emotional reactions during this adolescent period of his life. The relevance of this type of computer technology to psychohistorical research and clinical psychiatry is suggested.

  12. Time-dependent reliability analysis of ceramic engine components

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.

    1993-01-01

    The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing either the power or Paris law relations. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating proof testing and fatigue parameter estimation are given.

  13. Understanding Portability of a High-Level Programming Model on Contemporary Heterogeneous Architectures

    DOE PAGES

    Sabne, Amit J.; Sakdhnagool, Putt; Lee, Seyong; ...

    2015-07-13

    Accelerator-based heterogeneous computing is gaining momentum in the high-performance computing arena. However, the increased complexity of heterogeneous architectures demands more generic, high-level programming models. OpenACC is one such attempt to tackle this problem. Although the abstraction provided by OpenACC offers productivity, it raises questions concerning both functional and performance portability. In this article, the authors propose HeteroIR, a high-level, architecture-independent intermediate representation, to map high-level programming models, such as OpenACC, to heterogeneous architectures. They present a compiler approach that translates OpenACC programs into HeteroIR and accelerator kernels to obtain OpenACC functional portability. They then evaluate the performance portability obtained by OpenACC with their approach on 12 OpenACC programs on Nvidia CUDA, AMD GCN, and Intel Xeon Phi architectures. They study the effects of various compiler optimizations and OpenACC program settings on these architectures to provide insights into the achieved performance portability.

  14. Breaking the hype cycle: using the computer effectively with learners with intellectual disabilities.

    PubMed

    Lloyd, Jan; Moni, Karen B; Jobling, Anne

    2006-06-01

    There has been huge growth in the use of information technology (IT) in classrooms for learners of all ages. It has been suggested that computers in the classroom encourage independent and self-paced learning, provide immediate feedback, and improve self-motivation and self-confidence. Concurrently, there is increasing interest in the role of technology in educational programs for individuals with intellectual disabilities. However, although many claims are made about the benefits of computers and software packages, there is limited evidence-based information to support these claims. Researchers are now starting to look at the specific instructional design features that are hypothesised to facilitate educational outcomes, rather than over-emphasizing graphics and sounds. Research undertaken as part of a post-school program (Latch-On: Literacy and Technology - Hands On) at the University of Queensland investigated the use of computers by young adults with intellectual disabilities. The aims of the research reported in this paper were to address the challenges identified in the 'hype' surrounding different pieces of educational software and to develop a means of systematically analysing software for use in teaching programs.

  15. Parallel computation and the Basis system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, G.R.

    1992-12-16

    A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communication costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.

  16. Parallel computation and the basis system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, G.R.

    1993-05-01

    A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communications costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.

  17. User's manual for SEDCALC, a computer program for computation of suspended-sediment discharge

    USGS Publications Warehouse

    Koltun, G.F.; Gray, John R.; McElhone, T.J.

    1994-01-01

    Sediment-Record Calculations (SEDCALC), a menu-driven set of interactive computer programs, was developed to facilitate computation of suspended-sediment records. The programs comprising SEDCALC were developed independently in several District offices of the U.S. Geological Survey (USGS) to minimize the intensive labor associated with various aspects of sediment-record computations. SEDCALC operates on suspended-sediment-concentration data stored in American Standard Code for Information Interchange (ASCII) files in a predefined card-image format. Program options within SEDCALC can be used to assist in creating and editing the card-image files, as well as to reformat card-image files to and from formats used by the USGS Water-Quality System. SEDCALC provides options for creating card-image files containing time series of equal-interval suspended-sediment concentrations from (1) digitized suspended-sediment-concentration traces, (2) linear interpolation between log-transformed instantaneous suspended-sediment-concentration data stored at unequal time intervals, and (3) nonlinear interpolation between log-transformed instantaneous suspended-sediment-concentration data stored at unequal time intervals. Suspended-sediment discharge can be computed from the streamflow and suspended-sediment-concentration data or by application of transport relations derived by regressing log-transformed instantaneous streamflows on log-transformed instantaneous suspended-sediment concentrations or discharges. The computed suspended-sediment-discharge data are stored in card-image files that can be either directly imported to the USGS Automated Data Processing System or used to generate plots by means of other SEDCALC options.
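
    Two of the computations described above are easy to illustrate in Python (illustrative only, not SEDCALC code; the observation times, concentrations, streamflow, and hourly interval are invented): interpolating log-transformed concentrations to an equal-interval series, and converting streamflow and concentration to suspended-sediment discharge with the standard USGS relation Qs = Qw x Cs x 0.0027 (tons/day, for Qw in ft³/s and Cs in mg/L).

      import numpy as np

      t_obs = np.array([0.0, 3.5, 9.0, 24.0])        # hours, unequal intervals
      c_obs = np.array([120., 480., 260., 90.])      # suspended sediment, mg/L

      t_eq = np.arange(0.0, 24.1, 1.0)               # hourly series
      c_eq = np.exp(np.interp(t_eq, t_obs, np.log(c_obs)))  # linear in log C

      q_eq = np.full_like(t_eq, 850.0)               # streamflow, ft3/s (made up)
      qs = q_eq * c_eq * 0.0027                      # suspended-sediment discharge
      print(qs.round(1))                             # tons/day at each hour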

  18. Simple, efficient allocation of modelling runs on heterogeneous clusters with MPI

    USGS Publications Warehouse

    Donato, David I.

    2017-01-01

    In scientific modelling and computation, the choice of an appropriate method for allocating tasks for parallel processing depends on the computational setting and on the nature of the computation. The allocation of independent but similar computational tasks, such as modelling runs or Monte Carlo trials, among the nodes of a heterogeneous computational cluster is a special case that has not been specifically evaluated previously. A simulation study shows that a method of on-demand (that is, worker-initiated) pulling from a bag of tasks in this case leads to reliably short makespans for computational jobs despite heterogeneity both within and between cluster nodes. A simple reference implementation in the C programming language with the Message Passing Interface (MPI) is provided.
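
    The worker-initiated ("pull") allocation evaluated in the study is straightforward to sketch with mpi4py (a Python stand-in for the paper's C/MPI reference implementation; run_model() and the task count are hypothetical):

      from mpi4py import MPI

      def run_model(task):
          return task ** 2                     # placeholder for one modelling run

      comm = MPI.COMM_WORLD
      TAG_REQUEST, TAG_TASK, TAG_STOP = 1, 2, 3

      if comm.Get_rank() == 0:                 # manager holds the bag of tasks
          tasks = list(range(100))
          workers_left = comm.Get_size() - 1
          status = MPI.Status()
          while workers_left > 0:
              comm.recv(source=MPI.ANY_SOURCE, tag=TAG_REQUEST, status=status)
              if tasks:
                  comm.send(tasks.pop(), dest=status.Get_source(), tag=TAG_TASK)
              else:
                  comm.send(None, dest=status.Get_source(), tag=TAG_STOP)
                  workers_left -= 1
      else:                                    # workers pull work on demand, so
          status = MPI.Status()                # faster nodes receive more tasks
          while True:
              comm.send(None, dest=0, tag=TAG_REQUEST)
              task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
              if status.Get_tag() == TAG_STOP:
                  break
              run_model(task)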

  19. AI in manufacturing

    NASA Astrophysics Data System (ADS)

    Gross, John E.; Minato, Rick; Smith, David M.; Loftin, R. B.; Savely, Robert T.

    1991-10-01

    AI techniques are shown to have been useful in such aerospace industry tasks as vehicle configuration layouts, process planning, tool design, numerically-controlled programming of tools, production scheduling, and equipment testing and diagnosis. Accounts are given of illustrative experiences at the production facilities of three major aerospace defense contractors. Also discussed is NASA's autonomous Intelligent Computer-Aided Training System, for such ambitious manned programs as Space Station Freedom, which employs five different modules to constitute its job-independent training architecture.

  20. Java Mission Evaluation Workstation System

    NASA Technical Reports Server (NTRS)

    Pettinger, Ross; Watlington, Tim; Ryley, Richard; Harbour, Jeff

    2006-01-01

    The Java Mission Evaluation Workstation System (JMEWS) is a collection of applications designed to retrieve, display, and analyze both real-time and recorded telemetry data. This software is currently being used by both the Space Shuttle Program (SSP) and the International Space Station (ISS) program. JMEWS was written in the Java programming language to satisfy the requirement of platform independence. An object-oriented design was used to satisfy additional requirements and to make the software easily extendable. By virtue of its platform independence, JMEWS can be used on the UNIX workstations in the Mission Control Center (MCC) and on office computers. JMEWS includes an interactive editor that allows users to easily develop displays that meet their specific needs. The displays can be developed and modified while viewing data. By simply selecting a data source, the user can view real-time, recorded, or test data.

  1. Accelerated Reader. What Works Clearinghouse Intervention Report

    ERIC Educational Resources Information Center

    What Works Clearinghouse, 2009

    2009-01-01

    "Accelerated Reader" is a computer-based reading management system designed to complement an existing classroom literacy program for grades pre-K-12. It is designed to increase the amount of time students spend reading independently. Students choose reading-level appropriate books or short stories for which Accelerated Reader tests are…

  2. Negotiation Performance: Antecedents, Outcomes, and Training Recommendations

    DTIC Science & Technology

    2011-10-01

    Tutorial, Cognitive Apprenticeships, Instructional Conversations, Independent Programmed Instruction, Computer-Based Instruction… procedural knowledge, as well as the more distal antecedents of individual difference variables (e.g., cognitive ability, personality) and psychological processes (e.g., cognitive, motivational, and emotional).

  3. ORE's GENeric Evaluation SYStem: GENESYS 1988-89.

    ERIC Educational Resources Information Center

    Baenen, Nancy; And Others

    GENESYS--GENeric Evaluation SYStem--is a method of streamlining data collection and evaluation through the use of computer technology. GENESYS has allowed the Office of Research and Evaluation (ORE) of the Austin (Texas) Independent School District to evaluate a multitude of contrasting programs with limited resources. By standardizing methods and…

  4. Uses of Technology in Community Colleges: A Resource Book for Community College Teachers and Administrators.

    ERIC Educational Resources Information Center

    Gooler, Dennis D., Ed.

    This resource guide for community college teachers and administrators focuses on hardware and software. The following are discussed: (1) individual technologies--computer-assisted instruction, audio tape, films, filmstrips/slides, dial access, programmed instruction, learning activity packages, video cassettes, cable TV, independent learning labs,…

  5. Proving the correctness of the flight director program EADIFD, volume 1

    NASA Technical Reports Server (NTRS)

    Lee, F. J.; Maurer, W. D.

    1977-01-01

    EADIFD is written in symbolic assembly language for execution on the C4000 airborne computer. It is a subprogram of an aircraft navigation and guidance program and is used to generate pitch and roll command signals for use in terminal airspace. The proof of EADIFD was carried out by an inductive assertion method consisting of two parts, a verification condition generator and a source-language-independent proof checker. With the specifications provided by NASA, EADIFD was proved correct. The termination of the program is guaranteed, and the program contains no instructions that can modify it under any conditions.

  6. SMMP v. 3.0—Simulating proteins and protein interactions in Python and Fortran

    NASA Astrophysics Data System (ADS)

    Meinke, Jan H.; Mohanty, Sandipan; Eisenmenger, Frank; Hansmann, Ulrich H. E.

    2008-03-01

    We describe a revised and updated version of the program package SMMP. SMMP is an open-source FORTRAN package for molecular simulation of proteins within the standard geometry model. It is designed as a simple and inexpensive tool for researchers and students to become familiar with protein simulation techniques. SMMP 3.0 sports a revised API increasing its flexibility, an implementation of the Lund force field, multi-molecule simulations, a parallel implementation of the energy function, Python bindings, and more.

    Program summary
    Title of program: SMMP
    Catalogue identifier: ADOJ_v3_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADOJ_v3_0.html
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    Programming language used: FORTRAN, Python
    No. of lines in distributed program, including test data, etc.: 52 105
    No. of bytes in distributed program, including test data, etc.: 599 150
    Distribution format: tar.gz
    Computer: Platform independent
    Operating system: OS independent
    RAM: 2 Mbytes
    Classification: 3
    Does the new version supersede the previous version?: Yes
    Nature of problem: Molecular mechanics computations and Monte Carlo simulation of proteins.
    Solution method: Utilizes ECEPP2/3, FLEX, and Lund potentials. Includes Monte Carlo simulation algorithms for canonical, as well as for generalized, ensembles.
    Reasons for new version: API changes and increased functionality.
    Summary of revisions: Added Lund potential; parameters used in subroutines are now passed as arguments; multi-molecule simulations; parallelized energy calculation for ECEPP; Python bindings.
    Restrictions: The consumed CPU time increases with the size of the protein molecule.
    Running time: Depends on the size of the simulated molecule.

  7. Internet (WWW) based system of ultrasonic image processing tools for remote image analysis.

    PubMed

    Zeng, Hong; Fei, Ding-Yu; Fu, Cai-Ting; Kraft, Kenneth A

    2003-07-01

    Ultrasonic Doppler color imaging can provide anatomic information and simultaneously render flow information within blood vessels for diagnostic purposes. Many researchers are currently developing ultrasound image processing algorithms in order to provide physicians with accurate clinical parameters from the images. Because researchers use a variety of computer languages and work on different computer platforms to implement their algorithms, it is difficult for other researchers and physicians to access those programs. A system has been developed using World Wide Web (WWW) technologies and HTTP communication protocols to publish our ultrasonic Angle Independent Doppler Color Image (AIDCI) processing algorithm and several general measurement tools on the Internet, where authorized researchers and physicians can easily access the program using web browsers to carry out remote analysis of their local ultrasonic images or images provided from the database. In order to overcome potential incompatibility between programs and users' computer platforms, ActiveX technology was used in this project. The technique developed may also be used for other research fields.

  8. Some Programs Should Not Run on Laptops - Providing Programmatic Access to Applications Via Web Services

    NASA Astrophysics Data System (ADS)

    Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.

    2003-12-01

    Modern laptop computers, and personal computers, can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this doesn't mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based, computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or written for an operating environment, that is unsupported on the local computer, (b) software libraries or utilities required to execute a program are not available on the user's computer, (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer, (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program, and (e) the computational program may require current information that is not available to all client computers.

    Until recently, distributed, server-based, computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and they executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform- and language-independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC Community, we have implemented several computational and utility programs using a Web Service architecture. We have hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme).

    We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon.

    Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.

  9. cloudPEST - A python module for cloud-computing deployment of PEST, a program for parameter estimation

    USGS Publications Warehouse

    Fienen, Michael N.; Kunicki, Thomas C.; Kester, Daniel E.

    2011-01-01

    This report documents cloudPEST, a Python module with functions to facilitate deployment of the model-independent parameter estimation code PEST in a cloud-computing environment. cloudPEST makes use of low-level, freely available command-line tools that interface with the Amazon Elastic Compute Cloud (EC2™) and that are unlikely to change dramatically. This report describes the preliminary setup for both Python and EC2 tools and subsequently describes the functions themselves. The code and guidelines have been tested primarily on the Windows® operating system but are extensible to Linux®.

  10. Method for simultaneous overlapped communications between neighboring processors in a multiple

    DOEpatents

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1991-01-01

    A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

  11. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A modular process that can efficiently solve large-scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics while retaining the efficiency of the individual disciplines. Computational domain independence of the individual disciplines is maintained using a meta-programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large-scale aerospace problems on several supercomputers. The super-scalability and portability of the approach are demonstrated on several parallel computers.

  12. Stresses in acoustically excited panels and shuttle insulation tiles

    NASA Technical Reports Server (NTRS)

    Otalvo, I. U.

    1976-01-01

    Natural vibration and acoustic response results are presented for a 36 x 18-inch panel with eighteen 6 x 6-inch tiles of 1.0-, 1.6-, and 2.3-inch thickness. Computed results for an untiled panel are compared with experiments performed earlier. Natural frequency and acoustic response comparisons are also given for independent analyses performed on tiled and untiled panels. The results indicate the general applicability of the computer programs developed for use as shuttle design and analysis tools.

  13. Flight program language requirements. Volume 2: Requirements and evaluations

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The efforts and results are summarized for a study to establish requirements for a flight programming language for future onboard computer applications. Several different languages were available as potential candidates for future NASA flight programming efforts. The study centered around an evaluation of the four most pertinent existing aerospace languages. Evaluation criteria were established, and selected kernels from the current Saturn 5 and Skylab flight programs were used as benchmark problems for sample coding. An independent review of the language specifications incorporated anticipated future programming requirements into the evaluation. A set of detailed language requirements was synthesized from these activities. The details of program language requirements and of the language evaluations are described.

  14. Software fault-tolerance by design diversity DEDIX: A tool for experiments

    NASA Technical Reports Server (NTRS)

    Avizienis, A.; Gunningberg, P.; Kelly, J. P. J.; Lyu, R. T.; Strigini, L.; Traverse, P. J.; Tso, K. S.; Voges, U.

    1986-01-01

    The use of multiple versions of a computer program, independently designed from a common specification, to reduce the effects of an error is discussed. If these versions are designed by independent programming teams, it is expected that a fault in one version will not have the same behavior as any fault in the other versions. Since the errors in the output of the versions are different and uncorrelated, it is possible to run the versions concurrently, cross-check their results at prespecified points, and mask errors. A DEsign DIversity eXperiments (DEDIX) testbed was implemented to study the influence of common mode errors which can result in a failure of the entire system. The layered design of DEDIX and its decision algorithm are described.

  15. A Collection of Technical Studies Completed for the Computer-Aided Acquisition and Logistic Support (CALS) Program Fiscal Year 1988. Volume 2. Graphics, CGM MIL SPEC

    DTIC Science & Technology

    1991-03-01

    test cases are gathered, studied, and evaluated; industry and other national European programs are studied; and experience is gained. This evolution … application-callable layer. The CGM Generator can be used to record device-independent picture descriptions, conceptually in parallel with the …

  16. Data reduction software for LORAN-C flight test evaluation

    NASA Technical Reports Server (NTRS)

    Fischer, J. P.

    1979-01-01

    A set of programs is described that is designed to be run on an IBM 370/158 computer to read the recorded time differences from the tape produced by the LORAN data collection system, convert them to latitude/longitude, and produce various plotting input files. The programs were written so that they may be tailored easily to meet the demands of a particular data reduction job. The tape reader program is written in 370 assembler language, and the remaining programs are written in standard IBM FORTRAN-IV language. The tape reader program is dependent upon the recording format used by the data collection system and on the I/O macros used at the computing facility. The other programs are generally device-independent, although the plotting routines are dependent upon the plotting method used. The data reduction programs convert the recorded data to a more readily usable form: they convert the time-difference (TD) numbers to latitude/longitude (lat/long), format a printed listing of the TDs, lat/long, reference times, and other information derived from the data, and produce data files which may be used for subsequent plotting.

  17. Research on computer systems benchmarking

    NASA Technical Reports Server (NTRS)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance; the performance impact of optimization was studied in the context of our methodology for CPU performance characterization, based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
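
    The merging step can be illustrated with a toy Python calculation (the operation names, per-operation times, and counts below are invented, not from the reports): characterize a machine by a time per abstract-machine operation, a program by its dynamic operation counts, and estimate execution time as the dot product of the two.

      # Toy illustration of the abstract-machine idea; all numbers are made up.
      machine = {"fadd": 2e-9, "fmul": 3e-9, "load": 4e-9}   # seconds per operation
      program = {"fadd": 5e8, "fmul": 3e8, "load": 9e8}      # dynamic operation counts

      estimate = sum(program[op] * machine[op] for op in program)
      print(f"predicted run time: {estimate:.2f} s")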

  18. Informatics in radiology (infoRAD): free DICOM image viewing and processing software for the Macintosh computer: what's available and what it can do for you.

    PubMed

    Escott, Edward J; Rubinstein, David

    2004-01-01

    It is often necessary for radiologists to use digital images in presentations and conferences. Most imaging modalities produce images in the Digital Imaging and Communications in Medicine (DICOM) format. The image files tend to be large and thus cannot be directly imported into most presentation software, such as Microsoft PowerPoint; the large files also consume storage space. There are many free programs that allow viewing and processing of these files on a personal computer, including conversion to more common file formats such as the Joint Photographic Experts Group (JPEG) format. Free DICOM image viewing and processing software for computers running on the Microsoft Windows operating system has already been evaluated. However, many people use the Macintosh (Apple Computer) platform, and a number of programs are available for these users. The World Wide Web was searched for free DICOM image viewing or processing software that was designed for the Macintosh platform or is written in Java and is therefore platform independent. The features of these programs and their usability were evaluated. There are many free programs for the Macintosh platform that enable viewing and processing of DICOM images. (c) RSNA, 2004.

  19. INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P

    2012-10-01

    It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs where the complexity is polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory-saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
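
    In the width-1 special case (when the graph is itself a tree), the dynamic programming recursion reduces to a few lines. The Python sketch below (illustrative, not the INDDGO code) returns the weight of a maximum weighted independent set by combining, at each node, the best "node included" and "node excluded" values of its children.

      def max_weighted_independent_set(adj, weights, root=0):
          """adj: node -> list of neighbours (a tree); weights: node -> weight."""
          def dp(node, parent):
              incl = weights[node]  # best weight with `node` in the set
              excl = 0              # best weight with `node` excluded
              for child in adj[node]:
                  if child != parent:
                      c_incl, c_excl = dp(child, node)
                      incl += c_excl               # neighbours of an included node are out
                      excl += max(c_incl, c_excl)  # excluded node leaves the choice free
              return incl, excl
          return max(dp(root, None))

      # Path 0-1-2 with weights 1, 10, 1: the optimum picks node 1 alone.
      print(max_weighted_independent_set({0: [1], 1: [0, 2], 2: [1]},
                                         {0: 1, 1: 10, 2: 1}))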

  20. Establish a Baseline for Planning and Growth. Focus on Program Evaluation. Technology Report 92-114.

    ERIC Educational Resources Information Center

    Des Moines Public Schools, IA. Dept. of Information Management.

    This evaluation focuses on the use of computers, telephones, broadcast video, and related devices to support instructional activities and administrative functions in the Des Moines (Iowa) Independent Community School District. The findings are presented in five parts: (1) Context Evaluation--History and Recent Improvements (video and instructional…

  1. Effect of Animated Graphic Annotations and Immediate Visual Feedback in Aiding Japanese Pronunciation Learning: A Comparative Study

    ERIC Educational Resources Information Center

    Hew, Soon-Hin; Ohki, Mitsuru

    2004-01-01

    This study examines the effectiveness of imagery and electronic visual feedback in facilitating students' acquisition of Japanese pronunciation skills. The independent variables, animated graphic annotation (AGA) and immediate visual feedback (IVF) were integrated into a Japanese computer-assisted language learning (JCALL) program focused on the…

  2. Microcomputers in Schools as a Teaching and Learning Aid.

    ERIC Educational Resources Information Center

    Trotman-Dickenson, D. I.

    1986-01-01

    Presents the findings of a survey of comprehensive and independent schools' use of microcomputers as teaching and learning aids in economics. Results suggest that use is widespread but not intensive. Teachers allocate few hours to computer programs per year, have difficulty finding suitable software, and fail to encourage use by girls. (JDH)

  3. Improving Students' Reading Fluency through the Use of Phonics and Word Recognition Strategies.

    ERIC Educational Resources Information Center

    Ballard, Christine; Jacocks, Kathleen

    This study describes a program designed to improve student reading fluency. The targeted population consisted of first and third grade students in a growing urban community in the Midwest. Evidence for the existence of the problem included standardized test scores and independent computer reports that measured academic achievement, phonic…

  4. Calculation of gravity and magnetic anomalies along profiles with end corrections and inverse solutions for density and magnetization

    USGS Publications Warehouse

    Cady, John W.

    1977-01-01

    A computer program is presented which performs, for one or more bodies, along a profile perpendicular to strike, both forward calculations for the magnetic and gravity anomaly fields and independent gravity and magnetic inverse calculations for density and susceptibility or remanent magnetization.

  5. Accelerating numerical solution of stochastic differential equations with CUDA

    NASA Astrophysics Data System (ADS)

    Januszewski, M.; Kostur, M.

    2010-01-01

    Numerical integration of stochastic differential equations is commonly used in many branches of science. In this paper we present how to accelerate this kind of numerical calculation with popular NVIDIA Graphics Processing Units using the CUDA programming environment. We address general aspects of numerical programming on stream processors and illustrate them by two examples: the noisy phase dynamics in a Josephson junction and the noisy Kuramoto model. In the presented cases the measured speedup can be as high as 675× compared to a typical CPU, which corresponds to several billion integration steps per second. This means that calculations which took weeks can now be completed in less than one hour. This brings stochastic simulation to a completely new level, opening for research a whole new range of problems which can now be solved interactively.

    Program summary
    Program title: SDE
    Catalogue identifier: AEFG_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFG_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU GPL v3
    No. of lines in distributed program, including test data, etc.: 978
    No. of bytes in distributed program, including test data, etc.: 5905
    Distribution format: tar.gz
    Programming language: CUDA C
    Computer: any system with a CUDA-compatible GPU
    Operating system: Linux
    RAM: 64 MB of GPU memory
    Classification: 4.3
    External routines: The program requires the NVIDIA CUDA Toolkit Version 2.0 or newer and the GNU Scientific Library v1.0 or newer. Optionally, gnuplot is recommended for quick visualization of the results.
    Nature of problem: Direct numerical integration of stochastic differential equations is a computationally intensive problem, due to the necessity of calculating multiple independent realizations of the system. We exploit the inherent parallelism of this problem and perform the calculations on GPUs using the CUDA programming environment. The GPU's ability to execute hundreds of threads simultaneously makes it possible to speed up the computation by over two orders of magnitude, compared to a typical modern CPU.
    Solution method: The stochastic Runge-Kutta method of the second order is applied to integrate the equation of motion. Ensemble-averaged quantities of interest are obtained through averaging over multiple independent realizations of the system.
    Unusual features: The numerical solution of the stochastic differential equations in question is performed on a GPU using the CUDA environment.
    Running time: < 1 minute
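
    The structure of the computation, averaging over many independent realizations, can be sketched in Python with the simpler Euler-Maruyama scheme (plainly a substitute: the paper uses a second-order stochastic Runge-Kutta on a GPU, while this CPU version only illustrates the ensemble idea; the drift, noise strength, and step counts are invented).

      import numpy as np

      def euler_maruyama(drift, diffusion, x0, dt, n_steps, n_paths, rng):
          """Integrate dx = drift(x) dt + diffusion(x) dW for n_paths realizations."""
          x = np.full(n_paths, x0, dtype=float)
          for _ in range(n_steps):
              dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)  # Wiener increments
              x += drift(x) * dt + diffusion(x) * dw
          return x

      # Example: noisy phase dynamics x' = a - sin(x) + noise (cf. Josephson junction)
      rng = np.random.default_rng(0)
      paths = euler_maruyama(lambda x: 1.2 - np.sin(x),
                             lambda x: 0.3 * np.ones_like(x),
                             x0=0.0, dt=1e-3, n_steps=10_000, n_paths=10_000, rng=rng)
      print(paths.mean())  # ensemble average of the final state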

  6. Performance Models for Split-execution Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S; McCaskey, Alex; Schrock, Jonathan

    Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.
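
    The paper's conclusion can be illustrated with a toy behavioral model (all parameters below are invented for illustration): once the per-variable translation cost dominates, total wall time is insensitive to how fast the quantum processor itself runs.

    ```python
    # Toy timing model of a split-execution pipeline. The translation
    # ("embedding") stage scales with problem size and is independent of
    # QPU speed, so it becomes the bottleneck for large problems.
    def split_execution_time(n_vars, n_reads,
                             t_embed_per_var=5e-3,   # translation cost per variable (s)
                             t_anneal=20e-6,          # per-read QPU time (s)
                             t_readout=120e-6):       # per-read I/O time (s)
        t_translate = t_embed_per_var * n_vars
        t_quantum = n_reads * (t_anneal + t_readout)
        return t_translate, t_quantum

    t_tr, t_q = split_execution_time(n_vars=2000, n_reads=10_000)
    print(f"translate: {t_tr:.2f}s  quantum+readout: {t_q:.2f}s")
    # translate: 10.00s  quantum+readout: 1.40s -> the interface dominates
    ```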

  7. A phenomenographic study of the ways of understanding conditional and repetition structures in computer programming languages

    NASA Astrophysics Data System (ADS)

    Bucks, Gregory Warren

    Computers have become an integral part of how engineers complete their work, allowing them to collect and analyze data, model potential solutions, and aid in production through automation and robotics. In addition, computers are essential elements of the products themselves, from tennis shoes to construction materials. An understanding of how computers function, both at the hardware and software level, is essential for the next generation of engineers. Despite the need for engineers to develop a strong background in computing, little opportunity is given for engineering students to develop these skills. Learning to program is widely seen as a difficult task, requiring students to develop not only an understanding of specific concepts, but also a way of thinking. In addition, students are forced to learn a new tool, in the form of the programming environment employed, along with these concepts and thought processes. Because of this, many students will not develop a sufficient proficiency in programming, even after progressing through the traditional introductory programming sequence. This is a significant problem, especially in the engineering disciplines, where very few students receive more than one or two semesters' worth of instruction in an already crowded engineering curriculum. To address these issues, new pedagogical techniques must be investigated in an effort to enhance the ability of engineering students to develop strong computing skills. However, these efforts are hindered by the lack of published assessment instruments available for probing an individual's understanding of programming concepts across programming languages. Traditionally, programming knowledge has been assessed by producing written code in a specific language. This can be an effective method, but it does not lend itself well to comparing the pedagogical impact of different programming environments, languages, or paradigms. This dissertation presents a phenomenographic research study exploring the different ways of understanding held by individuals of two programming concepts: conditional structures and repetition structures. This work lays the foundation for the development of language-independent assessment instruments, which can ultimately be used to assess the pedagogical implications of various programming environments.

  8. Structural Technology Evaluation and Analysis Program (STEAP). Delivery Order 0035: Dynamics and Control and Computational Design of Flapping Wing Micro Air Vehicles

    DTIC Science & Technology

    2012-10-01

    library as a principal Requestor. The M3CT requestor is written in Java, leveraging the cross-platform deployment capabilities needed for a broadly...each application to the Java programming language, the independently generated sources are wrapped with JNA or Groovy. The Java wrapping process...unlimited. Figure 13. Leveraging Languages. Once the underlying product is available to the Java source as a library, the application leverages

  9. GPU-based Parallel Application Design for Emerging Mobile Devices

    NASA Astrophysics Data System (ADS)

    Gupta, Kshitij

    A revolution is underway in the computing world that is causing a fundamental paradigm shift in device capabilities and form-factor, with a move from well-established legacy desktop/laptop computers to mobile devices in varying sizes and shapes. Amongst all the tasks these devices must support, graphics has emerged as the 'killer app' for providing a fluid user interface and high-fidelity game rendering, effectively making the graphics processor (GPU) one of the key components in (present and future) mobile systems. By utilizing the GPU as a general-purpose parallel processor, this dissertation explores the GPU computing design space from an applications standpoint, in the mobile context, by focusing on key challenges presented by these devices---limited compute, memory bandwidth, and stringent power consumption requirements---while improving the overall application efficiency of the increasingly important speech recognition workload for mobile user interaction. We broadly partition trends in GPU computing into four major categories. We analyze hardware and programming model limitations in current-generation GPUs and detail an alternate programming style called Persistent Threads, identify four use case patterns, and propose minimal modifications that would be required for extending native support. We show how by manually extracting data locality and altering the speech recognition pipeline, we are able to achieve significant savings in memory bandwidth while simultaneously reducing the compute burden on GPU-like parallel processors. As we foresee GPU computing evolving from its current 'co-processor' model into an independent 'applications processor' that is capable of executing complex work independently, we create an alternate application framework that enables the GPU to handle all control-flow dependencies autonomously at run-time while minimizing host involvement to just issuing commands, which facilitates an efficient application implementation. Finally, as compute and communication capabilities of mobile devices improve, we analyze energy implications of processing speech recognition locally (on-chip) and offloading it to servers (in-cloud).

  10. The Perception and Costs of the Interview Process for Plastic Surgery Residency Programs: Can the Process Be Streamlined?

    PubMed

    Susarla, Srinivas M; Swanson, Edward W; Slezak, Sheri; Lifchez, Scott D; Redett, Richard J

    2017-01-01

    The purpose of this study was to assess applicant perceptions and costs associated with the interview process for plastic surgery residency positions. This was a cross-sectional survey of applicants to the integrated- and independent-track residencies at the authors' institution. All applicants who were interviewed were invited to complete a Web-based survey on costs and perceptions of various components of the interview process. Descriptive and bivariate statistics were computed to compare applicants to the two program tracks. Fifty-three applicants were interviewed for residency positions; 48 completed a survey (90.5 percent response rate). Thirty-four applicants were candidates for the integrated program; 16 applicants were candidates for the independent program. The program spent $2763 per applicant interviewed; 63 percent of applicants spent more than $5000 on the interview process. More than 70 percent of applicants missed more than 7 days of work to attend interviews. Independent applicants felt less strongly that interviews were critical to the selection process and placed less value on physically visiting the hospital and direct, in-person interaction. Applicants placed little value on program informational talks. Applicants who had experience with virtual interviews felt more positively about the format of a video interview relative to those who did not. The residency interview process is resource intensive for programs and applicants. Removing informational talks may improve the process. Making physical tours and in-person interviews optional are other alternatives that merit future study.

  11. Update on PISCES

    NASA Technical Reports Server (NTRS)

    Pearson, Don; Hamm, Dustin; Kubena, Brian; Weaver, Jonathan K.

    2010-01-01

    An updated version of the Platform Independent Software Components for the Exploration of Space (PISCES) software library is available. A previous version was reported in Library for Developing Spacecraft-Mission-Planning Software (MSC-22983), NASA Tech Briefs, Vol. 25, No. 7 (July 2001), page 52. To recapitulate: This software provides for Web-based, collaborative development of computer programs for planning trajectories and trajectory-related aspects of spacecraft-mission design. The library was built using state-of-the-art object-oriented concepts and software-development methodologies. The components of PISCES include Java-language application programs arranged in a hierarchy of classes that facilitates the reuse of the components. As its full name suggests, the PISCES library affords platform-independence: The Java language makes it possible to use the classes and application programs with a Java virtual machine, which is available in most Web-browser programs. Another advantage is expandability: Object orientation facilitates expansion of the library through creation of a new class. Improvements in the library since the previous version include development of orbital-maneuver-planning and rendezvous-launch-window application programs, enhancement of capabilities for propagation of orbits, and development of a desktop user interface.

  12. QRev—Software for computation and quality assurance of acoustic Doppler current profiler moving-boat streamflow measurements—User’s manual for version 2.8

    USGS Publications Warehouse

    Mueller, David S.

    2016-05-12

    The software program QRev computes the discharge from moving-boat acoustic Doppler current profiler measurements using data collected with any of the Teledyne RD Instruments or SonTek bottom-tracking acoustic Doppler current profilers. The computation of discharge is independent of the manufacturer of the acoustic Doppler current profiler because QRev applies consistent algorithms independent of the data source. In addition, QRev automates filtering and quality checking of the collected data and provides feedback to the user on potential quality issues with the measurement. Various statistics and characteristics of the measurement, in addition to a simple uncertainty assessment, are provided to the user to assist in properly rating the measurement. QRev saves an Extensible Markup Language (XML) file that can be imported into databases or electronic field-notes software. The user interacts with QRev through a tablet-friendly graphical user interface. This report is the manual for version 2.8 of QRev.
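
    Moving-boat discharge is conventionally computed with the cross-product method: each depth cell's contribution is the cross product of water velocity and boat velocity, integrated over depth and ensemble duration. A schematic NumPy sketch of the measured-portion integral (QRev's actual algorithms also estimate the unmeasured top, bottom, and edge zones):

    ```python
    import numpy as np

    def cross_product_discharge(u_w, v_w, u_b, v_b, cell_height, dt):
        """Schematic moving-boat discharge: per ensemble (column) and depth
        cell, dQ = (u_w*v_b - v_w*u_b) * cell_height * dt. Water velocities
        have shape (cells, ensembles); boat velocities have shape (ensembles,)."""
        dq = (u_w * v_b - v_w * u_b) * cell_height * dt   # m^3/s per cell
        return dq.sum()

    rng = np.random.default_rng(1)
    n_cells, n_ens = 20, 300
    u_w = rng.normal(1.0, 0.1, (n_cells, n_ens))    # streamwise water velocity (m/s)
    v_w = rng.normal(0.0, 0.05, (n_cells, n_ens))
    u_b = np.zeros(n_ens)                            # boat ferries across in +y
    v_b = np.full(n_ens, 0.8)
    print(cross_product_discharge(u_w, v_w, u_b, v_b, cell_height=0.25, dt=1.0))
    ```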

  13. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    PubMed

    Nadkarni, P M; Miller, P L

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.

  14. Measured Radiation Patterns of the Boeing 91-Element ICAPA Antenna With Comparison to Calculations

    NASA Technical Reports Server (NTRS)

    Lambert, Kevin M.; Burke, Thomas (Technical Monitor)

    2003-01-01

    This report presents measured antenna patterns of the Boeing 91-Element Integrated Circuit Active Phased Array (ICAPA) Antenna at 19.85 GHz. These patterns were taken in support of various communication experiments that were performed using the antenna as a testbed. The goal here is to establish a foundation of the performance of the antenna for the experiments. An independent variable used in the communication experiments was the scan angle of the antenna. Therefore, the results presented here are patterns as a function of scan angle, at the stated frequency. Only a limited number of scan angles could be measured. Therefore, a computer program was written to simulate the pattern performance of the antenna at any scan angle. This program can be used to facilitate further study of the antenna. The computed patterns from this program are compared to the measured patterns as a means of validating the model.
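
    A minimal stand-in for such a pattern simulation (a uniform linear array factor; the actual 91-element ICAPA aperture and its element patterns are not modeled here) shows how the beam peak follows the commanded scan angle:

    ```python
    import numpy as np

    def array_factor_db(theta_deg, scan_deg, n=91, spacing_wl=0.5):
        """Uniform linear array factor versus observation angle, steered
        to scan_deg (element pattern and 2-D lattice effects ignored)."""
        theta = np.radians(theta_deg)
        theta0 = np.radians(scan_deg)
        k_d = 2.0 * np.pi * spacing_wl                    # k*d in radians
        psi = k_d * (np.sin(theta) - np.sin(theta0))      # inter-element phase
        elems = np.arange(n)[:, None]
        af = np.abs(np.exp(1j * elems * psi).sum(axis=0)) / n
        return 20.0 * np.log10(np.maximum(af, 1e-6))

    angles = np.linspace(-90, 90, 721)
    pattern = array_factor_db(angles, scan_deg=30.0)
    print(angles[pattern.argmax()])   # beam peak lands at the commanded scan angle
    ```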

  15. Durability evaluation of ceramic components using CARES/LIFE

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.

    1994-01-01

    The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing the power law, Paris law, or Walker equation. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Application of this design methodology is demonstrated using experimental data from alumina bar and disk flexure specimens which exhibit SCG when exposed to water.
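
    As a minimal sketch of the multiaxial fast-fracture piece of this methodology (the time-dependent SCG, NSA, and Batdorf options are omitted), the PIA model sums a Weibull risk term for each tensile principal stress over the discretized component volume:

    ```python
    import numpy as np

    def pia_failure_probability(principal_stresses, volumes, m, sigma0_v0):
        """Fast-fracture failure probability under the principle of
        independent action (PIA): each tensile principal stress contributes
        (sigma/sigma0)^m, weighted by element volume. `sigma0_v0` is the
        Weibull scale parameter referenced to unit volume."""
        s = np.maximum(principal_stresses, 0.0)   # compressive stresses ignored
        risk = (volumes[:, None] * (s / sigma0_v0) ** m).sum()
        return 1.0 - np.exp(-risk)

    # Two elements, three principal stresses each (MPa), volumes in mm^3:
    stresses = np.array([[220.0, 90.0, -40.0],
                         [180.0, 60.0, 10.0]])
    vols = np.array([2.0, 3.0])
    print(pia_failure_probability(stresses, vols, m=10.0, sigma0_v0=400.0))
    ```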

  16. Durability evaluation of ceramic components using CARES/LIFE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nemeth, N.N.; Janosik, L.A.; Gyekenyesi, J.P.

    1996-01-01

    The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing the power law, Paris law, or Walker equation. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Application of this design methodology is demonstrated using experimental data from alumina bar and disk flexure specimens, which exhibit SCG when exposed to water.

  17. The Design and Evaluation of "CAPTools"--A Computer Aided Parallelization Toolkit

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Frumkin, Michael; Hribar, Michelle; Jin, Haoqiang; Waheed, Abdul; Johnson, Steve; Cross, Jark; Evans, Emyr; Ierotheou, Constantinos; Leggett, Pete

    1998-01-01

    Writing applications for high performance computers is a challenging task. Although writing code by hand still offers the best performance, it is extremely costly and often not very portable. The Computer Aided Parallelization Tools (CAPTools) are a toolkit designed to help automate the mapping of sequential FORTRAN scientific applications onto multiprocessors. CAPTools consists of the following major components: an inter-procedural dependence analysis module that incorporates user knowledge; a 'self-propagating' data partitioning module driven via user guidance; an execution control mask generation and optimization module for the user to fine tune parallel processing of individual partitions; a program transformation/restructuring facility for source code clean up and optimization; a set of browsers through which the user interacts with CAPTools at each stage of the parallelization process; and a code generator supporting multiple programming paradigms on various multiprocessors. Besides describing the rationale behind the architecture of CAPTools, the parallelization process is illustrated via case studies involving structured and unstructured meshes. The programming process and the performance of the generated parallel programs are compared against other programming alternatives based on the NAS Parallel Benchmarks, ARC3D and other scientific applications. Based on these results, a discussion on the feasibility of constructing architectural independent parallel applications is presented.

  18. Failure detection in high-performance clusters and computers using chaotic map computations

    DOEpatents

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
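
    The idea can be illustrated compactly (a hypothetical sketch, not the patented implementation): identical seeds on healthy components must produce bit-identical chaotic trajectories, and the map's sensitivity to perturbations turns any arithmetic or memory fault into rapid, easily detected divergence.

    ```python
    import numpy as np

    def logistic_trajectory(x0, steps, r=4.0):
        """Iterate the logistic map x -> r*x*(1-x); trajectories from the
        same seed must agree bit-for-bit on healthy, identical hardware."""
        x = x0
        out = np.empty(steps)
        for i in range(steps):
            x = r * x * (1.0 - x)
            out[i] = x
        return out

    def nodes_agree(traj_a, traj_b):
        # Chaotic sensitivity amplifies any single-step arithmetic fault
        # into a large, detectable divergence between the trajectories.
        return np.array_equal(traj_a, traj_b)

    ref = logistic_trajectory(0.123456789, 1000)
    dup = logistic_trajectory(0.123456789, 1000)
    print(nodes_agree(ref, dup))   # True unless a component corrupted a step
    ```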

  19. Space Debris Surfaces (Computer Code): Probability of No Penetration Versus Impact Velocity and Obliquity

    NASA Technical Reports Server (NTRS)

    Elfer, N.; Meibaum, R.; Olsen, G.

    1995-01-01

    A unique collection of computer codes, Space Debris Surfaces (SD_SURF), has been developed to assist in the design and analysis of space debris protection systems. SD_SURF calculates and summarizes a vehicle's vulnerability to space debris as a function of impact velocity and obliquity. An SD_SURF analysis will show which velocities and obliquities are the most probable to cause a penetration. This determination can help the analyst select a shield design that is best suited to the predominant penetration mechanism. The analysis also suggests the most suitable parameters for development or verification testing. The SD_SURF programs offer the option of either FORTRAN programs or Microsoft-EXCEL spreadsheets and macros. The FORTRAN programs work with BUMPERII. The EXCEL spreadsheets and macros can be used independently or with selected output from the SD_SURF FORTRAN programs. Examples will be presented of the interaction between space vehicle geometry, the space debris environment, and the penetration and critical damage ballistic limit surfaces of the shield under consideration.
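
    A schematic of the underlying bookkeeping (toy numbers, not SD_SURF's data): with Poisson-distributed impacts, the probability of no penetration is exp(-N), where N accumulates the expected penetrating impacts over velocity/obliquity bins that exceed the shield's ballistic limit; the per-bin contributions reveal which velocities and obliquities dominate the risk.

    ```python
    import numpy as np

    def pnp_surface(flux, penetrates, area, time):
        """Probability of no penetration, binned by impact velocity and
        obliquity. `flux` is impacts per m^2-year in each (velocity,
        obliquity) bin; `penetrates` flags bins above the ballistic limit."""
        contrib = flux * penetrates * area * time   # expected penetrations per bin
        return np.exp(-contrib.sum()), contrib

    flux = np.array([[1e-6, 5e-7],                  # toy (velocity x obliquity) bins
                     [8e-7, 2e-7]])
    pen = np.array([[0, 1],                          # 1 = exceeds ballistic limit
                    [1, 1]])
    pnp, c = pnp_surface(flux, pen, area=20.0, time=10.0)
    print(pnp)   # overall probability of no penetration
    print(c)     # which velocity/obliquity bins drive the risk
    ```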

  20. SCANS (Shipping Cask ANalysis System), a microcomputer-based analysis system for shipping cask design review: User's manual to Version 3a. Volume 1, Revision 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mok, G.C.; Thomas, G.R.; Gerhard, M.A.

    SCANS (Shipping Cask ANalysis System) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for evaluating safety analysis reports on spent fuel shipping casks. SCANS is an easy-to-use system that calculates the global response to impact loads, pressure loads and thermal conditions, providing reviewers with an independent check on analyses submitted by licensees. SCANS is based on microcomputers compatible with the IBM-PC family of computers. The system is composed of a series of menus, input programs, cask analysis programs, and output display programs. All data is entered through fill-in-the-blank input screens that contain descriptive data requests. Analysis options are based on regulatory cases described in the Code of Federal Regulations 10 CFR 71 and Regulatory Guides published by the US Nuclear Regulatory Commission in 1977 and 1978.

  1. Top ten reasons the World Wide Web may fail to change medical education.

    PubMed

    Friedman, R B

    1996-09-01

    The Internet's World Wide Web (WWW) offers educators a unique opportunity to introduce computer-assisted instructional (CAI) programs into the medical school curriculum. With the WWW, CAI programs developed at one medical school could be successfully used at other institutions without concern about hardware or software compatibility; further, programs could be maintained and regularly updated at a single central location, could be distributed rapidly, would be technology-independent, and would be presented in the same format on all computers. However, while the WWW holds promise for CAI, the author discusses ten reasons that educators' efforts to fulfill the Web's promise may fail, including the following: CAI is generally not fully integrated into the medical school curriculum; students are not tested on material taught using CAI; and CAI programs tend to be poorly designed. The author argues that medical educators must overcome these obstacles if they are to make truly effective use of the WWW in the classroom.

  2. Hutchins's University of Utopia: Institutional Independence, Academic Freedom, and Radical Restructuring

    ERIC Educational Resources Information Center

    Hoff, Peter Sloat

    2009-01-01

    In a crisis-plagued world looking to higher education for knowledge, wisdom, and solutions, higher education itself is stumbling. Its transformational thinking has frozen up like an overstressed computer program; and we need, in effect, to "push the reset button." In 1953, the renowned and controversial president of the University of Chicago,…

  3. DITT: a computer program for Data Interpretation for Torsional Tests

    USGS Publications Warehouse

    Chen, Albert T.F.

    1979-01-01

    Measurements of the helium concentration of soil samples collected and stored in Vacutainer-brand evacuated glass tubes show that Vacutainers are reliable containers for soil collection. Within the limits of reproducibility, helium content of soils appears to be independent of variations in soil temperature, barometric pressure, and quantity of soil moisture present in the sample.

  4. 3-D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sofronov, I.D.; Voronin, B.L.; Butnev, O.I.

    1997-12-31

    The aim of the work performed is to develop a 3D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS), satisfying the condition that numerical results be independent of the number of processors involved. Two basically different approaches to the structure of massive parallel computations have been developed. The first approach uses a 3D data matrix decomposition that is reconstructed at each temporal cycle and is a development of parallelization algorithms for multiprocessor CS with shareable memory. The second approach is based on a 3D data matrix decomposition that is not reconstructed during a temporal cycle. The program was developed on the 8-processor CS MP-3 made in VNIIEF and was adapted to the massively parallel CS Meiko-2 at LLNL by joint efforts of the VNIIEF and LLNL staffs. A large number of numerical experiments have been carried out with different numbers of processors, up to 256, and the efficiency of parallelization has been evaluated as a function of the number of processors and their parameters.

  5. One dimensional heavy ion beam transport: Energy independent model. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Farhat, Hamidullah

    1990-01-01

    An attempt is made to model the transport problem for heavy ion beams in various targets, employing the current level of understanding of the physics of high-charge and energy (HZE) particle interaction with matter. An energy-independent transport model, with the most simplified assumptions and proper parameters, is presented. The first and essential assumption in this case (energy-independent transport) is the high-energy characterization of the incident beam. The energy-independent equation is solved and applied to high-energy neon (Ne-20) and iron (Fe-56) beams in water. The analytical solution is given and compared to a numerical solution to determine the accuracy of the model. The lower-limit energy for neon and iron to qualify as high-energy beams is calculated according to the Barkas and Burger theory using the LBLFRG computer program. The calculated values in the density range of interest (50 g/sq cm of water) are 833.43 MeV/nuc for neon and 1597.68 MeV/nuc for iron. The analytical solution of the energy-independent transport equation gives the flux of the different collision terms. The fluxes of the individual collision terms are given, and the total fluxes are shown in graphs for different thicknesses of water. The flux values are calculated with the ANASTP computer code.
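
    In the straight-ahead, energy-independent setting the collision terms have simple closed forms; for example, the uncollided parent flux and the first-collision fragment flux follow from solving the transport equation term by term. A sketch with illustrative cross-sections (not the thesis' ANASTP code):

    ```python
    import numpy as np

    def parent_flux(x, sig0):
        """Uncollided primary-beam flux in the straight-ahead approximation."""
        return np.exp(-sig0 * x)

    def first_collision_flux(x, sig0, sig1, sig01):
        """Flux of fragments produced in exactly one collision: production
        from the parent at rate sig01, then attenuation with sig1. Solves
        dphi1/dx = -sig1*phi1 + sig01*phi0 (requires sig0 != sig1)."""
        return sig01 * (np.exp(-sig1 * x) - np.exp(-sig0 * x)) / (sig0 - sig1)

    x = np.linspace(0.0, 50.0, 6)   # areal density of water, g/cm^2
    print(parent_flux(x, sig0=0.04))
    print(first_collision_flux(x, sig0=0.04, sig1=0.03, sig01=0.02))
    ```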

  6. The 1980-81 AFOSR (Air Force Office of Scientific Research)-HTTM (Heat Transfer and Turbulence Mechanics)-Stanford Conference on Complex Turbulent Flows: Comparison of Computation and Experiment. Volume 3. Comparison of Computation with Experiment, and Computors’ Summary Report.

    DTIC Science & Technology

    1981-09-01

    organized the paperwork system, including finances, travel, filing, and programs in a highly independent and responsible fashion. Thanks are also due...three-dimensional transformation procedure for arbitrary non-orthogonal coordinate systems, for the purpose of the three-dimensional turbulent...transformation procedure for arbitrary non-orthogonal coordinate systems so as to acquire the generality in the application for elliptic flows (for the square

  7. Vector computer memory bank contention

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.

    1985-01-01

    A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.
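
    A small Monte Carlo model of the mechanism (illustrative parameters): each access reserves its bank for several ticks, and an access whose bank is still reserved stalls, so throughput collapses for strides (or random gathers) that revisit few banks quickly.

    ```python
    import random

    def simulate_contention(n_banks, busy_ticks, n_access, stride=None, seed=0):
        """Single-stream contention model: each access goes to a fixed-stride
        bank (or a random bank when stride is None) and stalls while that
        bank's reservation from a previous access is still active."""
        rng = random.Random(seed)
        free_at = [0] * n_banks        # tick at which each bank frees up
        t = 0
        for i in range(n_access):
            bank = rng.randrange(n_banks) if stride is None else (i * stride) % n_banks
            t = max(t + 1, free_at[bank])   # stall if the bank is still busy
            free_at[bank] = t + busy_ticks
        return t / n_access                 # average ticks per element

    for stride in (1, 8, None):   # unit stride, pathological stride, random gather
        print(stride, simulate_contention(16, 8, 10_000, stride))
    ```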

  8. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1992-01-01

    The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  9. Vector computer memory bank contention

    NASA Technical Reports Server (NTRS)

    Bailey, David H.

    1987-01-01

    A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.

  10. Predictors of Computer Use in Community-Dwelling Ethnically Diverse Older Adults

    PubMed Central

    Werner, Julie M.; Carlson, Mike; Jordan-Marsh, Maryalice; Clark, Florence

    2011-01-01

    Objective: In this study we analyzed self-reported computer use, demographic variables, psychosocial variables, and health and well-being variables collected from 460 ethnically diverse, community-dwelling elders in order to investigate the relationship computer use has with demographics, well-being and other key psychosocial variables in older adults. Background: Although younger elders with more education, those who employ active coping strategies, or those who are low in anxiety levels are thought to use computers at higher rates than others, previous research has produced mixed or inconclusive results regarding ethnic, gender, and psychological factors, or has concentrated on computer-specific psychological factors only (e.g., computer anxiety). Few such studies have employed large sample sizes or have focused on ethnically diverse populations of community-dwelling elders. Method: With a large number of overlapping predictors, zero-order analysis alone is poorly equipped to identify variables that are independently associated with computer use. Accordingly, both zero-order and stepwise logistic regression analyses were conducted to determine the correlates of two types of computer use: email and general computer use. Results: Younger age, greater level of education, non-Hispanic ethnicity, behaviorally active coping style, general physical health, and role-related emotional health each independently predicted computer usage. Conclusion: Study findings highlight differences in computer usage, especially in regard to Hispanic ethnicity and specific health and well-being factors. Application: Potential applications of this research include future intervention studies, individualized computer-based activity programming, or customizable software and user interface design for older adults responsive to a variety of personal characteristics and capabilities. PMID:22046718

  11. Predictors of computer use in community-dwelling, ethnically diverse older adults.

    PubMed

    Werner, Julie M; Carlson, Mike; Jordan-Marsh, Maryalice; Clark, Florence

    2011-10-01

    In this study, we analyzed self-reported computer use, demographic variables, psychosocial variables, and health and well-being variables collected from 460 ethnically diverse, community-dwelling elders to investigate the relationship computer use has with demographics, well-being, and other key psychosocial variables in older adults. Although younger elders with more education, those who employ active coping strategies, or those who are low in anxiety levels are thought to use computers at higher rates than do others, previous research has produced mixed or inconclusive results regarding ethnic, gender, and psychological factors or has concentrated on computer-specific psychological factors only (e.g., computer anxiety). Few such studies have employed large sample sizes or have focused on ethnically diverse populations of community-dwelling elders. With a large number of overlapping predictors, zero-order analysis alone is poorly equipped to identify variables that are independently associated with computer use. Accordingly, both zero-order and stepwise logistic regression analyses were conducted to determine the correlates of two types of computer use: e-mail and general computer use. Results indicate that younger age, greater level of education, non-Hispanic ethnicity, behaviorally active coping style, general physical health, and role-related emotional health each independently predicted computer usage. Study findings highlight differences in computer usage, especially in regard to Hispanic ethnicity and specific health and well-being factors. Potential applications of this research include future intervention studies, individualized computer-based activity programming, or customizable software and user interface design for older adults responsive to a variety of personal characteristics and capabilities.

  12. CARES/LIFE Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.

    2003-01-01

    This manual describes the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction (CARES/LIFE) computer program. The program calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. CARES/LIFE is an extension of the CARES (Ceramic Analysis and Reliability Evaluation of Structures) computer program. The program uses results from MSC/NASTRAN, ABAQUS, and ANSYS finite element analysis programs to evaluate component reliability due to inherent surface and/or volume type flaws. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing the power law, Paris law, or Walker law. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled by using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. The probabilistic time-dependent theories used in CARES/LIFE, along with the input and output for CARES/LIFE, are described. Example problems to demonstrate various features of the program are also included.

  13. Quantitative and theoretical analysis of the joint Department of Energy-National Institute of Standards and Technology Energy-Related Inventions Program from 1975 to 1995: Implications for development of public policy toward innovation

    NASA Astrophysics Data System (ADS)

    Pevenstein, Jack Edward

    This dissertation presents 18 alternative models for computing the social rate of return (SRR) of the joint Department of Energy (DOE)-National Institute of Standards and Technology (NIST) Energy-Related Inventions Program (ERIP) from 1975 to 1995. The models differ in the choice of societal benefit, the adjustments made to the benefits, and the accounting for initial investments in ERIP and annual program appropriations. Alternative quantitative measures of societal benefit include annual gross market sales of successfully commercialized ERIP-supported inventions, annual energy savings resulting from the use of such inventions, and pollution-remediation cost reductions due to decreased carbon emissions from greenhouse gases associated with more efficient energy generation. SRR computation employs the net present value (NPV) model, with the SRR being the discount rate that reduces the NPV of a stream of societal benefits to zero over a period of n years, given an initial investment and annual program appropriations. The SRR is the total rate of return to the nation from public investment in ERIP. The data used for computation were assembled by Dr. Marilyn A. Brown and her staff at Oak Ridge National Laboratory under contract to DOE since 1985. Other data on energy use and carbon emissions from greenhouse gas production come from official publications of DOE's Energy Information Administration. Mean ERIP SRR = 412.7%, with standard deviation = +/-426.5%. The population of the SRR sample is accepted as normally distributed at alpha = 0.05, using the Kolmogorov-Smirnov test. These SRRs, which appear reasonable in comparison with those computed by Professor Edwin Mansfield (Wharton School) for inventions and by Dr. Gregory Tassey (NIST Chief Economist) for NIST programs supporting innovations in measurement technology, show a significant underinvestment in public-service technology innovation evaluation programs for independent inventors and small technology-oriented businesses. Moreover, it is argued that ERIP [with its participants] is a good representation of a larger community of independent inventors and innovators comprising a resource the writer calls the "national innovation infrastructure." This national innovation infrastructure, like ERIP, is underinvested in terms of public support. Thus, the nation would benefit from a large-scale, value-adding, public-service innovative technology evaluation program modeled on ERIP. Further, support of such technology evaluation programs at both state and Federal levels should be an important priority of public technology policy.
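
    Operationally, the SRR of each model is an internal rate of return: the discount rate at which the NPV of net benefits, less the initial investment and annual appropriations, equals zero. A minimal sketch with invented numbers:

    ```python
    def npv(rate, benefits, initial_investment, appropriations):
        """Net present value of a benefit stream net of annual program
        appropriations, given an up-front investment in year 0."""
        total = -initial_investment
        for year, (b, a) in enumerate(zip(benefits, appropriations), start=1):
            total += (b - a) / (1.0 + rate) ** year
        return total

    def social_rate_of_return(benefits, initial, appropriations,
                              lo=0.0, hi=20.0, tol=1e-8):
        """The SRR is the discount rate at which NPV = 0; find it by
        bisection (assumes NPV is positive at lo and negative at hi)."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if npv(mid, benefits, initial, appropriations) > 0.0:
                lo = mid       # still profitable: the rate can go higher
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Illustrative numbers only (millions of dollars over five years):
    srr = social_rate_of_return([10, 40, 90, 150, 200], 1.0, [2, 2, 2, 2, 2])
    print(f"{srr:.1%}")
    ```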

  14. Technical support to the Nuclear Regulatory Commission for the boiling water reactor blowdown heat transfer program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rice, R.E.

    Results are presented of studies conducted by Aerojet Nuclear Company (ANC) in FY 1975 to support the Nuclear Regulatory Commission (NRC) on the boiling water reactor blowdown heat transfer (BWR-BDHT) program. The support provided by ANC is that of an independent assessor of the program to ensure that the data obtained are adequate for verification of analytical models used for predicting reactor response to a postulated loss-of-coolant accident. The support included reviews of program plans, objectives, measurements, and actual data. Additional activity included analysis of experimental system performance and evaluation of the RELAP4 computer code as applied to the experiments.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKinney, M.J.; Jenkins, S.

    Project JEM (Jarvis Enhancement of Males) is a pre-college program directed toward stimulating disadvantaged, talented African American males in grades four, five, and six to attend college and major in mathematics, science, computer science, or related technical areas needed by the US Department of Energy. Twenty young African American male students were recruited from Gladewater Independent School District (ISD), Longview ISD, Hawkins ISD, Tyler ISD, Winona ISD, and Big Sandy ISD. Students enrolled in the program range from ages 10 to 13 and are in grades four, five, and six. Student participants in the 1997 Project JEM Program attended Saturday Academy sessions and a four-week intensive summer residential program. The information here provides a synopsis of the activities which were conducted through each program component.

  16. MEKS: A program for computation of inclusive jet cross sections at hadron colliders

    NASA Astrophysics Data System (ADS)

    Gao, Jun; Liang, Zhihua; Soper, Davison E.; Lai, Hung-Liang; Nadolsky, Pavel M.; Yuan, C.-P.

    2013-06-01

    EKS is a numerical program that predicts differential cross sections for production of single-inclusive hadronic jets and jet pairs at next-to-leading order (NLO) accuracy in a perturbative QCD calculation. We describe MEKS 1.0, an upgraded EKS program with increased numerical precision, suitable for comparisons to the latest experimental data from the Large Hadron Collider and Tevatron. The program integrates the regularized parton-level matrix elements over the kinematical phase space for production of two and three partons using the VEGAS algorithm. It stores the generated weighted events in finely binned two-dimensional histograms for fast offline analysis. A user interface allows one to customize computation of inclusive jet observables. Results of a benchmark comparison of the MEKS program and the commonly used FastNLO program are also documented. Program summary: Program title: MEKS 1.0. Catalogue identifier: AEOX_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOX_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 9234. No. of bytes in distributed program, including test data, etc.: 51997. Distribution format: tar.gz. Programming language: Fortran (main program), C (CUBA library and analysis program). Computer: All. Operating system: Any UNIX-like system. RAM: ~300 MB. Classification: 11.1. External routines: LHAPDF (https://lhapdf.hepforge.org/). Nature of problem: Computation of differential cross sections for inclusive production of single hadronic jets and jet pairs at next-to-leading order accuracy in perturbative quantum chromodynamics. Solution method: Upon subtraction of infrared singularities, the hard-scattering matrix elements are integrated over available phase space using an optimized VEGAS algorithm. Weighted events are generated and filled into a finely binned two-dimensional histogram, from which the final cross sections with typical experimental binning and cuts are computed by an independent analysis program. Monte Carlo sampling of event weights is tuned automatically to get better efficiency. Running time: Depends on details of the calculation and sought numerical accuracy. See benchmark performance in Section 4. The tests provided take approximately 27 min for the jetbin run and a few seconds for jetana.
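
    The event-generation strategy can be sketched in miniature (plain uniform Monte Carlo standing in for VEGAS importance sampling, with an invented toy integrand): weighted events are binned into a fine 2-D histogram whose sum estimates the integral and which can be re-binned offline with arbitrary cuts.

    ```python
    import numpy as np

    def mc_fill_histogram(integrand, n_events, pt_edges, y_edges, rng):
        """Sample phase-space points uniformly, weight each by the integrand,
        and accumulate the weights in a finely binned (pT, y) histogram for
        fast offline re-binning (VEGAS adaptivity omitted for brevity)."""
        pt = rng.uniform(pt_edges[0], pt_edges[-1], n_events)
        y = rng.uniform(y_edges[0], y_edges[-1], n_events)
        volume = (pt_edges[-1] - pt_edges[0]) * (y_edges[-1] - y_edges[0])
        w = integrand(pt, y) * volume / n_events     # per-event weight
        hist, _, _ = np.histogram2d(pt, y, bins=[pt_edges, y_edges], weights=w)
        return hist                                   # hist.sum() estimates the integral

    rng = np.random.default_rng(42)
    toy = lambda pt, y: pt ** -4 * np.exp(-0.5 * y ** 2)  # falling toy spectrum
    h = mc_fill_histogram(toy, 1_000_000,
                          np.linspace(50, 500, 226), np.linspace(-2, 2, 41), rng)
    print(h.sum())
    ```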

  17. ePORT, NASA's Computer Database Program for System Safety Risk Management Oversight (Electronic Project Online Risk Tool)

    NASA Technical Reports Server (NTRS)

    Johnson, Paul W.

    2008-01-01

    ePORT (electronic Project Online Risk Tool) provides a systematic approach to using an electronic database program to manage a program/project risk management process. This presentation will briefly cover standard risk management procedures, then thoroughly cover NASA's risk management tool called ePORT. This electronic Project Online Risk Tool (ePORT) is a web-based risk management program that provides a common framework to capture and manage risks, independent of a program's/project's size and budget. It thoroughly covers the risk management paradigm, providing standardized evaluation criteria for common management reporting. ePORT improves Product Line, Center, and Corporate Management insight, simplifies program/project manager reporting, and maintains an archive of data for historical reference.

  18. Parallel computation for biological sequence comparison: comparing a portable model to the native model for the Intel Hypercube.

    PubMed Central

    Nadkarni, P. M.; Miller, P. L.

    1991-01-01

    A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations. PMID:1807632

  19. The Influence of Experiencing Success in Math on Math Anxiety, Perceived Math Competence, and Math Performance

    ERIC Educational Resources Information Center

    Jansen, Brenda R. J.; Louwerse, Jolien; Straatemeier, Marthe; Van der Ven, Sanne H. G.; Klinkenberg, Sharon; Van der Maas, Han L. J.

    2013-01-01

    It was investigated whether children would experience less math anxiety and feel more competent when they, independent of ability level, experienced high success rates in math. Comparable success rates were achieved by adapting problem difficulty to individuals' ability levels with a computer-adaptive program. A total of 207 children (grades 3-6)…
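
    Such computer-adaptive practice is commonly implemented with Elo/Rasch-style ratings (a hypothetical sketch, not the study's actual software): the child's running rating is updated after each answer, and items are served a fixed offset easier so the expected success rate stays near a target such as 75 percent.

    ```python
    import math, random

    def p_correct(theta, beta):
        """Rasch success probability of a child (theta) on an item (beta)."""
        return 1.0 / (1.0 + math.exp(beta - theta))

    true_theta, rating = 1.5, 0.0      # child's real level vs. running estimate
    offset = math.log(0.75 / 0.25)     # serve items ~1.1 logits easier -> ~75% success
    random.seed(0)
    for _ in range(2000):
        beta = rating - offset          # adapt item difficulty to the estimate
        correct = random.random() < p_correct(true_theta, beta)
        # Nudge the rating toward whatever level explains the observed outcome:
        rating += 0.05 * ((1 if correct else 0) - p_correct(rating, beta))
    print(round(rating, 2))   # the estimate converges near the child's true level
    ```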

  20. Computer Assisted Reading in German as a Foreign Language, Developing and Testing an NLP-Based Application

    ERIC Educational Resources Information Center

    Wood, Peter

    2011-01-01

    "QuickAssist," the program presented in this paper, uses natural language processing (NLP) technologies. It places a range of NLP tools at the disposal of learners, intended to enable them to independently read and comprehend a German text of their choice while they extend their vocabulary, learn about different uses of particular words,…

  1. From Turing machines to computer viruses.

    PubMed

    Marion, Jean-Yves

    2012-07-28

    Self-replication is one of the fundamental aspects of computing, where a program or a system may duplicate, evolve and mutate. Our point of view is that Kleene's (second) recursion theorem is essential to understand self-replication mechanisms. An interesting example of self-replicating codes is given by computer viruses. This was initially explained in the seminal works of Cohen and of Adleman in the 1980s. In fact, the different variants of recursion theorems provide and explain constructions of self-replicating codes and, as a result, of various classes of malware. None of the results are new from the point of view of computability theory. We now propose a self-modifying register machine as a model of computation in which we can effectively deal with self-reproduction and in which new offspring can be activated as independent organisms.
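
    The constructive content of Kleene's second recursion theorem, that a program can obtain and use its own description, is conveniently illustrated by a quine: a program whose output is its own source text. A minimal Python instance:

    ```python
    # A quine: running this program prints exactly its own source code,
    # the standard constructive witness of the second recursion theorem.
    code = 'code = %r\nprint(code %% code)'
    print(code % code)
    ```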

  2. An intelligent CNC machine control system architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, D.J.; Loucks, C.S.

    1996-10-01

    Intelligent, agile manufacturing relies on automated programming of digitally controlled processes. Currently, processes such as Computer Numerically Controlled (CNC) machining are difficult to automate because of highly restrictive controllers and poor software environments. It is also difficult to utilize sensors and process models for adaptive control, or to integrate machining processes with other tasks within a factory floor setting. As part of a Laboratory Directed Research and Development (LDRD) program, a CNC machine control system architecture based on object-oriented design and graphical programming has been developed to address some of these problems and to demonstrate automated agile machining applications using platform-independent software.

  3. UW VLSI chip tester

    NASA Astrophysics Data System (ADS)

    McKenzie, Neil

    1989-12-01

    We present a design for a low-cost, functional VLSI chip tester. It is based on the Apple Macintosh II personal computer. It tests chips that have up to 128 pins. All pin drivers of the tester are bidirectional; each pin is programmed independently as an input or an output. The tester can test both static and dynamic chips. Rudimentary speed testing is provided. Chips are tested by executing C programs written by the user. A software library is provided for program development. Tests run under both the Mac Operating System and A/UX. The design is implemented using Xilinx Logic Cell Arrays. Price/performance tradeoffs are discussed.

  4. Methodology, status, and plans for development and assessment of the RELAP5 code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, G.W.; Riemke, R.A.

    1997-07-01

    RELAP/MOD3 is a computer code used for the simulation of transients and accidents in light-water nuclear power plants. The objective of the program to develop and maintain RELAP5 was, and is, to provide the U.S. Nuclear Regulatory Commission with an independent tool for assessing reactor safety. This paper describes code requirements, models, solution scheme, language and structure, user interface, validation, and documentation. The paper also describes the current and near-term development program and provides an assessment of the code's strengths and limitations.

  5. A Linguistic Model in Component Oriented Programming

    NASA Astrophysics Data System (ADS)

    Crăciunean, Daniel Cristian; Crăciunean, Vasile

    2016-12-01

    It is a fact that component-oriented programming, well organized, can bring a large increase in efficiency in the development of large software systems. This paper proposes a model for building software systems by assembling components that can operate independently of each other. The model is based on a computing environment that runs parallel and distributed applications. This paper introduces the concepts of the abstract aggregation scheme and the aggregation application. Basically, an aggregation application is an application that is obtained by combining corresponding components. In our model, an aggregation application is a word in a language.

  6. Computer Program Development Specification for Ada Integrated Environment: KAPSE (Kernel Ada Programming Support Environment)/Database, Type B5, B5-AIE(1).KAPSE(1).

    DTIC Science & Technology

    1982-11-12

    [Figure 3: KAPSE/Host interface — file I/O, program invocation, and other access and control services layered over the host operating system, peripherals, and networks.] ...3.2.4.3.8.5 Transitory Windows: The TRANSITORY flag is used to prevent permanent dependence on temporary windows created simply for focusing on a part of the...KAPSE/Tool interfaces in terms of these low-level host-independent interfaces. In addition, the KAPSE/Host interface packages prevent the application

  7. The Distributed Diagonal Force Decomposition Method for Parallelizing Molecular Dynamics Simulations

    PubMed Central

    Boršnik, Urban; Miller, Benjamin T.; Brooks, Bernard R.; Janežič, Dušanka

    2011-01-01

    Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007
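
    The flavor of force decomposition can be sketched as a tiling of the pairwise force matrix (a schematic only; the distributed-diagonal method's specific handling of diagonal blocks and its dynamic load balancing are not reproduced here):

    ```python
    import itertools

    def force_matrix_blocks(n_atoms, n_procs, block):
        """Tile the upper triangle of the pairwise force matrix, diagonal
        blocks included, and deal the tiles round-robin to processors so
        each computes an independent subset of the interactions."""
        tiles = [(i, j)
                 for i, j in itertools.product(range(0, n_atoms, block), repeat=2)
                 if j >= i]                    # upper triangle plus diagonal
        assignment = {p: [] for p in range(n_procs)}
        for k, tile in enumerate(tiles):
            assignment[k % n_procs].append(tile)
        return assignment

    work = force_matrix_blocks(n_atoms=1000, n_procs=8, block=100)
    print({p: len(t) for p, t in work.items()})   # per-processor tile counts
    ```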

  8. Automatic differentiation evaluated as a tool for rotorcraft design and optimization

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Young, Katherine C.

    1995-01-01

    This paper investigates the use of automatic differentiation (AD) as a means for generating sensitivity analyses in rotorcraft design and optimization. This technique transforms an existing computer program into a new program that performs sensitivity analysis in addition to the original analysis. Where the original FORTRAN program calculates a set of dependent (output) variables from a set of independent (input) variables, the new FORTRAN program also calculates the partial derivatives of the dependent variables with respect to the independent variables. The AD technique is a systematic implementation of the chain rule of differentiation; this method produces derivatives to machine accuracy at a cost that is comparable with that of finite-differencing methods. For this study, an analysis code that consists of the Langley-developed hover analysis HOVT, the comprehensive rotor analysis CAMRAD/JA, and associated preprocessors is processed through the AD preprocessor ADIFOR 2.0. The resulting derivatives are compared with derivatives obtained from finite-differencing techniques. The derivatives obtained with ADIFOR 2.0 are exact within machine accuracy and do not depend on the selection of step size, unlike the derivatives obtained with finite-differencing techniques.
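
    Forward-mode AD can be demonstrated in a few lines with dual numbers (a Python illustration of the chain-rule mechanism; ADIFOR itself is a FORTRAN source transformer): each operation propagates a derivative alongside its value, so the result is exact to machine precision with no step-size choice.

    ```python
    class Dual:
        """Forward-mode automatic differentiation with dual numbers: carrying
        (value, derivative) through every operation is exactly the chain rule."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,
                        self.dot * o.val + self.val * o.dot)   # product rule
        __rmul__ = __mul__

    def f(x):                   # any composition of + and * works unchanged
        return 3 * x * x + 2 * x + 1

    x = Dual(2.0, 1.0)          # seed dx/dx = 1
    y = f(x)
    print(y.val, y.dot)         # 17.0 and f'(2) = 6*2 + 2 = 14.0
    ```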

  9. Gstat: a program for geostatistical modelling, prediction and simulation

    NASA Astrophysics Data System (ADS)

    Pebesma, Edzer J.; Wesseling, Cees G.

    1998-01-01

    Gstat is a computer program for variogram modelling, and geostatistical prediction and simulation. It provides a generic implementation of the multivariable linear model with trends modelled as a linear function of coordinate polynomials or of user-defined base functions, and independent or dependent, geostatistically modelled, residuals. Simulation in gstat comprises conditional or unconditional (multi-) Gaussian sequential simulation of point values or block averages, or (multi-) indicator sequential simulation. Besides many of the popular options found in other geostatistical software packages, gstat offers the unique combination of (i) an interactive user interface for modelling variograms and generalized covariances (residual variograms), that uses the device-independent plotting program gnuplot for graphical display, (ii) support for several ASCII and binary data and map file formats for input and output, (iii) a concise, intuitive and flexible command language, (iv) user customization of program defaults, (v) no built-in limits, and (vi) free, portable ANSI-C source code. This paper describes the class of problems gstat can solve, and addresses aspects of efficiency and implementation, managing geostatistical projects, and relevant technical details.
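
    At the core of variogram modelling is the classical empirical semivariogram, gamma(h) = (1/2N(h)) * sum of (z_i - z_j)^2 over point pairs separated by lag h. A minimal NumPy sketch (illustrative, not gstat's implementation):

    ```python
    import numpy as np

    def empirical_semivariogram(coords, values, bin_edges):
        """Classical (Matheron) estimator: average half the squared value
        difference over all point pairs whose separation falls in each lag bin."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        sq = 0.5 * (values[:, None] - values[None, :]) ** 2
        iu = np.triu_indices(len(values), k=1)     # count each pair once
        dist, gam = d[iu], sq[iu]
        out = []
        for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
            mask = (dist >= lo) & (dist < hi)
            out.append(gam[mask].mean() if mask.any() else np.nan)
        return np.array(out)

    rng = np.random.default_rng(3)
    pts = rng.uniform(0, 100, (200, 2))
    z = np.sin(pts[:, 0] / 15.0) + rng.normal(0, 0.1, 200)  # toy correlated field
    print(empirical_semivariogram(pts, z, np.arange(0, 60, 10)))
    ```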

  10. Programming with models: modularity and abstraction provide powerful capabilities for systems biology

    PubMed Central

    Mallavarapu, Aneil; Thomson, Matthew; Ullian, Benjamin; Gunawardena, Jeremy

    2008-01-01

    Mathematical models are increasingly used to understand how phenotypes emerge from systems of molecular interactions. However, their current construction as monolithic sets of equations presents a fundamental barrier to progress. Overcoming this requires modularity, enabling sub-systems to be specified independently and combined incrementally, and abstraction, enabling generic properties of biological processes to be specified independently of specific instances. These, in turn, require models to be represented as programs rather than as datatypes. Programmable modularity and abstraction enables libraries of modules to be created, which can be instantiated and reused repeatedly in different contexts with different components. We have developed a computational infrastructure that accomplishes this. We show here why such capabilities are needed, what is required to implement them and what can be accomplished with them that could not be done previously. PMID:18647734
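
    A toy sketch of the models-as-programs idea (hypothetical code, not the authors' infrastructure): a module is a parameterized function that can be instantiated repeatedly, and a model is assembled by composing instances rather than by writing one monolithic set of equations.

    ```python
    # A reusable module is a factory returning a rate function; a model is
    # assembled by instantiating modules with different concrete components.
    def michaelis_menten(vmax, km):
        """Generic enzymatic-rate module, instantiable for any substrate."""
        return lambda s: vmax * s / (km + s)

    def make_pathway(*stages):
        """Compose independently specified stages into one model; here the
        pathway flux is crudely limited by its slowest instantiated stage."""
        return lambda s: min(stage(s) for stage in stages)

    # The same abstract module reused with different parameters:
    pathway = make_pathway(michaelis_menten(10.0, 2.0),
                           michaelis_menten(4.0, 0.5))
    print(pathway(1.0))   # flux at substrate concentration 1.0
    ```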

  11. Programming with models: modularity and abstraction provide powerful capabilities for systems biology.

    PubMed

    Mallavarapu, Aneil; Thomson, Matthew; Ullian, Benjamin; Gunawardena, Jeremy

    2009-03-06

    Mathematical models are increasingly used to understand how phenotypes emerge from systems of molecular interactions. However, their current construction as monolithic sets of equations presents a fundamental barrier to progress. Overcoming this requires modularity, enabling sub-systems to be specified independently and combined incrementally, and abstraction, enabling generic properties of biological processes to be specified independently of specific instances. These, in turn, require models to be represented as programs rather than as datatypes. Programmable modularity and abstraction enable libraries of modules to be created, which can be instantiated and reused repeatedly in different contexts with different components. We have developed a computational infrastructure that accomplishes this. We show here why such capabilities are needed, what is required to implement them and what can be accomplished with them that could not be done previously.

  12. Fast scaffolding with small independent mixed integer programs

    PubMed Central

    Salmela, Leena; Mäkinen, Veli; Välimäki, Niko; Ylinen, Johannes; Ukkonen, Esko

    2011-01-01

    Motivation: Assembling genomes from short read data has become increasingly popular, but the problem remains computationally challenging, especially for larger genomes. We study the scaffolding phase of sequence assembly, where preassembled contigs are ordered based on mate pair data. Results: We present MIP Scaffolder, which divides the scaffolding problem into smaller subproblems and solves these with mixed integer programming. The scaffolding problem can be represented as a graph, and the biconnected components of this graph can be solved independently. We present a technique for restricting the size of these subproblems so that they can be solved accurately with mixed integer programming. We compare MIP Scaffolder to two state-of-the-art methods, SOPRA and SSPACE. MIP Scaffolder is fast and produces scaffolds that are as good as or better than those of its competitors on large genomes. Availability: The source code of MIP Scaffolder is freely available at http://www.cs.helsinki.fi/u/lmsalmel/mip-scaffolder/. Contact: leena.salmela@cs.helsinki.fi PMID:21998153
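
    The decomposition MIP Scaffolder exploits, splitting the scaffolding graph at articulation points so that each biconnected component can be optimized independently, can be sketched as follows (a toy Python illustration using networkx, not the MIP Scaffolder code itself; `solve_subproblem` is a hypothetical placeholder for the mixed integer program):

    ```python
    import networkx as nx

    # Toy scaffolding graph: nodes are contigs, edges are mate-pair links.
    G = nx.Graph()
    G.add_edges_from([("c1", "c2"), ("c2", "c3"), ("c3", "c1"),   # component 1
                      ("c3", "c4"),                               # c3/c4 bridge
                      ("c4", "c5"), ("c5", "c6"), ("c6", "c4")])  # component 2

    def solve_subproblem(nodes):
        """Placeholder for the mixed integer program run on one component."""
        return sorted(nodes)  # a real solver would order and orient the contigs

    # Biconnected components share only articulation points, so each small
    # subproblem can be solved exactly and the solutions merged afterwards.
    for component in nx.biconnected_components(G):
        print(solve_subproblem(component))
    ```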

  13. COSMIC monthly progress report

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Activities of the Computer Software Management and Information Center (COSMIC) are summarized for the month of May 1994. Tables showing the current inventory of programs available from COSMIC are presented and program processing and evaluation activities are summarized. Nine articles were prepared for publication in the NASA Tech Brief Journal. These articles (included in this report) describe the following software items: (1) WFI - Windowing System for Test and Simulation; (2) HZETRN - A Free Space Radiation Transport and Shielding Program; (3) COMGEN-BEM - Composite Model Generation-Boundary Element Method; (4) IDDS - Interactive Data Display System; (5) CET93/PC - Chemical Equilibrium with Transport Properties, 1993; (6) SDVIC - Sub-pixel Digital Video Image Correlation; (7) TRASYS - Thermal Radiation Analyzer System (HP9000 Series 700/800 Version without NASADIG); (8) NASADIG - NASA Device Independent Graphics Library, Version 6.0 (VAX VMS Version); and (9) NASADIG - NASA Device Independent Graphics Library, Version 6.0 (UNIX Version). Activities in the areas of marketing, customer service, benefits identification, maintenance and support, and dissemination are also described along with a budget summary.

  14. Implementation and performance of FDPS: a framework for developing parallel particle simulation codes

    NASA Astrophysics Data System (ADS)

    Iwasawa, Masaki; Tanikawa, Ataru; Hosono, Natsuki; Nitadori, Keigo; Muranushi, Takayuki; Makino, Junichiro

    2016-08-01

    We present the basic idea, implementation, measured performance, and performance model of FDPS (Framework for Developing Particle Simulators). FDPS is an application-development framework which helps researchers to develop simulation programs using particle methods for large-scale distributed-memory parallel supercomputers. A particle-based simulation program for distributed-memory parallel computers needs to perform domain decomposition, exchange of particles which are not in the domain of each computing node, and gathering of the particle information in other nodes which are necessary for interaction calculation. Also, even if distributed-memory parallel computers are not used, in order to reduce the amount of computation, algorithms such as the Barnes-Hut tree algorithm or the Fast Multipole Method should be used in the case of long-range interactions. For short-range interactions, some methods to limit the calculation to neighbor particles are required. FDPS provides all of these functions which are necessary for efficient parallel execution of particle-based simulations as "templates," which are independent of the actual data structure of particles and the functional form of the particle-particle interaction. By using FDPS, researchers can write their programs with the amount of work necessary to write a simple, sequential and unoptimized program of O(N²) calculation cost, and yet the program, once compiled with FDPS, will run efficiently on large-scale parallel supercomputers. A simple gravitational N-body program can be written in around 120 lines. We report the actual performance of these programs and the performance model. The weak scaling performance is very good, and almost linear speed-up was obtained for up to the full system of the K computer. The minimum calculation time per timestep is in the range of 30 ms (N = 10⁷) to 300 ms (N = 10⁹). These are currently limited by the time for the calculation of the domain decomposition and communication necessary for the interaction calculation. We discuss how we can overcome these bottlenecks.
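
    The user-side simplicity FDPS targets, writing only the particle data and the pairwise interaction while the framework handles decomposition and communication, can be illustrated with the O(N²) direct-sum gravity kernel such a user would supply. This is a schematic Python analogue under that assumption; FDPS itself is a C++ template library:

    ```python
    import numpy as np

    def gravity_direct(pos, mass, eps=1e-3):
        """O(N^2) direct-sum gravitational acceleration: the simple, sequential
        kernel a user writes; tree construction, domain decomposition and
        inter-node communication are the framework's job."""
        acc = np.zeros_like(pos)
        for i in range(len(pos)):
            dr = pos - pos[i]                       # vectors to all particles
            r2 = (dr ** 2).sum(axis=1) + eps ** 2   # softened squared distance
            r2[i] = np.inf                          # skip self-interaction
            acc[i] = (mass[:, None] * dr / r2[:, None] ** 1.5).sum(axis=0)
        return acc

    rng = np.random.default_rng(0)
    pos = rng.standard_normal((256, 3))
    mass = np.full(256, 1.0 / 256)
    print(gravity_direct(pos, mass)[0])
    ```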

  15. Efficient generation of connectivity in neuronal networks from simulator-independent descriptions

    PubMed Central

    Djurfeldt, Mikael; Davison, Andrew P.; Eppler, Jochen M.

    2014-01-01

    Simulator-independent descriptions of connectivity in neuronal networks promise greater ease of model sharing, improved reproducibility of simulation results, and reduced programming effort for computational neuroscientists. However, until now, enabling the use of such descriptions in a given simulator in a computationally efficient way has entailed considerable work for simulator developers, which must be repeated for each new connectivity-generating library that is developed. We have developed a generic connection generator interface that provides a standard way to connect a connectivity-generating library to a simulator, such that one library can easily be replaced by another, according to the modeler's needs. We have used the connection generator interface to connect C++ and Python implementations of the previously described connection-set algebra to the NEST simulator. We also demonstrate how the simulator-independent modeling framework PyNN can transparently take advantage of this, passing a connection description through to the simulator layer for rapid processing in C++ where a simulator supports the connection generator interface, and falling back to slower iteration in Python otherwise. A set of benchmarks demonstrates the good performance of the interface. PMID:24795620
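
    The key pattern, a single interface through which any connectivity-generating library can feed any simulator, can be sketched generically. The Python below is a hypothetical illustration of the pattern only, not the actual interface API; all class and function names here are invented for the example:

    ```python
    from typing import Iterator, Tuple

    class ConnectionGenerator:
        """Minimal stand-in for a connection generator interface: any library
        that can yield (source, target, weight) triples can plug in here."""
        def connections(self) -> Iterator[Tuple[int, int, float]]:
            raise NotImplementedError

    class AllToAll(ConnectionGenerator):
        def __init__(self, n_pre, n_post, weight=0.1):
            self.n_pre, self.n_post, self.weight = n_pre, n_post, weight

        def connections(self):
            for i in range(self.n_pre):
                for j in range(self.n_post):
                    yield i, j, self.weight

    def build_network(simulator_connect, generator: ConnectionGenerator):
        """Simulator-independent driver: the simulator supplies only its
        low-level connect call; the generator supplies the connectivity,
        so either side can be swapped without touching the other."""
        for src, tgt, w in generator.connections():
            simulator_connect(src, tgt, w)

    # A stand-in for the simulator's native connect routine.
    build_network(lambda s, t, w: print(f"connect {s} -> {t} (w={w})"),
                  AllToAll(2, 3))
    ```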

  16. Using a computer-based simulation with an artificial intelligence component and discovery learning to formulate training needs for a new technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hillis, D.R.

    A computer-based simulation with an artificial intelligence component and discovery learning was investigated as a method to formulate training needs for new or unfamiliar technologies. Specifically, the study examined whether this simulation method would provide for the recognition of applications and knowledge/skills which would be the basis for establishing training needs. The study also examined the effect of field-dependence/independence on recognition of applications and knowledge/skills. A pretest-posttest control-group experimental design involving fifty-eight college students from an industrial technology program was used. The study concluded that the simulation was effective in developing recognition of applications and knowledge/skills for a new or unfamiliar technology, and that the simulation's effectiveness in providing this recognition was not limited by an individual's field-dependence/independence.

  17. GCLAS: a graphical constituent loading analysis system

    USGS Publications Warehouse

    McKallip, T.E.; Koltun, G.F.; Gray, J.R.; Glysson, G.D.

    2001-01-01

    The U.S. Geological Survey has developed a program called GCLAS (Graphical Constituent Loading Analysis System) to aid in the computation of daily constituent loads transported in stream flow. Because most water-quality data are collected relatively infrequently, computation of daily constituent loads is moderately to highly dependent on human interpretation of the relation between stream hydraulics and constituent transport. GCLAS provides a visual environment for evaluating the relation between hydraulic and other covariate time series and the constituent chemograph. GCLAS replaces the computer program Sedcalc, which is the most recent USGS-sanctioned tool for constructing sediment chemographs and computing suspended-sediment loads. Written in a portable language, GCLAS has an interactive graphical interface that permits easy entry of estimated values and provides new tools to aid in making those estimates. The use of a portable language for program development imparts a degree of computer-platform independence that was difficult to obtain in the past, making implementation more straightforward within the USGS's diverse computing environment. Some of the improvements introduced in GCLAS include (1) the ability to directly handle periods of zero or reverse flow, (2) the ability to analyze and apply coefficient adjustments to concentrations as a function of time, streamflow, or both, (3) the ability to compute discharges of constituents other than suspended sediment, (4) the ability to easily view data related to the chemograph at different levels of detail, and (5) the ability to readily display covariate time series data to provide enhanced visual cues for drawing the constituent chemograph.

  18. Alloy Design Workbench-Surface Modeling Package Developed

    NASA Technical Reports Server (NTRS)

    Abel, Phillip B.; Noebe, Ronald D.; Bozzolo, Guillermo H.; Good, Brian S.; Daugherty, Elaine S.

    2003-01-01

    NASA Glenn Research Center's Computational Materials Group has integrated a graphical user interface with in-house-developed surface modeling capabilities, with the goal of using computationally efficient atomistic simulations to aid the development of advanced aerospace materials, through the modeling of alloy surfaces, surface alloys, and segregation. The software is also ideal for modeling nanomaterials, since surface and interfacial effects can dominate material behavior and properties at this level. Through the combination of an accurate atomistic surface modeling methodology and an efficient computational engine, it is now possible to directly model these types of surface phenomena and metallic nanostructures without a supercomputer. Fulfilling a High Operating Temperature Propulsion Components (HOTPC) project level-I milestone, a graphical user interface was created for a suite of quantum approximate atomistic materials modeling Fortran programs developed at Glenn. The resulting "Alloy Design Workbench-Surface Modeling Package" (ADW-SMP) is the combination of proven quantum approximate Bozzolo-Ferrante-Smith (BFS) algorithms (refs. 1 and 2) with a productivity-enhancing graphical front end. Written in the portable, platform-independent Java programming language, the graphical user interface calls on extensively tested Fortran programs running in the background for the detailed computational tasks. Designed to run on desktop computers, the package has been deployed on PC, Mac, and SGI computer systems. The graphical user interface integrates two modes of computational materials exploration. One mode uses Monte Carlo simulations to determine lowest energy equilibrium configurations. The second approach is an interactive "what if" comparison of atomic configuration energies, designed to provide real-time insight into the underlying drivers of alloying processes.
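
    The first exploration mode, Monte Carlo search for lowest-energy equilibrium configurations, follows the standard Metropolis pattern sketched below. This is a generic Python illustration with a toy pair-energy function; the actual package evaluates configuration energies with the quantum-approximate BFS method:

    ```python
    import math, random

    def metropolis(sites, energy, steps=10_000, kT=0.05):
        """Generic Metropolis search: propose a local change, accept it if it
        lowers the energy, or with Boltzmann probability otherwise."""
        E = energy(sites)
        for _ in range(steps):
            i = random.randrange(len(sites))
            trial = sites.copy()
            trial[i] = "A" if trial[i] == "B" else "B"   # swap species at one site
            dE = energy(trial) - E
            if dE <= 0 or random.random() < math.exp(-dE / kT):
                sites, E = trial, E + dE
        return sites, E

    # Toy 1D alloy: unlike neighbors lower the energy, favoring an ordered chain.
    energy = lambda s: sum(-1.0 if a != b else 1.0 for a, b in zip(s, s[1:]))
    random.seed(1)
    config, E = metropolis(["A"] * 8 + ["B"] * 8, energy)
    print("".join(config), E)
    ```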

  19. A program for computing the prediction probability and the related receiver operating characteristic graph.

    PubMed

    Jordan, Denis; Steiner, Marcel; Kochs, Eberhard F; Schneider, Gerhard

    2010-12-01

    Prediction probability (P(K)) and the area under the receiver operating characteristic curve (AUC) are statistical measures for assessing the performance of anesthetic depth indicators, i.e., for quantifying the correlation between observed anesthetic depth and the corresponding values of a monitor or indicator. In contrast to many other statistical tests, they offer several advantages. First, P(K) and AUC are independent of scale units and of assumptions about underlying distributions. Second, the calculation can be performed without any knowledge of particular indicator threshold values, which makes the test more independent of the specific test data. Third, recent approaches using resampling methods allow a reliable comparison of the P(K) or AUC of different indicators of anesthetic depth. Furthermore, both tests allow simple interpretation: results between 0 and 1 reflect the probability that an indicator correctly separates the observed levels of anesthesia. For these reasons, P(K) and AUC have become popular in medical decision making. P(K) is intended for polytomous patient states (i.e., >2 anesthetic levels) and can be considered a generalization of the AUC, which was introduced to assess a predictor of dichotomous classes (e.g., consciousness and unconsciousness in anesthesia). Dichotomous paradigms yield equal values of the P(K) and AUC test statistics. In the present investigation, we introduce a user-friendly computer program for computing P(K) and estimating reliable bootstrap confidence intervals. It is designed for multiple comparisons of the performance of depth-of-anesthesia indicators. Additionally, for dichotomous classes, the program plots the receiver operating characteristic graph, complementing the information obtained from P(K) or AUC. In clinical investigations, both measures are applied for indicator assessment, where ambiguous usage and interpretation may result. Therefore, a summary of the concepts of P(K) and AUC, including a brief and easily understandable proof of their equality, is presented in the text. The exposition introduces readers to the algorithms of the provided computer program and is intended to make standardized performance tests of depth-of-anesthesia indicators available to medical researchers.
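
    For the dichotomous case, where P(K) and AUC coincide, both reduce to the probability that the indicator ranks a randomly chosen pair of states correctly, with ties counted half. A compact sketch of this computation with a percentile-bootstrap confidence interval (an illustrative Python version, not the authors' program) is:

    ```python
    import numpy as np

    def auc(x_neg, x_pos):
        """P(indicator ranks a positive case above a negative one), ties
        counted 1/2; equals P(K) for two classes."""
        x_neg, x_pos = np.asarray(x_neg), np.asarray(x_pos)
        diff = x_pos[:, None] - x_neg[None, :]      # all pairwise comparisons
        return (diff > 0).mean() + 0.5 * (diff == 0).mean()

    def bootstrap_ci(x_neg, x_pos, n_boot=2000, alpha=0.05, seed=0):
        """Percentile bootstrap CI: resample each class independently."""
        rng = np.random.default_rng(seed)
        stats = [auc(rng.choice(x_neg, len(x_neg)),
                     rng.choice(x_pos, len(x_pos))) for _ in range(n_boot)]
        return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

    conscious = [42, 55, 60, 48, 71]      # indicator values, awake
    unconscious = [20, 35, 30, 41, 25]    # indicator values, anesthetized
    print(auc(unconscious, conscious), bootstrap_ci(unconscious, conscious))
    ```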

  20. Building flexible real-time systems using the Flex language

    NASA Technical Reports Server (NTRS)

    Kenny, Kevin B.; Lin, Kwei-Jay

    1991-01-01

    The design and implementation of a real-time programming language called Flex, which is a derivative of C++, are presented. It is shown how different types of timing requirements might be expressed and enforced in Flex, how they might be fulfilled in a flexible way using different program models, and how the programming environment can help in making binding and scheduling decisions. The timing constraint primitives in Flex are easy to use yet powerful enough to define both independent and relative timing constraints. Program models such as imprecise computation and performance polymorphism support flexible real-time execution. In addition, programmers can use a performance measurement tool that produces statistically correct timing models to predict the expected execution time of a program and to help make binding decisions. A real-time programming environment is also presented.

  1. [Activities of Research Institute for Advanced Computer Science

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2001-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center, Moffett Field, California. RIACS research focuses on the three cornerstones of IT research necessary to meet the future challenges of NASA missions: 1. Automated Reasoning for Autonomous Systems: Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. 2. Human-Centered Computing: Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. 3. High Performance Computing and Networking: Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to analysis of large scientific datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply IT research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, visiting scientist programs and student summer programs, designed to encourage and facilitate collaboration between the university and NASA IT research communities.

  2. Algorithm-Based Fault Tolerance for Numerical Subroutines

    NASA Technical Reports Server (NTRS)

    Turmon, Michael; Granat, Robert; Lou, John

    2007-01-01

    A software library implements a new methodology of detecting faults in numerical subroutines, thus enabling application programs that contain the subroutines to recover transparently from single-event upsets. The software library in question is fault-detecting middleware that is wrapped around the numerical subroutines. Conventional serial versions (based on LAPACK and FFTW) and a parallel version (based on ScaLAPACK) exist. The source code of the application program that contains the numerical subroutines is not modified, and the middleware is transparent to the user. The methodology used is a type of algorithm-based fault tolerance (ABFT). In ABFT, a checksum is computed before a computation and compared with the checksum of the computational result; an error is declared if the difference between the checksums exceeds some threshold. Novel normalization methods are used in the checksum comparison to ensure correct fault detection independent of algorithm inputs. In tests of this software reported in the peer-reviewed literature, the library was shown to detect 99.9 percent of significant faults while generating no false alarms.
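
    The checksum idea behind ABFT can be shown for matrix multiplication: the column sums of C = AB must equal the product of A's column-sum vector with B, so one extra vector-matrix product verifies the whole computation. Below is a minimal sketch of this general scheme, not of the library's particular normalization methods:

    ```python
    import numpy as np

    def abft_matmul(A, B, tol=1e-8):
        """Multiply with an algorithm-based fault check: the column sums of
        A @ B must match (column sums of A) @ B up to roundoff."""
        C = A @ B
        expected = A.sum(axis=0) @ B          # checksum computed independently
        if not np.allclose(C.sum(axis=0), expected, rtol=tol):
            raise RuntimeError("ABFT checksum mismatch: possible single-event upset")
        return C

    rng = np.random.default_rng(0)
    A, B = rng.standard_normal((64, 64)), rng.standard_normal((64, 64))
    C = abft_matmul(A, B)                     # passes the check

    C_faulty = A @ B
    C_faulty[3, 7] += 1.0                     # inject a bit-flip-like fault
    assert not np.allclose(C_faulty.sum(axis=0), A.sum(axis=0) @ B)
    ```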

  3. Estimating population diversity with CatchAll

    PubMed Central

    Bunge, John; Woodard, Linda; Böhning, Dankmar; Foster, James A.; Connolly, Sean; Allen, Heather K.

    2012-01-01

    Motivation: The massive data produced by next-generation sequencing require advanced statistical tools. We address estimating the total diversity or species richness in a population. To date, only relatively simple methods have been implemented in available software. There is a need for software employing modern, computationally intensive statistical analyses including error, goodness-of-fit and robustness assessments. Results: We present CatchAll, a fast, easy-to-use, platform-independent program that computes maximum likelihood estimates for finite-mixture models, weighted linear regression-based analyses and coverage-based non-parametric methods, along with outlier diagnostics. Given sample ‘frequency count’ data, CatchAll computes 12 different diversity estimates and applies a model-selection algorithm. CatchAll also derives discounted diversity estimates to adjust for possibly uncertain low-frequency counts. It is accompanied by an Excel-based graphics program. Availability: Free executable downloads for Linux, Windows and Mac OS, with manual and source code, at www.northeastern.edu/catchall. Contact: jab18@cornell.edu PMID:22333246

  4. Long period nodal motion of sun synchronous orbits

    NASA Technical Reports Server (NTRS)

    Duck, K. I.

    1975-01-01

    An approximate model is formulated for assessing the perturbations that significantly affect long-term nodal motion of sun-synchronous orbits. Computer simulations with several independent computer programs consider zonal and tesseral gravitational harmonics, third-body gravitational disturbances induced by the sun and the moon, and atmospheric drag. A pendulum model consisting of even zonal harmonics through order 4 and solar gravity dominated the nodal motion approximation. This pendulum motion results from solar gravity inducing an inclination oscillation which couples into the nodal precession induced by the earth's oblateness. The pendulum model correlated well with simulations and observed flight data.

  5. Attitudes Towards and Limitations to ICT Use in Assisted and Independent Living Communities: Findings from a Specially-Designed Technological Intervention

    PubMed Central

    Berkowsky, Ronald W.; Cotten, Shelia R.; Yost, Elizabeth A.; Winstead, Vicki P.

    2012-01-01

    While much literature has been devoted to theoretical explanations of the learning processes of older adults and to the methods of teaching best utilized in older populations, less has focused on the education of older adults who reside in assisted and independent living communities (AICs), especially with regard to information and communication technology (ICT) education. The purpose of this study is to determine whether participants' attitudes and views towards computers and the Internet are affected as a result of participating in an eight-week training program designed to enhance computer and Internet use among older adults in such communities. Specifically, we examine whether ICT education specially designed for AIC residents results in more positive attitudes towards ICTs and a perceived decrease in factors that may limit or prevent computer and Internet use. We discuss the implications of these results for enhancing the quality of life of older adults in AICs and make recommendations for those seeking to decrease digital inequality among older adults in these communities through their own ICT classes. PMID:24244065

  6. Microgravity sciences application visiting scientist program

    NASA Technical Reports Server (NTRS)

    Glicksman, Martin; Vanalstine, James

    1995-01-01

    Marshall Space Flight Center pursues scientific research in the area of low-gravity effects on materials and processes. To support these Government research responsibilities, a number of supplementary research tasks were accomplished by a group of specialized visiting scientists. They participated in work on contemporary research problems with specific objectives related to current or future space flight experiments, and they defined and established independent research programs based on scientific peer review and on the relevance of the proposed research to the NASA microgravity effort, implementing a portion of the national program. The programs included research in the following areas: protein crystal growth, X-ray crystallography and computer analysis of protein crystal structure, optimization and analysis of protein crystal growth techniques, and design and testing of flight hardware.

  7. A Generalization of the Karush-Kuhn-Tucker Theorem for Approximate Solutions of Mathematical Programming Problems Based on Quadratic Approximation

    NASA Astrophysics Data System (ADS)

    Voloshinov, V. V.

    2018-03-01

    In computations related to mathematical programming problems, one often has to consider approximate, rather than exact, solutions satisfying the constraints of the problem and the optimality criterion with a certain error. For determining stopping rules for iterative procedures, in the stability analysis of solutions with respect to errors in the initial data, etc., a justified characteristic of such solutions that is independent of the numerical method used to obtain them is needed. A necessary δ-optimality condition in the smooth mathematical programming problem that generalizes the Karush-Kuhn-Tucker theorem for the case of approximate solutions is obtained. The Lagrange multipliers corresponding to the approximate solution are determined by solving an approximating quadratic programming problem.
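
    As a point of reference, the exact first-order conditions that the paper relaxes can be written down together with one natural δ-perturbed form. The precise form of the paper's δ-optimality condition may differ; this is a schematic LaTeX sketch only:

    ```latex
    % Smooth mathematical programming problem:
    %   minimize f(x)  subject to  g_i(x) <= 0,  i = 1, ..., m.
    % Exact KKT conditions at a minimizer x^*:
    \begin{align*}
      \nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x^*) &= 0, &
      \lambda_i &\ge 0, &
      \lambda_i\, g_i(x^*) &= 0.
    \end{align*}
    % Schematic delta-relaxation for an approximate solution x_\delta,
    % with a tolerance eps(delta) tied to the solution error:
    \begin{align*}
      \Bigl\| \nabla f(x_\delta) + \sum_{i=1}^{m} \lambda_i \nabla g_i(x_\delta) \Bigr\| &\le \varepsilon(\delta), &
      g_i(x_\delta) &\le \delta, &
      \lambda_i\, g_i(x_\delta) &\ge -\varepsilon(\delta).
    \end{align*}
    ```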

  8. Static Schedulers for Embedded Real-Time Systems

    DTIC Science & Technology

    1989-12-01

    Because of the need for efficient scheduling algorithms in large-scale real-time systems, software engineers put a lot of effort into developing...provide static schedulers for the Embedded Real-Time Systems with a single processor using the Ada programming language. The independent nonpreemptable...support the Computer Aided Rapid Prototyping for Embedded Real Time Systems so that we determine whether the system, as designed, meets the required

  9. A computer program for cyclic plasticity and structural fatigue analysis

    NASA Technical Reports Server (NTRS)

    Kalev, I.

    1980-01-01

    A computerized tool for the analysis of time-independent cyclic plasticity structural response, prediction of life to crack initiation, and prediction of crack growth rate for metallic materials is described. Three analytical items are combined: the finite element method with its associated numerical techniques for idealization of the structural component, cyclic plasticity models for idealization of the material behavior, and damage accumulation criteria for the fatigue failure.

  10. Manual for Getdata Version 3.1: a FORTRAN Utility Program for Time History Data

    NASA Technical Reports Server (NTRS)

    Maine, Richard E.

    1987-01-01

    This report documents version 3.1 of the GetData computer program. GetData is a utility program for manipulating files of time history data, i.e., data giving the values of parameters as functions of time. The most fundamental capability of GetData is extracting selected signals and time segments from an input file and writing the selected data to an output file. Other capabilities include converting file formats, merging data from several input files, time skewing, interpolating to common output times, and generating calculated output signals as functions of the input signals. This report also documents the interface standards for the subroutines used by GetData to read and write the time history files. All interface to the data files is through these subroutines, keeping the main body of GetData independent of the precise details of the file formats. Different file formats can be supported by changes restricted to these subroutines. Other computer programs conforming to the interface standards can call the same subroutines to read and write files in compatible formats.
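
    The design principle described here, routing all file access through a fixed subroutine interface so the main program never sees format details, can be sketched as follows. This is a schematic Python analogue of the FORTRAN interface standard the report describes; the class and method names are invented for the example:

    ```python
    from abc import ABC, abstractmethod

    class TimeHistoryFile(ABC):
        """Fixed interface: the core program calls only these methods, so a new
        file format requires changes only in a subclass, never in the core."""
        @abstractmethod
        def signals(self) -> list: ...
        @abstractmethod
        def read(self, signal: str) -> list: ...   # [(time, value), ...]

    class CsvTimeHistory(TimeHistoryFile):
        def __init__(self, path):
            import csv
            with open(path, newline="") as f:
                rows = list(csv.DictReader(f))
            self._names = [k for k in rows[0] if k != "time"]
            self._data = {n: [(float(r["time"]), float(r[n])) for r in rows]
                          for n in self._names}

        def signals(self):
            return self._names

        def read(self, signal):
            return self._data[signal]

    def extract(infile: TimeHistoryFile, signal, t0, t1):
        """Core capability: select a signal and time segment, format-agnostic."""
        return [(t, v) for t, v in infile.read(signal) if t0 <= t <= t1]
    ```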

  11. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large-scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single design methodology rather than trade-offs, and the incompatibility of large-scale optimization with single-program, single-computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop. Full analysis is then performed only periodically. Problem-dependent software, embodying the definitions of design variables, the objective function, and design constraints, can be separated from the generic code using a systems programming technique. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.

  12. Probabilistic structural analysis methods for improving Space Shuttle engine reliability

    NASA Technical Reports Server (NTRS)

    Boyce, L.

    1989-01-01

    Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.

  13. Dynamic online surveys and experiments with the free open-source software dynQuest.

    PubMed

    Rademacher, Jens D M; Lippke, Sonia

    2007-08-01

    With computers and the World Wide Web widely available, collecting data through Web browsers is an attractive method in the social sciences. In this article, conducting PC- and Web-based trials with the software package dynQuest is described. The software manages dynamic questionnaire-based trials over the Internet or on single computers, possibly as randomized controlled trials (RCTs) if two or more groups are involved. The choice of follow-up questions can depend on previous responses, as needed for matched interventions. Data are collected in a simple text-based database that can be imported easily into other programs for postprocessing and statistical analysis. The software consists of platform-independent scripts written in the programming language PERL that use the common gateway interface between Web browser and server for submission of data through HTML forms. Advantages of dynQuest are parsimony, simplicity in use and installation, transparency, and reliability. The program is available as open-source freeware from the authors.

  14. Using an architectural approach to integrate heterogeneous, distributed software components

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Purtilo, James M.

    1995-01-01

    Many computer programs cannot be easily integrated because their components are distributed and heterogeneous, i.e., they are implemented in diverse programming languages, use different data representation formats, or their runtime environments are incompatible. In many cases, programs are integrated by modifying their components or interposing mechanisms that handle communication and conversion tasks. For example, remote procedure call (RPC) helps integrate heterogeneous, distributed programs. When configuring such programs, however, mechanisms like RPC must be used explicitly by software developers in order to integrate collections of diverse components. Each collection may require a unique integration solution. This paper describes improvements to the concepts of software packaging and some of our experiences in constructing complex software systems from a wide variety of components in different execution environments. Software packaging is a process that automatically determines how to integrate a diverse collection of computer programs based on the types of components involved and the capabilities of available translators and adapters in an environment. Software packaging provides a context that relates such mechanisms to software integration processes and reduces the cost of configuring applications whose components are distributed or implemented in different programming languages. Our software packaging tool subsumes traditional integration tools like UNIX make by providing a rule-based approach to software integration that is independent of execution environments.

  15. Spacelab experiment computer study. Volume 1: Executive summary (presentation)

    NASA Technical Reports Server (NTRS)

    Lewis, J. L.; Hodges, B. C.; Christy, J. O.

    1976-01-01

    A quantitative cost for various Spacelab flight hardware configurations is provided along with varied software development options. A cost analysis of Spacelab computer hardware and software is presented, based on utilization of a central experiment computer with optional auxiliary equipment. The ground rules and assumptions used in deriving the costing methods for all options in the Spacelab experiment study are presented; they are then analyzed, and the options, along with their cost considerations, are discussed. It is concluded that the Spacelab program cost for software development and maintenance is independent of experimental hardware and software options, that the distributed standard computer concept simplifies software integration without a significant increase in cost, and that decisions on flight computer hardware configurations should not be made until payload selection for a given mission and a detailed analysis of the mission requirements are completed.

  16. Cascade flow analysis by Navier-Stokes equation

    NASA Astrophysics Data System (ADS)

    Nozaki, Osamu

    1987-06-01

    As the performance of large electronic computers has improved, numerical simulation of the flow around aircraft blades, for instance, is being actively conducted. In the compressor and turbine cascades of an aircraft engine, multiple blades are placed closely side by side and the pressure gradient in the flow direction is large, so the flow has more complicated properties than that around an isolated blade. At present, the mainstream approach therefore uses the potential or Euler equations as the basic equation, but to capture phenomena caused by viscosity, such as the interaction of shock waves and boundary layers, it is necessary to solve the Navier-Stokes (N-S) equations. A two-dimensional cascade analysis program was developed from the N-S equations by extending the two-dimensional high-Reynolds-number transonic airfoil analysis code NSFOIL and the grid generation program AFMESH, previously developed for isolated blades, to cascade flow.

  17. Flight summaries and temperature climatology at airliner cruise altitudes from GASP (Global Atmospheric Sampling Program) data

    NASA Technical Reports Server (NTRS)

    Nastrom, G. D.; Jasperson, W. H.

    1983-01-01

    Temperature data obtained by the Global Atmospheric Sampling Program (GASP) during the period March 1975 to July 1979 are compiled to form flight summaries of static air temperature and a geographic temperature climatology. The flight summaries include the height and location of the coldest observed temperature and the mean flight level, temperature, and standard deviation of temperature for each flight as well as for flight segments. These summaries are ordered by route and month. The temperature climatology was computed from all statistically independent temperature data for each flight. The grid used consists of 5 deg latitude, 30 deg longitude, and 2000 feet vertical resolution from FL270 to FL430 for each month of the year. The number of statistically independent observations, their mean, standard deviation, and the empirical 98, 50, 16, 2, and 0.3 percent probability percentiles are presented.

  18. Computer-based test-bed for clinical assessment of hand/wrist feed-forward neuroprosthetic controllers using artificial neural networks.

    PubMed

    Luján, J L; Crago, P E

    2004-11-01

    Neuroprosthetic systems can be used to restore hand grasp and wrist control in individuals with C5/C6 spinal cord injury. A computer-based system was developed for the implementation, tuning and clinical assessment of neuroprosthetic controllers, using off-the-shelf hardware and software. The computer system turned a Pentium III PC running Windows NT into a non-dedicated, real-time system for the control of neuroprostheses. Software execution (written using the high-level programming languages LabVIEW and MATLAB) was divided into two phases: training and real-time control. During the training phase, the computer system collected input/output data by stimulating the muscles and measuring the muscle outputs in real-time, analysed the recorded data, generated a set of training data and trained an artificial neural network (ANN)-based controller. During real-time control, the computer system stimulated the muscles using stimulus pulsewidths predicted by the ANN controller in response to a sampled input from an external command source, to provide independent control of hand grasp and wrist posture. System timing was stable, reliable and capable of providing muscle stimulation at frequencies up to 24 Hz. To demonstrate the application of the test-bed, an ANN-based controller was implemented with three inputs and two independent channels of stimulation. The ANN controller's ability to control hand grasp and wrist angle independently was assessed by quantitative comparison of the outputs of the stimulated muscles with a set of desired grasp or wrist postures determined by the command signal. Controller performance results were mixed, but the platform provided the tools to implement and assess future controller designs.

  19. Configurable software for satellite graphics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartzman, P D

    An important goal in interactive computer graphics is to provide users with both quick system responses for basic graphics functions and enough computing power for complex calculations. One solution is to have a distributed graphics system in which a minicomputer and a powerful large computer share the work. The most versatile type of distributed system is an intelligent satellite system in which the minicomputer is programmable by the application user and can do most of the work while the large remote machine is used for difficult computations. At New York University, the hardware was configured from available equipment. The level of system intelligence resulted almost completely from software development. Unlike previous work with intelligent satellites, the resulting system had system control centered in the satellite. It also had the ability to reconfigure software during realtime operation. The design of the system was done at a very high level using set theoretic language. The specification clearly illustrated processor boundaries and interfaces. The high-level specification also produced a compact, machine-independent virtual graphics data structure for picture representation. The software was written in a systems implementation language; thus, only one set of programs was needed for both machines. A user can program both machines in a single language. Tests of the system with an application program indicate that it has very high potential. A major result of this work is the demonstration that a gigantic investment in new hardware is not necessary for computing facilities interested in graphics.

  20. Fully Implanted Brain-Computer Interface in a Locked-In Patient with ALS.

    PubMed

    Vansteensel, Mariska J; Pels, Elmar G M; Bleichner, Martin G; Branco, Mariana P; Denison, Timothy; Freudenburg, Zachary V; Gosselaar, Peter; Leinders, Sacha; Ottens, Thomas H; Van Den Boom, Max A; Van Rijen, Peter C; Aarnoutse, Erik J; Ramsey, Nick F

    2016-11-24

    Options for people with severe paralysis who have lost the ability to communicate orally are limited. We describe a method for communication in a patient with late-stage amyotrophic lateral sclerosis (ALS), involving a fully implanted brain-computer interface that consists of subdural electrodes placed over the motor cortex and a transmitter placed subcutaneously in the left side of the thorax. By attempting to move the hand on the side opposite the implanted electrodes, the patient accurately and independently controlled a computer typing program 28 weeks after electrode placement, at the equivalent of two letters per minute. The brain-computer interface offered autonomous communication that supplemented and at times supplanted the patient's eye-tracking device. (Funded by the Government of the Netherlands and the European Union; ClinicalTrials.gov number, NCT02224469.)

  1. The effects of the interaction between cognitive style and instructional strategy on the educational outcomes for a science exhibit

    NASA Astrophysics Data System (ADS)

    Knappenberger, Naomi

    This dissertation examines factors which may affect the educational effectiveness of science exhibits. Exhibit effectiveness is the result of a complex interaction among exhibit features, cognitive characteristics of the museum visitor, and educational outcomes. The purpose of this study was to determine the relative proportions of field-dependent and field-independent visitors in the museum audience, and to ascertain if the cognitive style of visitors interacted with instructional strategies to affect the educational outcomes for a computer-based science exhibit. Cognitive style refers to the self-consistent modes of selecting and processing information that an individual employs throughout his or her perceptual and intellectual activities. It has a broad influence on many aspects of personality and behavior, including perception, memory, problem solving, interest, and even social behaviors and self-concept. As such, it constitutes essential dimensions of individual differences among museum visitors and has important implications for instructional design in the museum. The study was conducted in the spring of 1998 at the Adler Planetarium and Astronomy Museum in Chicago. Two experimental treatments of a computer-based exhibit were tested in the study. The first experimental treatment utilized strategies designed for field-dependent visitors that limited the text and provided more structure and cueing than the baseline treatment of the computer program. The other experimental treatment utilized strategies designed for field-independent visitors that provided hypothesis-testing and more contextual information. Approximately two-thirds of the visitors were field-independent. The results of a multiple regression analysis indicated that there was a significant interaction between cognitive style and instructional strategy that affected visitors' posttest scores on a multiple-choice test of the content. Field-independent visitors outperformed the field-dependent visitors in the control, baseline, and both experimental treatments. Both field-dependent and field-independent visitor posttest scores increased in the field-dependent experimental treatment and in the field-independent treatment. The most effective treatment for all visitors was the field-independent treatment. Criteria for designing a computer-based exhibit to meet the needs of all visitors were recommended. These included organized, concise text; a structured, rather than exploratory, design; and cueing in the form of questions, bold fonts, underlining of important words and concepts, and captioned images.

  2. Democratic population decisions result in robust policy-gradient learning: a parametric study with GPU simulations.

    PubMed

    Richmond, Paul; Buesing, Lars; Giugliano, Michele; Vasilaki, Eleni

    2011-05-04

    High performance computing on the Graphics Processing Unit (GPU) is an emerging field driven by the promise of high computational power at a low cost. However, GPU programming is a non-trivial task, and moreover, architectural limitations raise the question of whether investing effort in this direction may be worthwhile. In this work, we use GPU programming to simulate a two-layer network of Integrate-and-Fire neurons with varying degrees of recurrent connectivity and investigate its ability to learn a simplified navigation task using a policy-gradient learning rule stemming from Reinforcement Learning. The purpose of this paper is twofold. First, we want to support the use of GPUs in the field of Computational Neuroscience. Second, using GPU computing power, we investigate the conditions under which the said architecture and learning rule demonstrate best performance. Our work indicates that networks featuring strong Mexican-Hat-shaped recurrent connections in the top layer, where decision making is governed by the formation of a stable activity bump in the neural population (a "non-democratic" mechanism), achieve mediocre learning results at best. In absence of recurrent connections, where all neurons "vote" independently ("democratic") for a decision via population vector readout, the task is generally learned better and more robustly. Our study would have been extremely difficult on a desktop computer without the use of GPU programming. We present the routines developed for this purpose and show that they provide a speed improvement of 5× to 42× over optimised Python code. The higher speed is achieved when we exploit the parallelism of the GPU in the search of learning parameters. This suggests that efficient GPU programming can significantly reduce the time needed for simulating networks of spiking neurons, particularly when multiple parameter configurations are investigated.

  3. Processing of on-board recorded data for quick analysis of aircraft performance. [rotor systems research aircraft

    NASA Technical Reports Server (NTRS)

    Michaud, N. H.

    1979-01-01

    A system of independent computer programs for the processing of digitized pulse code modulated (PCM) and frequency modulated (FM) data is described. Information is stored in a set of random files and accessed to produce both statistical and graphical output. The software system is designed primarily to present these reports within a twenty-four hour period for quick analysis of the helicopter's performance.

  4. MSIX - A general and user-friendly platform for RAM analysis

    NASA Astrophysics Data System (ADS)

    Pan, Z. J.; Blemel, Peter

    The authors present a CAD (computer-aided design) platform supporting RAM (reliability, availability, and maintainability) analysis with efficient system description and alternative evaluation. The design concepts, implementation techniques, and application results are described. This platform is user-friendly because of its graphic environment, drawing facilities, object orientation, self-tutoring, and access to the operating system. The programs' independence and portability make them generally applicable to various analysis tasks.

  5. Development of the OMPAT Neuropsychological/Psychomotor Performance Evaluation and OMPAT Data and Timing Support Programs

    DTIC Science & Technology

    1993-12-31

    effect of Ritalin on attention in traumatically brain-injured adults and the issues concerning repeated measures using computer-based testing with...heat, cold and fatigue on neurological functions, as well as the interactive and independent effects of chemical agents and pharmaceuticals. 5) A...serial manner was becoming an increasingly important task in neuropsychology. Serial assessment was important for monitoring medication effects

  6. Project PEGS! Practices in Effective Guidance Strategies: Interactive CD-ROM Series for Educators To Practice Positive Behavior Management Skills, October 1, 1999-December 30, 2002. Final Performance Report.

    ERIC Educational Resources Information Center

    Quirk, Constance A.

    This final report describes the activities and outcomes of a federally funded project designed to produce and field-test two computer-based interactive CD-ROMs: "PEGS! for Preschool" and "PEGS! for Secondary School". These programs, in a game format, provide beginning general and special educators with independent practice in…

  7. A package of Linux scripts for the parallelization of Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Badal, Andreu; Sempau, Josep

    2006-09-01

    Despite the fact that fast computers are nowadays available at low cost, there are many situations where obtaining a reasonably low statistical uncertainty in a Monte Carlo (MC) simulation involves a prohibitively large amount of time. This limitation can be overcome by having recourse to parallel computing. Most tools designed to facilitate this approach require modification of the source code and the installation of additional software, which may be inconvenient for some users. We present a set of tools, named clonEasy, that implement a parallelization scheme of a MC simulation that is free from these drawbacks. In clonEasy, which is designed to run under Linux, a set of "clone" CPUs is governed by a "master" computer by taking advantage of the capabilities of the Secure Shell (ssh) protocol. Any Linux computer on the Internet that can be ssh-accessed by the user can be used as a clone. A key ingredient for the parallel calculation to be reliable is the availability of an independent string of random numbers for each CPU. Many generators—such as RANLUX, RANECU or the Mersenne Twister—can readily produce these strings by initializing them appropriately and, hence, they are suitable to be used with clonEasy. This work was primarily motivated by the need to find a straightforward way to parallelize PENELOPE, a code for MC simulation of radiation transport that (in its current 2005 version) employs the generator RANECU, which uses a combination of two multiplicative linear congruential generators (MLCGs). Thus, this paper is focused on this class of generators and, in particular, we briefly present an extension of RANECU that increases its period up to ~5×10 and we introduce seedsMLCG, a tool that provides the information necessary to initialize disjoint sequences of an MLCG to feed different CPUs. This program, in combination with clonEasy, allows one to run PENELOPE in parallel easily, without requiring specific libraries or significant alterations of the sequential code.

    Program summary 1
    Title of program: clonEasy
    Catalogue identifier: ADYD_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYD_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, Northern Ireland
    Computer for which the program is designed and others on which it is operable: Any computer with a Unix-style shell (bash), support for the Secure Shell protocol and a FORTRAN compiler
    Operating systems under which the program has been tested: Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1)
    Compilers: GNU FORTRAN g77 (Linux); g95 (Linux); Intel Fortran Compiler 7.1 (Linux)
    Programming language used: Linux shell (bash) script, FORTRAN 77
    No. of bits in a word: 32
    No. of lines in distributed program, including test data, etc.: 1916
    No. of bytes in distributed program, including test data, etc.: 18 202
    Distribution format: tar.gz
    Nature of the physical problem: There are many situations where a Monte Carlo simulation involves a huge amount of CPU time. The parallelization of such calculations is a simple way of obtaining a relatively low statistical uncertainty using a reasonable amount of time.
    Method of solution: The presented collection of Linux scripts and auxiliary FORTRAN programs implement Secure Shell-based communication between a "master" computer and a set of "clones". The aim of this communication is to execute a code that performs a Monte Carlo simulation on all the clones simultaneously. The code is unique, but each clone is fed with a different set of random seeds. Hence, clonEasy effectively permits the parallelization of the calculation.
    Restrictions on the complexity of the program: clonEasy can only be used with programs that produce statistically independent results using the same code, but with a different sequence of random numbers. Users must choose the initialization values for the random number generator on each computer and combine the output from the different executions. A FORTRAN program to combine the final results is also provided.
    Typical running time: The execution time of each script largely depends on the number of computers that are used, the actions that are to be performed and, to a lesser extent, on the network connection bandwidth.
    Unusual features of the program: Any computer on the Internet with a Secure Shell client/server program installed can be used as a node of a virtual computer cluster for parallel calculations with the sequential source code. The simplicity of the parallelization scheme makes the use of this package a straightforward task, which does not require installing any additional libraries.

    Program summary 2
    Title of program: seedsMLCG
    Catalogue identifier: ADYE_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYE_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, Northern Ireland
    Computer for which the program is designed and others on which it is operable: Any computer with a FORTRAN compiler
    Operating systems under which the program has been tested: Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1), MS Windows (2000, XP)
    Compilers: GNU FORTRAN g77 (Linux and Windows); g95 (Linux); Intel Fortran Compiler 7.1 (Linux); Compaq Visual Fortran 6.1 (Windows)
    Programming language used: FORTRAN 77
    No. of bits in a word: 32
    Memory required to execute with typical data: 500 kilobytes
    No. of lines in distributed program, including test data, etc.: 492
    No. of bytes in distributed program, including test data, etc.: 5582
    Distribution format: tar.gz
    Nature of the physical problem: Statistically independent results from different runs of a Monte Carlo code can be obtained using uncorrelated sequences of random numbers on each execution. Multiplicative linear congruential generators (MLCG), or other generators that are based on them such as RANECU, can be adapted to produce these sequences.
    Method of solution: For a given MLCG, the presented program calculates initialization values that produce disjoint, consecutive sequences of pseudo-random numbers. The calculated values initiate the generator in distant positions of the random number cycle and can be used, for instance, on a parallel simulation. The values are found using the formula S(J) = (a^J S(0)) MOD m, which gives the random value that will be generated after J iterations of the MLCG.
    Restrictions on the complexity of the program: The 32-bit length restriction for the integer variables in standard FORTRAN 77 limits the produced seeds to be separated a distance smaller than 2³¹, when the distance J is expressed as an integer value. The program allows the user to input the distance as a power of 10 for the purpose of efficiently splitting the sequence of generators with a very long period.
    Typical running time: The execution time depends on the parameters of the used MLCG and the distance between the generated seeds. The generation of 10⁶ seeds separated 10¹² units in the sequential cycle, for one of the MLCGs found in the RANECU generator, takes 3 s on a 2.4 GHz Intel Pentium 4 using the g77 compiler.
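
    The seed-splitting idea is simple to reproduce: jumping a multiplicative linear congruential generator ahead by J steps only requires the modular power a^J mod m. Below is a small Python sketch of this calculation, not the seedsMLCG source; the (a, m) pairs are the two MLCG multipliers and moduli used in L'Ecuyer's combined RANECU generator:

    ```python
    # Disjoint seed generation for a multiplicative LCG: S_{n+1} = a*S_n mod m.
    # After J steps, S_J = (a^J mod m) * S_0 mod m, so distant, non-overlapping
    # starting points for parallel clones cost one modular exponentiation each.

    RANECU_MLCGS = [(40014, 2147483563), (40692, 2147483399)]  # (a, m) pairs

    def jump_seed(seed: int, J: int, a: int, m: int) -> int:
        """Seed the generator would hold after J iterations, in O(log J) time."""
        return (pow(a, J, m) * seed) % m

    def clone_seeds(seed0: int, n_clones: int, spacing: int, a: int, m: int):
        """Starting seeds for n_clones runs, each owning a disjoint block of
        `spacing` consecutive random numbers (n_clones * spacing must stay
        within the generator's period)."""
        return [jump_seed(seed0, k * spacing, a, m) for k in range(n_clones)]

    a, m = RANECU_MLCGS[0]
    print(clone_seeds(12345, n_clones=4, spacing=10**12, a=a, m=m))
    ```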

  8. One Small Step for Manuals: Computer-Assisted Training in Twelve-Step Facilitation*

    PubMed Central

    Sholomskas, Diane E.; Carroll, Kathleen M.

    2008-01-01

    Objective The burgeoning number of empirically validated therapies has not been met with systematic evaluation of practical, inexpensive means of teaching large numbers of clinicians to use these treatments effectively. An interactive, computer-assisted training program that sought to impart skills associated with the Project MATCH (Matching Alcoholism Treatments to Client Heterogeneity) Twelve-Step Facilitation (TSF) manual was developed to address this need. Method Twenty-five community-based substance use-treatment clinicians were randomized to one of two training conditions: (1) access to the computer-assisted training program plus the TSF manual or (2) access to the manual only. The primary outcome measure was change from pre- to posttraining in the clinicians' ability to demonstrate key TSF skills. Results The data suggested that the clinicians' ability to implement TSF, as assessed by independent ratings of adherence and skill for the key TSF interventions, was significantly higher after training for those who had access to the computerized training condition than those who were assigned to the manual-only condition. Those assigned to the computer-assisted training condition also demonstrated greater gains in a knowledge test assessing familiarity with concepts presented in the TSF manual. Conclusions Computer-based training may be a feasible and effective means of training larger numbers of clinicians in empirically supported, manual-guided therapies. PMID:17061013

  9. One small step for manuals: Computer-assisted training in twelve-step facilitation.

    PubMed

    Sholomskas, Diane E; Carroll, Kathleen M

    2006-11-01

The burgeoning number of empirically validated therapies has not been met with systematic evaluation of practical, inexpensive means of teaching large numbers of clinicians to use these treatments effectively. An interactive, computer-assisted training program that sought to impart skills associated with the Project MATCH (Matching Alcoholism Treatments to Client Heterogeneity) Twelve-Step Facilitation (TSF) manual was developed to address this need. Twenty-five community-based substance use-treatment clinicians were randomized to one of two training conditions: (1) access to the computer-assisted training program plus the TSF manual or (2) access to the manual only. The primary outcome measure was change from pre- to posttraining in the clinicians' ability to demonstrate key TSF skills. The data suggested that the clinicians' ability to implement TSF, as assessed by independent ratings of adherence and skill for the key TSF interventions, was significantly higher after training for those who had access to the computerized training condition than those who were assigned to the manual-only condition. Those assigned to the computer-assisted training condition also demonstrated greater gains in a knowledge test assessing familiarity with concepts presented in the TSF manual. Computer-based training may be a feasible and effective means of training larger numbers of clinicians in empirically supported, manual-guided therapies.

  10. Computer simulations in the high school: students' cognitive stages, science process skills and academic achievement in microbiology

    NASA Astrophysics Data System (ADS)

    Huppert, J.; Michal Lomask, S.; Lazarowitz, R.

    2002-08-01

    Computer-assisted learning, including simulated experiments, has great potential to address the problem solving process which is a complex activity. It requires a highly structured approach in order to understand the use of simulations as an instructional device. This study is based on a computer simulation program, 'The Growth Curve of Microorganisms', which required tenth grade biology students to use problem solving skills whilst simultaneously manipulating three independent variables in one simulated experiment. The aims were to investigate the computer simulation's impact on students' academic achievement and on their mastery of science process skills in relation to their cognitive stages. The results indicate that the concrete and transition operational students in the experimental group achieved significantly higher academic achievement than their counterparts in the control group. The higher the cognitive operational stage, the higher students' achievement was, except in the control group where students in the concrete and transition operational stages did not differ. Girls achieved equally with the boys in the experimental group. Students' academic achievement may indicate the potential impact a computer simulation program can have, enabling students with low reasoning abilities to cope successfully with learning concepts and principles in science which require high cognitive skills.

  11. Validation of Magnetic Resonance Thermometry by Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Rydquist, Grant; Owkes, Mark; Verhulst, Claire M.; Benson, Michael J.; Vanpoppel, Bret P.; Burton, Sascha; Eaton, John K.; Elkins, Christopher P.

    2016-11-01

Magnetic Resonance Thermometry (MRT) is a new experimental technique that can create fully three-dimensional temperature fields in a noninvasive manner. However, validation is still required to determine the accuracy of measured results. One method of examination is to compare data gathered experimentally to data computed with computational fluid dynamics (CFD). In this study, large-eddy simulations have been performed with the NGA computational platform to generate data for a comparison with previously run MRT experiments. The experimental setup consisted of a heated jet inclined at 30° injected into a larger channel. In the simulations, viscosity and density were scaled according to the local temperature to account for differences in buoyant and viscous forces. A mesh-independence study was performed with 5-million-, 15-million-, and 45-million-cell meshes. The program Star-CCM+ was used to simulate the complete experimental geometry, and its results were compared to data generated from NGA. Overall, both programs show good agreement with the experimental data gathered with MRT. With these data, the validity of MRT as a diagnostic tool has been shown, and the tool can be used to further our understanding of a range of flows with non-trivial temperature distributions.
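
    The property scaling described above can be made concrete with standard correlations. The sketch below uses Sutherland's law for viscosity and ideal-gas density scaling at constant pressure; both are generic assumptions for illustration, not the correlations actually used in the NGA or Star-CCM+ setups.

    ```python
    # Hedged sketch of temperature-dependent property scaling of the kind the
    # abstract describes. Sutherland's law and ideal-gas density scaling are
    # illustrative assumptions; the actual simulation setups may differ.

    def sutherland_viscosity(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
        """Dynamic viscosity of air [Pa s] at temperature T [K]."""
        return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

    def ideal_gas_density(T, rho_ref=1.225, T_ref=288.15):
        """Density [kg/m^3] scaled inversely with T at constant pressure."""
        return rho_ref * T_ref / T

    for T in (288.15, 320.0, 360.0):
        print(f"T={T:6.1f} K  mu={sutherland_viscosity(T):.3e} Pa s  "
              f"rho={ideal_gas_density(T):.3f} kg/m^3")
    ```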

  12. SSR_pipeline--computer software for the identification of microsatellite sequences from paired-end Illumina high-throughput DNA sequence data

    USGS Publications Warehouse

    Miller, Mark P.; Knaus, Brian J.; Mullins, Thomas D.; Haig, Susan M.

    2013-01-01

SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (SSRs; for example, microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains three analysis modules along with a fourth control module that can be used to automate analyses of large volumes of data. The modules are used to (1) identify the subset of paired-end sequences that pass quality standards, (2) align paired-end reads into a single composite DNA sequence, and (3) identify sequences that possess microsatellites conforming to user-specified parameters. Each of the three separate analysis modules also can be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc.). All modules are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, Windows). The program suite relies on a compiled Python extension module to perform paired-end alignments. Instructions for compiling the extension from source code are provided in the documentation. Users who do not have Python installed on their computers or who do not have the ability to compile software also may choose to download packaged executable files. These files include all Python scripts, a copy of the compiled extension module, and a minimal installation of Python in a single binary executable. See program documentation for more information.
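
    The heart of the third module, flagging a read whose sequence contains a tandem repeat meeting user-specified motif-length and repeat-count thresholds, can be sketched with a back-referencing regular expression. This is an illustration of the task only, not SSR_pipeline's actual implementation; the function name and default parameters are invented.

    ```python
    import re

    # Minimal sketch: find tandem repeats of 2-6 bp motifs that occur at
    # least `min_repeats` times in a row (not SSR_pipeline's own code).

    def find_ssrs(seq, min_motif=2, max_motif=6, min_repeats=5):
        hits = []
        for k in range(min_motif, max_motif + 1):
            # ([ACGT]{k}) captures a motif; \1{n-1,} demands >= n copies total
            pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (k, min_repeats - 1))
            for m in pattern.finditer(seq):
                hits.append((m.group(1), m.start(), len(m.group(0)) // k))
        return hits  # list of (motif, start position, repeat count)

    print(find_ssrs("TTGACACACACACACAGGT"))  # finds the dinucleotide AC repeat
    ```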

  13. Computer Use and Factors Related to Computer Use in Large Independent Secondary School Libraries.

    ERIC Educational Resources Information Center

    Currier, Heidi F.

    Survey results about the use of computers in independent secondary school libraries are reported, and factors related to the presence of computers are identified. Data are from 104 librarians responding to a questionnaire sent to a sample of 136 large (over 400 students) independent secondary schools. Data are analyzed descriptively to show the…

  14. Teaching pathology in the 21st century. An experimental automated curriculum delivery system for basic pathology.

    PubMed

    Woods, J W; Jones, R R; Schoultz, T W; Kuenz, M; Moore, R L

    1988-08-01

    In late 1984, the "General Professional Education of the Physician" (GPEP) report recommended, among other things, that medical curricula be revised to rely less on lectures and more on independent study and problem solving. We seem to have anticipated, in 1980, the findings of the GPEP panel by formulating and starting to test the hypothesis that certain "core" information in medical curricula can be as effectively delivered by technology-based self-study means as by lecture or formal laboratory. We began, at that time, to prepare a series of self-study materials using, at first, videotape and then computer-controlled optical videodiscs. The content area selected for study was basic microscopic pathology. The series was planned to cover the following areas of study: cellular alterations and adaptations, cell injury, acute inflammation, chronic inflammation and wound healing, cellular accumulations, circulatory disturbances, necrosis, and neoplasia. All are intended to provide learning experiences in basic pathology. The first two programs were released for testing in 1983 as a two-sided videodisc accompanied by computer-driven pretests, study modules, and posttests that used Apple computers and Pioneer (DiscoVision) videodisc players. An MS DOS (eg, IBM) version of the computer programs was released in 1984. The first two programs are now used in 57 US, Canadian, European, and Philippine health professions schools, and over 1300 student and faculty evaluations have been received. Student and faculty evaluations of these first two programs were very positive, and, as a result, the others are in production and will be completed in 1988. Only when a critical mass of curriculum is available can we really test our stated hypothesis. In the meantime, it is worthwhile to report the evaluation of the first two programs.

  15. Advanced Simulation and Computing: A Summary Report to the Director's Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCoy, M G; Peck, T

    2003-06-01

It has now been three years since the Advanced Simulation and Computing Program (ASCI), as managed by the Defense and Nuclear Technologies (DNT) Directorate, was last reviewed by this Director's Review Committee (DRC). Since that time, there has been considerable progress in all components of the ASCI Program, and these developments will be highlighted in this document and in the presentations planned for June 9 and 10, 2003. There have also been some name changes. Today, the Program is called ''Advanced Simulation and Computing.'' Although it retains the familiar acronym ASCI, the initiative nature of the effort has given way to sustained services as an integral part of the Stockpile Stewardship Program (SSP). All computing efforts at LLNL and the other two Defense Program (DP) laboratories are funded and managed under ASCI. This includes the so-called legacy codes, which remain essential tools in stockpile stewardship. The contract between the Department of Energy (DOE) and the University of California (UC) specifies an independent appraisal of Directorate technical work and programmatic management; such an appraisal is the work of this DNT Review Committee. Beginning this year, the Laboratory is implementing a new review system. This process was negotiated between UC, the National Nuclear Security Administration (NNSA), and the Laboratory Directors. Central to this approach are eight performance objectives that focus on key programmatic and administrative goals. Associated with each of these objectives are a number of performance measures that more clearly characterize the attainment of the objectives. Each performance measure has a lead directorate and one or more contributing directorates. Each measure has an evaluation plan and identified documentation to be included in the ''Assessment File''.

  16. COMO: a numerical model for predicting furnace performance in axisymmetric geometries. Volume 1. Technical summary. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiveland, W.A.; Oberjohn, W.J.; Cornelius, D.K.

    1985-12-01

This report summarizes the work conducted during a 30-month contract with the United States Department of Energy (DOE) Pittsburgh Energy Technology Center (PETC). The general objective is to develop and verify a computer code capable of modeling the major aspects of pulverized coal combustion. Achieving this objective will lead to design methods applicable to industrial and utility furnaces. The combustion model (COMO) is based mainly on an existing Babcock and Wilcox (B and W) computer program. The model consists of a number of relatively independent modules that represent the major processes involved in pulverized coal combustion: flow, heterogeneous and homogeneous chemical reaction, and heat transfer. As models are improved or as new ones are developed, this modular structure allows portions of the COMO model to be updated with minimal impact on the remainder of the program. The report consists of two volumes. This volume (Volume 1) contains a technical summary of the COMO model, results of predictions for gas phase combustion and pulverized coal combustion, and a detailed description of the COMO model. Volume 2 is the Users Guide for COMO and contains detailed instructions for preparing the input data and a description of the program output. Several example cases have been included to aid the user in applying the computer program to pulverized coal applications. 66 refs., 41 figs., 21 tabs.

  17. Space Trajectories Error Analysis (STEAP) Programs. Volume 1: Analytic manual, update

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Manual revisions are presented for the modified and expanded STEAP series. The STEAP 2 is composed of three independent but related programs: NOMAL for the generation of n-body nominal trajectories performing a number of deterministic guidance events; ERRAN for the linear error analysis and generalized covariance analysis along specific targeted trajectories; and SIMUL for testing the mathematical models used in the navigation and guidance process. The analytic manual provides general problem description, formulation, and solution and the detailed analysis of subroutines. The programmers' manual gives descriptions of the overall structure of the programs as well as the computational flow and analysis of the individual subroutines. The user's manual provides information on the input and output quantities of the programs. These are updates to N69-36472 and N69-36473.

  18. [A UNIX-based electronic data processing system for routine use in a trauma surgery department].

    PubMed

    Boos, O; Kinzl, L; Schweiggert, F; Suger, G

    1994-05-01

    A computer program for a UNIX workstation has been developed to support routine activities in a surgical department. A relational database contains reports on operations, medical letters and further data imported from independent computer subsystems outside the department. Data are accessible at 15 terminals and PCs through a simple and intuitive user interface with a mouse. The patient record is organized in a hypertext fashion and permits direct access to the various types of documents in a consistent manner. The implementation is currently used to manage information on 40,000 patients and has proved valuable in daily routine over a 2-year period.

  19. The Voyager spacecraft /James Watt International Gold Medal Lecture/

    NASA Technical Reports Server (NTRS)

    Heacock, R. L.

    1980-01-01

    The Voyager Project background is reviewed with emphasis on selected features of the Voyager spacecraft. Investigations by the Thermo-electric Outer Planets Spacecraft Project are discussed, including trajectories, design requirements, and the development of a Self Test and Repair computer, and a Computer Accessed Telemetry System. The design and configuration of the spacecraft are described, including long range communications, attitude control, solar independent power, sequencing and control data handling, and spacecraft propulsion. The development program, maintained by JPL, experienced a variety of problems such as design deficiencies, and process control and manufacturing problems. Finally, the spacecraft encounter with Jupiter is discussed, and expectations for the Saturn encounter are expressed.

  20. Developing Information Power Grid Based Algorithms and Software

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.

  1. Laboratory manual: mineral X-ray diffraction data retrieval/plot computer program

    USGS Publications Warehouse

    Hauff, Phoebe L.; VanTrump, George

    1976-01-01

The Mineral X-Ray Diffraction Data Retrieval/Plot Computer Program--XRDPLT (VanTrump and Hauff, 1976a) is used to retrieve and plot mineral X-ray diffraction data. The program operates on a file of mineral powder diffraction data (VanTrump and Hauff, 1976b) which contains two-theta or 'd' values, and intensities, chemical formula, mineral name, identification number, and mineral group code. XRDPLT is a machine-independent Fortran program which operates in time-sharing mode on a DEC System 10 computer and the Gerber plotter (Evenden, 1974). The program prompts the user to respond from a time-sharing terminal in a conversational format with the required input information. The program offers two major options: retrieval only; retrieval and plot. The first option retrieves mineral names, formulas, and groups from the file by identification number, by the mineral group code (a classification by chemistry or structure), or by searches based on the formula components. For example, it enables the user to search for minerals by major groups (i.e., feldspars, micas, amphiboles, oxides, phosphates, carbonates), by elemental composition (i.e., Fe, Cu, Al, Zn), or by a combination of these (i.e., all copper-bearing arsenates). The second option retrieves as the first, but also plots the retrieved 2-theta and intensity values as diagrammatic X-ray powder patterns on mylar sheets or overlays. These plots can be made using scale combinations compatible with chart recorder diffractograms and 114.59 mm powder camera films. The overlays are then used to separate or sieve out unrelated minerals until unknowns are matched and identified.

  2. A systematic review of randomized control trials evaluating the effectiveness of interactive computerized asthma patient education programs.

    PubMed

    Bussey-Smith, Kristin L; Rossen, Roger D

    2007-06-01

    Educating patients with asthma about the pathophysiology and treatment of their disease is recommended. In recent years, several computer programs have been developed to provide this education. These programs take advantage of the population's increasing skill with computers and the growth of the Internet as a source of health care information. To evaluate the effectiveness of published interactive computerized asthma patient education programs (CAPEPs) that have been subjected to randomized controlled trials (RCTs). The PubMed, ERIC, CINAHL, Psychinfo, and Clinicaltrials.gov databases were searched (through October 3, 2005) using the following terms: asthma, patient, education, interactive, and computer. RCTs in English that evaluated the effect of an interactive CAPEP on the following primary end points were included in the study: hospitalizations, acute care visits, rescue inhaler use, or lung function. Secondary end points included asthma knowledge and symptoms. Trials were screened by title and abstract before full text review. Two independent investigators used a standardized data extraction form to identify the articles chosen for full review. Nine of 406 citations met inclusion criteria. Four CAPEPs were computer games, 7 only studied children, and 4 focused on urban populations. One study each showed that the intervention reduced the number of hospitalizations, acute care visits, or rescue inhaler use. Two studies reported lung function improvements. Four studies showed improvement in asthma knowledge, and 5 studies reported improvements in symptoms. Although interactive CAPEPs may improve patient asthma knowledge and symptoms, their effect on objective clinical outcomes is less consistent.

  3. The Routine Fitting of Kinetic Data to Models

    PubMed Central

    Berman, Mones; Shahn, Ezra; Weiss, Marjory F.

    1962-01-01

    A mathematical formalism is presented for use with digital computers to permit the routine fitting of data to physical and mathematical models. Given a set of data, the mathematical equations describing a model, initial conditions for an experiment, and initial estimates for the values of model parameters, the computer program automatically proceeds to obtain a least squares fit of the data by an iterative adjustment of the values of the parameters. When the experimental measures are linear combinations of functions, the linear coefficients for a least squares fit may also be calculated. The values of both the parameters of the model and the coefficients for the sum of functions may be unknown independent variables, unknown dependent variables, or known constants. In the case of dependence, only linear dependencies are provided for in routine use. The computer program includes a number of subroutines, each one of which performs a special task. This permits flexibility in choosing various types of solutions and procedures. One subroutine, for example, handles linear differential equations, another, special non-linear functions, etc. The use of analytic or numerical solutions of equations is possible. PMID:13867975
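
    The fitting loop the paper describes, iteratively adjusting parameter values from initial estimates until a least-squares fit of the data is obtained, has a compact modern analogue. The sketch below uses SciPy's least_squares on an invented two-exponential model with synthetic data; it illustrates the workflow, not the original program.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Minimal modern analogue of the iterative least-squares fitting the
    # paper describes. The two-exponential model and the synthetic data are
    # invented for illustration only.

    def model(params, t):
        a1, k1, a2, k2 = params
        return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

    t = np.linspace(0.0, 10.0, 50)
    true_params = (3.0, 1.2, 1.0, 0.2)
    rng = np.random.default_rng(0)
    y = model(true_params, t) + 0.02 * rng.standard_normal(t.size)  # "data"

    # Iteratively adjust parameters to minimize the sum of squared residuals
    fit = least_squares(lambda p: model(p, t) - y, x0=[1.0, 1.0, 1.0, 0.1])
    print("fitted parameters:", fit.x)
    ```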

  4. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1991-01-01

    The Theory of Intelligent Machines proposes a hierarchical organization for the functions of an autonomous robot based on the Principle of Increasing Precision With Decreasing Intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed in recent years. A computer architecture that implements the lower two levels of the intelligent machine is presented. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Details of Execution Level controllers for motion and vision systems are addressed, as well as the Petri net transducer software used to implement Coordination Level functions. Extensions to UNIX and VxWorks operating systems which enable the development of a heterogeneous, distributed application are described. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  5. WINPEPI updated: computer programs for epidemiologists, and their teaching potential

    PubMed Central

    2011-01-01

    Background The WINPEPI computer programs for epidemiologists are designed for use in practice and research in the health field and as learning or teaching aids. The programs are free, and can be downloaded from the Internet. Numerous additions have been made in recent years. Implementation There are now seven WINPEPI programs: DESCRIBE, for use in descriptive epidemiology; COMPARE2, for use in comparisons of two independent groups or samples; PAIRSetc, for use in comparisons of paired and other matched observations; LOGISTIC, for logistic regression analysis; POISSON, for Poisson regression analysis; WHATIS, a "ready reckoner" utility program; and ETCETERA, for miscellaneous other procedures. The programs now contain 122 modules, each of which provides a number, sometimes a large number, of statistical procedures. The programs are accompanied by a Finder that indicates which modules are appropriate for different purposes. The manuals explain the uses, limitations and applicability of the procedures, and furnish formulae and references. Conclusions WINPEPI is a handy resource for a wide variety of statistical routines used by epidemiologists. Because of its ready availability, portability, ease of use, and versatility, WINPEPI has a considerable potential as a learning and teaching aid, both with respect to practical procedures in the planning and analysis of epidemiological studies, and with respect to important epidemiological concepts. It can also be used as an aid in the teaching of general basic statistics. PMID:21288353

  6. A Computer-Based Interactive Multimedia Program to Reduce HIV Transmission for Women with Intellectual Disability

    PubMed Central

    Delaine, Khaya

    2011-01-01

    Background Despite recent recognition of the need for preventive sexual health materials for people with intellectual disability (ID), there have been remarkably few health-based interventions designed for people with mild to moderate ID. The purpose of this study was to evaluate the effects of a computer-based interactive multimedia (CBIM) program to teach HIV/AIDS knowledge, skills, and decision-making. Methods Twenty-five women with mild to moderate intellectual disability evaluated the program. The study used a quasi-experimental within-subjects design to assess the efficacy of the CBIM program. Research participants completed five qualitative and quantitative instruments that assessed HIV knowledge, and decision-making skills regarding HIV prevention practices and condom application skills (i.e., demonstration of skills opening a condom and putting it on a model penis). In addition, 18 service providers who work with women with ID reviewed the program and completed a demographics questionnaire and a professional customer satisfaction survey. Results Women with ID showed statistically significant increases from pretest to posttest in all knowledge and skill domains. Furthermore, the statistical gains were accompanied by medium to large effect sizes. Overall, service providers rated the program highly on several outcome measures (stimulation, relevance, and usability). Conclusions The results of this study indicate the CBIM program was effective in increasing HIV/AIDS knowledge and skills among women with ID, who live both semi-independently and independently, in a single-session intervention. Since the CBIM program is not dependent on staff for instructional delivery, it is a highly efficient teaching tool; and CBIM is an efficacious means to provide behavioral health content, compensating for the dearth of available health promotion materials for people with ID. As such, it has a potential for broad distribution and implementation by medical practitioners, and public health offices. People with ID are part of our society, yet continue to be overlooked, particularly in the area of health promotion. Special tools need to be developed in order to address the health disparities experienced by people with ID. PMID:21917052

  7. Scientific Programming Using Java: A Remote Sensing Example

    NASA Technical Reports Server (NTRS)

    Prados, Don; Mohamed, Mohamed A.; Johnson, Michael; Cao, Changyong; Gasser, Jerry

    1999-01-01

This paper presents results of a project to port remote sensing code from the C programming language to Java. The advantages and disadvantages of using Java versus C as a scientific programming language in remote sensing applications are discussed. Remote sensing applications deal with voluminous data that require effective memory management, such as buffering operations, when processed. Some of these applications also implement complex computational algorithms, such as Fast Fourier Transform analysis, that are very performance intensive. Factors considered include performance, precision, complexity, rapidity of development, ease of code reuse, ease of maintenance, memory management, and platform independence. The performance of radiometric calibration code that uses Java for the graphical user interface and C for the domain model is also presented.

  8. Common Graphics Library (CGL). Volume 1: LEZ user's guide

    NASA Technical Reports Server (NTRS)

    Taylor, Nancy L.; Hammond, Dana P.; Hofler, Alicia S.; Miner, David L.

    1988-01-01

    Users are introduced to and instructed in the use of the Langley Easy (LEZ) routines of the Common Graphics Library (CGL). The LEZ routines form an application independent graphics package which enables the user community to view data quickly and easily, while providing a means of generating scientific charts conforming to the publication and/or viewgraph process. A distinct advantage for using the LEZ routines is that the underlying graphics package may be replaced or modified without requiring the users to change their application programs. The library is written in ANSI FORTRAN 77, and currently uses a CORE-based underlying graphics package, and is therefore machine independent, providing support for centralized and/or distributed computer systems.

  9. Implementing an ADA Kernel on NEBULA.

    DTIC Science & Technology

    1983-08-01

physical address(es). No instruction directly supports semaphore operations, or spin-locks, or other entities used in the synchronisation of tasks...these operations. It is found that NEBULA supports admirably the control structures of Ada, but its Memory Management system is not very suitable. Entry... operating system. With the advent of Ada, in theory at least, the whole program can be written in Ada in a manner that is independent of the computer and of

  10. Implications of the Turing machine model of computation for processor and programming language design

    NASA Astrophysics Data System (ADS)

    Hunter, Geoffrey

    2004-01-01

A computational process is classified according to the theoretical model that is capable of executing it; computational processes that require a non-predeterminable amount of intermediate storage for their execution are Turing-machine (TM) processes, while those whose storage is predeterminable are Finite Automaton (FA) processes. Simple processes (such as a traffic-light controller) are executable by a Finite Automaton, whereas the most general kind of computation requires a Turing Machine for its execution. This implies that a TM process must have a non-predeterminable amount of memory allocated to it at intermediate instants of its execution; i.e. dynamic memory allocation. Many processes encountered in practice are TM processes. The implication for computational practice is that the hardware (CPU) architecture and its operating system must facilitate dynamic memory allocation, and that the programming language used to specify TM processes must have statements with the semantic attribute of dynamic memory allocation, for in Alan Turing's thesis on computation (1936) the "standard description" of a process is invariant over the most general data that the process is designed to process; i.e. the program describing the process should never have to be modified to allow for differences in the data that is to be processed in different instantiations; i.e. data-invariant programming. Any non-trivial program is partitioned into sub-programs (procedures, subroutines, functions, modules, etc.). Examination of the calls/returns between the subprograms reveals that they are nodes in a tree structure; this tree structure is independent of the programming language used to encode (define) the process. Each sub-program typically needs some memory for its own use (to store values intermediate between its received data and its computed results); this locally required memory is not needed before the subprogram commences execution, and it is not needed after its execution terminates; it may be allocated as its execution commences and deallocated as its execution terminates, and if the amount of this local memory is not known until just before execution commencement, then it is essential that it be allocated dynamically as the first action of its execution. This dynamically allocated/deallocated storage of each subprogram's intermediate values conforms with the stack discipline, i.e. last allocated = first to be deallocated, an incidental benefit of which is automatic overlaying of variables. This stack-based dynamic memory allocation was a semantic implication of the nested block structure that originated in the ALGOL-60 programming language. ALGOL-60 was a TM language, because the amount of memory allocated on subprogram (block/procedure) entry (for arrays, etc.) was computable at execution time. A more general requirement of a Turing machine process is code generation at run-time; this mandates access to the source language processor (compiler/interpreter) during execution of the process. This fundamental aspect of computer science is important to the future of system design, because it has been overlooked throughout the 55 years since modern computing began in 1948. The popular computer systems of this first half-century of computing were constrained by compile-time (or even operating-system boot-time) memory allocation, and were thus limited to executing FA processes.
The practical effect was that the distinction between the data-invariant program and its variable data was blurred; programmers had to make trial-and-error executions, modifying the program's compile-time constants (array dimensions) to iterate towards the values required at run-time by the data being processed. This era of trial-and-error computing still persists; it pervades the culture of current (2003) computing practice.

  11. Arbitrating Control of Control and Display Units

    NASA Technical Reports Server (NTRS)

    Sugden, Paul C.

    2007-01-01

The ARINC 739 Switch is a computer program that arbitrates control of two multi-function control and display units (MCDUs) between (1) a commercial flight-management computer (FMC) and (2) NASA software used in research on transport aircraft. (MCDUs are the primary interfaces between pilots and FMCs on many commercial aircraft.) This program was recently redesigned into a software library that can be embedded in research application programs. As part of the redesign, this software was combined with software for creating custom pages of information to be displayed on a CDU. This software commands independent switching of the left (pilot's) and right (copilot's) MCDUs. For example, a custom CDU page can control the left CDU while the FMC controls the right CDU. The software uses menu keys to switch control of the CDU between the FMC and a custom CDU page. The software provides an interface that enables custom CDU pages to insert keystrokes into the FMC's CDU input interface. This feature allows the custom CDU pages to manipulate the FMC as if it were a pilot.

  12. A user-friendly application for the extraction of kubios hrv output to an optimal format for statistical analysis - biomed 2011.

    PubMed

    Johnsen Lind, Andreas; Helge Johnsen, Bjorn; Hill, Labarron K; Sollers Iii, John J; Thayer, Julian F

    2011-01-01

The aim of the present manuscript is to present a user-friendly and flexible platform for transforming Kubios HRV output files to the .xls file format used by MS Excel. The program utilizes either native or bundled Java and is platform-independent and mobile, meaning that it can run without being installed on a computer. It also has an option for continuous transfer of data, so it can run in the background while Kubios produces output files. The program checks for changes in the file structure and automatically updates the .xls output file.

  13. Automated analysis of retinal images for detection of referable diabetic retinopathy.

    PubMed

    Abràmoff, Michael D; Folk, James C; Han, Dennis P; Walker, Jonathan D; Williams, David F; Russell, Stephen R; Massin, Pascale; Cochener, Beatrice; Gain, Philippe; Tang, Li; Lamard, Mathieu; Moga, Daniela C; Quellec, Gwénolé; Niemeijer, Meindert

    2013-03-01

    The diagnostic accuracy of computer detection programs has been reported to be comparable to that of specialists and expert readers, but no computer detection programs have been validated in an independent cohort using an internationally recognized diabetic retinopathy (DR) standard. To determine the sensitivity and specificity of the Iowa Detection Program (IDP) to detect referable diabetic retinopathy (RDR). In primary care DR clinics in France, from January 1, 2005, through December 31, 2010, patients were photographed consecutively, and retinal color images were graded for retinopathy severity according to the International Clinical Diabetic Retinopathy scale and macular edema by 3 masked independent retinal specialists and regraded with adjudication until consensus. The IDP analyzed the same images at a predetermined and fixed set point. We defined RDR as more than mild nonproliferative retinopathy and/or macular edema. A total of 874 people with diabetes at risk for DR. Sensitivity and specificity of the IDP to detect RDR, area under the receiver operating characteristic curve, sensitivity and specificity of the retinal specialists' readings, and mean interobserver difference (κ). The RDR prevalence was 21.7% (95% CI, 19.0%-24.5%). The IDP sensitivity was 96.8% (95% CI, 94.4%-99.3%) and specificity was 59.4% (95% CI, 55.7%-63.0%), corresponding to 6 of 874 false-negative results (none met treatment criteria). The area under the receiver operating characteristic curve was 0.937 (95% CI, 0.916-0.959). Before adjudication and consensus, the sensitivity/specificity of the retinal specialists were 0.80/0.98, 0.71/1.00, and 0.91/0.95, and the mean intergrader κ was 0.822. The IDP has high sensitivity and specificity to detect RDR. Computer analysis of retinal photographs for DR and automated detection of RDR can be implemented safely into the DR screening pipeline, potentially improving access to screening and health care productivity and reducing visual loss through early treatment.
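
    The reported operating point follows directly from the 2x2 confusion counts. The sketch below reconstructs approximate counts from the figures quoted in the abstract (874 subjects, 21.7% RDR prevalence, 6 false negatives, 59.4% specificity) and recomputes sensitivity and specificity with normal-approximation confidence intervals; the rounded counts are assumptions, not the study data.

    ```python
    import math

    # Sketch of the arithmetic behind the reported sensitivity/specificity.
    # Counts are reconstructed (approximately) from the abstract's figures.

    def rate_with_ci(successes, n, z=1.96):
        p = successes / n
        half = z * math.sqrt(p * (1 - p) / n)   # normal-approximation 95% CI
        return p, (p - half, p + half)

    rdr = round(0.217 * 874)         # ~190 subjects with referable DR
    tp, fn = rdr - 6, 6              # 6 false negatives reported
    tn = round(0.594 * (874 - rdr))  # ~406 true negatives among ~684 without RDR

    sens, sens_ci = rate_with_ci(tp, tp + fn)
    spec, spec_ci = rate_with_ci(tn, 874 - rdr)
    print(f"sensitivity {sens:.3f}  95% CI {sens_ci}")
    print(f"specificity {spec:.3f}  95% CI {spec_ci}")
    ```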

  14. ELM - A SIMPLE TOOL FOR THERMAL-HYDRAULIC ANALYSIS OF SOLID-CORE NUCLEAR ROCKET FUEL ELEMENTS

    NASA Technical Reports Server (NTRS)

    Walton, J. T.

    1994-01-01

    ELM is a simple computational tool for modeling the steady-state thermal-hydraulics of propellant flow through fuel element coolant channels in nuclear thermal rockets. Written for the nuclear propulsion project of the Space Exploration Initiative, ELM evaluates the various heat transfer coefficient and friction factor correlations available for turbulent pipe flow with heat addition. In the past, these correlations were found in different reactor analysis codes, but now comparisons are possible within one program. The logic of ELM is based on the one-dimensional conservation of energy in combination with Newton's Law of Cooling to determine the bulk flow temperature and the wall temperature across a control volume. Since the control volume is an incremental length of tube, the corresponding pressure drop is determined by application of the Law of Conservation of Momentum. The size, speed, and accuracy of ELM make it a simple tool for use in fuel element parametric studies. ELM is a machine independent program written in FORTRAN 77. It has been successfully compiled on an IBM PC compatible running MS-DOS using Lahey FORTRAN 77, a DEC VAX series computer running VMS, and a Sun4 series computer running SunOS UNIX. ELM requires 565K of RAM under SunOS 4.1, 360K of RAM under VMS 5.4, and 406K of RAM under MS-DOS. Because this program is machine independent, no executable is provided on the distribution media. The standard distribution medium for ELM is one 5.25 inch 360K MS-DOS format diskette. ELM was developed in 1991. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation. Sun4 and SunOS are trademarks of Sun Microsystems, Inc. IBM PC is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation.
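
    The marching scheme ELM's description implies, a one-dimensional energy balance for the bulk temperature, Newton's Law of Cooling for the wall temperature, and a momentum balance for the pressure drop across each control volume, can be sketched briefly. The constant-property treatment, the Darcy friction factor, and every input value below are illustrative assumptions, not ELM's correlations.

    ```python
    import math

    # Minimal sketch of a control-volume march down a heated coolant channel:
    # energy balance -> bulk temperature, Newton's Law of Cooling -> wall
    # temperature, Darcy-Weisbach -> pressure drop. All values are invented.

    def march_channel(mdot, cp, q_flux, h, D, L, n, T_in, p_in, rho, f):
        """March n control volumes along a tube of diameter D, length L."""
        dx = L / n
        wall_area = math.pi * D * dx          # heated area per control volume
        a_flow = math.pi * D ** 2 / 4.0
        v = mdot / (rho * a_flow)             # bulk velocity (rho held constant)
        T, p = T_in, p_in
        for _ in range(n):
            q = q_flux * wall_area            # heat added in this volume [W]
            T += q / (mdot * cp)              # 1-D conservation of energy
            T_wall = T + q_flux / h           # Newton's Law of Cooling
            p -= f * (dx / D) * 0.5 * rho * v ** 2   # momentum balance
        return T, T_wall, p

    # Rough hydrogen-like inputs, purely for illustration
    print(march_channel(mdot=0.002, cp=14300.0, q_flux=2e6, h=2.5e4,
                        D=2.5e-3, L=1.3, n=100, T_in=300.0, p_in=7e6,
                        rho=3.0, f=0.02))
    ```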

  15. A local network integrated into a balloon-borne apparatus

    NASA Astrophysics Data System (ADS)

    Imori, Masatosi; Ueda, Ikuo; Shimamura, Kotaro; Maeno, Tadashi; Murata, Takahiro; Sasaki, Makoto; Matsunaga, Hiroyuki; Matsumoto, Hiroshi; Shikaze, Yoshiaki; Anraku, Kazuaki; Matsui, Nagataka; Yamagami, Takamasa

A local network is incorporated into an apparatus for a balloon-borne experiment. A balloon-borne system implemented in the apparatus is composed of subsystems interconnected through a local network, which introduces modular architecture into the system. The network decomposes the balloon-borne system into subsystems that are similarly structured from the point of view of keeping the system under the control of a ground station. Each subsystem is functionally self-contained and electrically independent. A computer integrated into each subsystem keeps that subsystem under control. An independent group of batteries, dedicated to a subsystem, supplies all of that subsystem's electricity. A subsystem can be turned on and off independently of the other subsystems, so communication among the subsystems needs to be based on a protocol that guarantees the independence of the individual subsystems. The Omninet protocol is employed to network the subsystems. A ground station sends commands to the balloon-borne system. A command is received and executed at the system, and the results of the execution are returned to the ground station. Various commands are available so that the system borne on a balloon can be controlled and monitored remotely from the ground station. A subsystem responds to a specific group of commands. A command is received by a transceiver subsystem and then transferred through the network to the subsystem to which the command is addressed. That subsystem executes the command and returns results to the transceiver subsystem, where the results are telemetered to the ground station. The network enhances the independence of the individual subsystems, which enables the programs of the individual subsystems to be coded independently. This independence facilitates the development and debugging of programs, improving the quality of the system borne on a balloon.

  16. ORBIT: an integrated environment for user-customized bioinformatics tools.

    PubMed

    Bellgard, M I; Hiew, H L; Hunter, A; Wiebrands, M

    1999-10-01

    There are a large number of computational programs freely available to bioinformaticians via a client/server, web-based environment. However, the client interface to these tools (typically an html form page) cannot be customized from the client side as it is created by the service provider. The form page is usually generic enough to cater for a wide range of users. However, this implies that a user cannot set as 'default' advanced program parameters on the form or even customize the interface to his/her specific requirements or preferences. Currently, there is a lack of end-user interface environments that can be modified by the user when accessing computer programs available on a remote server running on an intranet or over the Internet. We have implemented a client/server system called ORBIT (Online Researcher's Bioinformatics Interface Tools) where individual clients can have interfaces created and customized to command-line-driven, server-side programs. Thus, Internet-based interfaces can be tailored to a user's specific bioinformatic needs. As interfaces are created on the client machine independent of the server, there can be different interfaces to the same server-side program to cater for different parameter settings. The interface customization is relatively quick (between 10 and 60 min) and all client interfaces are integrated into a single modular environment which will run on any computer platform supporting Java. The system has been developed to allow for a number of future enhancements and features. ORBIT represents an important advance in the way researchers gain access to bioinformatics tools on the Internet.

  17. Array processor architecture

    NASA Technical Reports Server (NTRS)

    Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)

    1983-01-01

A high speed parallel array data processing architecture fashioned under a computational envelope approach includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega gender. Programs and data are fed from the data base memory to the plurality of memory modules, and from there the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors normally operating quite independently of each other in a multiprocessing fashion. For data dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel processing fashion on the next instruction. Even when functioning in the parallel processing mode, however, the processors are not in lock-step but execute their own copy of the program individually unless or until another overall processor array synchronization instruction is issued.

  18. [Computer-assisted multimedia interactive learning program "Primary Open-Angle Glaucoma"].

    PubMed

    Dick, V B; Zenz, H; Eisenmann, D; Tekaat, C J; Wagner, R; Jacobi, K W

    1996-05-01

Advances in the area of information technology have opened up new possibilities for the use of interactive media in the training of medical students. Classical instructional technologies, such as video, slides, audio cassettes and computer programs with a textbook orientation, have been merged into one multimedia computer system. The medical profession has been increasingly integrating computer-based applications which can be used, for example, for record keeping within a medical practice. The goal of this development is to provide access to all modes of information storage and retrieval as well as documentation and training systems within a specific context. Since the beginning of the winter semester 1995, the Department of Ophthalmology in Giessen has used the learning program "Primary Open Angle Glaucoma" in student instruction. One factor that contributed to the implementation of this project was that actual training using patients within the clinic is difficult to conduct. Media-supported training that can provide a simulation of actual practice offers a suitable substitute. The learning program has been installed on Power PCs (Apple Macintosh), which make up the technical foundation of our system. The program was developed using Hypercard software, which provides a user-friendly graphical work environment. This controls the input and retrieval of data, direct editing of documents, immediate simulation, the creation of on-screen documents and the integration of slides that have been scanned in as well as QuickTime films. All of this can be accomplished without any special knowledge of programming languages or operating systems on the part of the user. The glaucoma learning program is structured along the lines of anatomy, including an explanation of the circulation of the aqueous humor, pathology, clinical symptoms and findings, diagnosis and treatment. This structure, along with the possibility of creating a list of personal files for the user with a collection of illustrations and text, allows for quick access to learning content. The program is designed in such a way that working with and through it is done in a manner conducive to learning. Student response to the learning program as an accompaniment to instruction has been positive. Independent, supplemental student learning by means of an interactive learning program has raised the quality of study within the sciences. The use of a pedagogically sound multimedia program that is oriented toward problem solving and based on actual cases offers students the opportunity to actively work through ophthalmological material. An additional benefit is the development of competence in working with computer-supported information systems, something that is playing an ever-increasing role within the medical profession.

  19. An Integrated Spin-Labeling/Computational-Modeling Approach for Mapping Global Structures of Nucleic Acids.

    PubMed

    Tangprasertchai, Narin S; Zhang, Xiaojun; Ding, Yuan; Tham, Kenneth; Rohs, Remo; Haworth, Ian S; Qin, Peter Z

    2015-01-01

    The technique of site-directed spin labeling (SDSL) provides unique information on biomolecules by monitoring the behavior of a stable radical tag (i.e., spin label) using electron paramagnetic resonance (EPR) spectroscopy. In this chapter, we describe an approach in which SDSL is integrated with computational modeling to map conformations of nucleic acids. This approach builds upon a SDSL tool kit previously developed and validated, which includes three components: (i) a nucleotide-independent nitroxide probe, designated as R5, which can be efficiently attached at defined sites within arbitrary nucleic acid sequences; (ii) inter-R5 distances in the nanometer range, measured via pulsed EPR; and (iii) an efficient program, called NASNOX, that computes inter-R5 distances on given nucleic acid structures. Following a general framework of data mining, our approach uses multiple sets of measured inter-R5 distances to retrieve "correct" all-atom models from a large ensemble of models. The pool of models can be generated independently without relying on the inter-R5 distances, thus allowing a large degree of flexibility in integrating the SDSL-measured distances with a modeling approach best suited for the specific system under investigation. As such, the integrative experimental/computational approach described here represents a hybrid method for determining all-atom models based on experimentally-derived distance measurements. © 2015 Elsevier Inc. All rights reserved.

  20. An Integrated Spin-Labeling/Computational-Modeling Approach for Mapping Global Structures of Nucleic Acids

    PubMed Central

    Tangprasertchai, Narin S.; Zhang, Xiaojun; Ding, Yuan; Tham, Kenneth; Rohs, Remo; Haworth, Ian S.; Qin, Peter Z.

    2015-01-01

    The technique of site-directed spin labeling (SDSL) provides unique information on biomolecules by monitoring the behavior of a stable radical tag (i.e., spin label) using electron paramagnetic resonance (EPR) spectroscopy. In this chapter, we describe an approach in which SDSL is integrated with computational modeling to map conformations of nucleic acids. This approach builds upon a SDSL tool kit previously developed and validated, which includes three components: (i) a nucleotide-independent nitroxide probe, designated as R5, which can be efficiently attached at defined sites within arbitrary nucleic acid sequences; (ii) inter-R5 distances in the nanometer range, measured via pulsed EPR; and (iii) an efficient program, called NASNOX, that computes inter-R5 distances on given nucleic acid structures. Following a general framework of data mining, our approach uses multiple sets of measured inter-R5 distances to retrieve “correct” all-atom models from a large ensemble of models. The pool of models can be generated independently without relying on the inter-R5 distances, thus allowing a large degree of flexibility in integrating the SDSL-measured distances with a modeling approach best suited for the specific system under investigation. As such, the integrative experimental/computational approach described here represents a hybrid method for determining all-atom models based on experimentally-derived distance measurements. PMID:26477260

  1. General-Purpose Front End for Real-Time Data Processing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

FRONTIER is a computer program that functions as a front end for any of a variety of other software of both the artificial intelligence (AI) and conventional data-processing types. As used here, front end signifies interface software needed for acquiring and preprocessing data and making the data available for analysis by the other software. FRONTIER is reusable in that it can be rapidly tailored to any such other software with minimum effort. Each component of FRONTIER is programmable and is executed in an embedded virtual machine. Each component can be reconfigured during execution. The virtual-machine implementation makes FRONTIER independent of the type of computing hardware on which it is executed.

  2. Computational methods for a three-dimensional model of the petroleum-discovery process

    USGS Publications Warehouse

    Schuenemeyer, J.H.; Bawiec, W.J.; Drew, L.J.

    1980-01-01

A discovery-process model devised by Drew, Schuenemeyer, and Root can be used to predict the amount of petroleum to be discovered in a basin from some future level of exploratory effort: the predictions are based on historical drilling and discovery data. Because marginal costs of discovery and production are a function of field size, the model can be used to make estimates of future discoveries within deposit size classes. The modeling approach is a geometric one in which the area searched is a function of the size and shape of the targets being sought. A high correlation is assumed between the surface-projection area of the fields and the volume of petroleum. To predict how much oil remains to be found, the area searched must be computed, and the basin size and discovery efficiency must be estimated. The basin is assumed to be explored randomly rather than by pattern drilling. The model may be used to compute independent estimates of future oil at different depth intervals for a play involving multiple producing horizons. We have written FORTRAN computer programs that are used with Drew, Schuenemeyer, and Root's model to merge the discovery and drilling information and perform the necessary computations to estimate undiscovered petroleum. These programs may be modified easily for the estimation of remaining quantities of commodities other than petroleum. © 1980.
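
    The geometric premise, that under random (non-pattern) drilling the chance of finding a field scales with its surface-projection area, so large fields tend to be found early, can be illustrated with a toy Monte Carlo. The basin and field areas below are invented, and this sketch is not the authors' FORTRAN implementation.

    ```python
    import random

    # Toy illustration of the geometric discovery-process idea: wells are
    # sited at random in a basin, and a field is "discovered" when a well
    # lands inside its surface-projection area. Areas are invented.

    random.seed(1)
    BASIN_AREA = 10_000.0                                  # arbitrary units^2
    fields = [500.0, 200.0, 100.0, 50.0, 20.0, 10.0, 5.0, 2.0]
    found, order = set(), []

    for well in range(1, 2001):
        x = random.random() * BASIN_AREA   # fields mapped to disjoint intervals
        cum, hit = 0.0, None
        for i, area in enumerate(fields):
            cum += area
            if x < cum:                    # well landed inside field i
                hit = i
                break
        if hit is not None and hit not in found:
            found.add(hit)
            order.append((well, fields[hit]))

    print(order)   # discovery order: larger fields tend to appear first
    ```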

  3. Default "Gunel and Dickey" Bayes factors for contingency tables.

    PubMed

    Jamil, Tahira; Ly, Alexander; Morey, Richard D; Love, Jonathon; Marsman, Maarten; Wagenmakers, Eric-Jan

    2017-04-01

The analysis of R×C contingency tables usually features a test for independence between row and column counts. Throughout the social sciences, the adequacy of the independence hypothesis is generally evaluated by the outcome of a classical p-value null-hypothesis significance test. Unfortunately, however, the classical p-value comes with a number of well-documented drawbacks. Here we outline an alternative, Bayes factor method to quantify the evidence for and against the hypothesis of independence in R×C contingency tables. First we describe different sampling models for contingency tables and provide the corresponding default Bayes factors as originally developed by Gunel and Dickey (Biometrika, 61(3):545-557 (1974)). We then illustrate the properties and advantages of a Bayes factor analysis of contingency tables through simulations and practical examples. Computer code is available online and has been incorporated in the "BayesFactor" R package and the JASP program (jasp-stats.org).

  4. A depth-first search algorithm to compute elementary flux modes by linear programming.

    PubMed

    Quek, Lake-Ee; Nielsen, Lars K

    2014-07-30

The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is nearly impossible. Even for moderately sized models (<400 reactions), existing approaches based on the Double Description method must iterate through a large number of combinatorial candidates, thus imposing an immense processor and memory demand. Based on an alternative elementarity test, we developed a depth-first search algorithm using linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints.
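
    The core subroutine of such a search is an LP feasibility test: with some reactions forced to zero and one forced to carry flux, does a nonnegative steady-state flux vector still exist? The sketch below poses that test with SciPy's linprog on an invented stoichiometric matrix; it illustrates the style of feasibility test, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Hedged sketch of an LP feasibility test of the kind used to prune a
    # depth-first search over reaction subsets. S is an invented toy matrix.

    def feasible(S, zeroed, active):
        """Is there v >= 0 with S @ v = 0, v[zeroed] = 0, v[active] >= 1?"""
        n = S.shape[1]
        bounds = [(0.0, None)] * n
        for j in zeroed:
            bounds[j] = (0.0, 0.0)            # reactions pruned by the search
        bounds[active] = (1.0, None)          # rule out the trivial v = 0
        res = linprog(c=np.zeros(n), A_eq=S, b_eq=np.zeros(S.shape[0]),
                      bounds=bounds, method="highs")
        return res.status == 0                # status 0: a feasible point found

    S = np.array([[1, -1,  0, -1,  0],
                  [0,  1, -1,  0,  0],
                  [0,  0,  0,  1, -1]])
    print(feasible(S, zeroed=[], active=0))      # True, e.g. v = (1, 1, 1, 0, 0)
    print(feasible(S, zeroed=[1, 3], active=0))  # False: reaction 0 is blocked
    ```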

  5. Bio-inspired computational heuristics to study Lane-Emden systems arising in astrophysics model.

    PubMed

    Ahmad, Iftikhar; Raja, Muhammad Asif Zahoor; Bilal, Muhammad; Ashraf, Farooq

    2016-01-01

    This study reports novel hybrid computational methods for the solution of the nonlinear singular Lane-Emden type differential equations arising in astrophysics models, exploiting the strength of unsupervised neural network models and stochastic optimization techniques. In this scheme, a neural network, drawn from the larger field of soft computing, is used to model the equation in an unsupervised manner. Approximate solutions of the higher-order ordinary differential equation are calculated with the weights of neural networks trained with a genetic algorithm and pattern search, hybridized with sequential quadratic programming for rapid local convergence. The results of the proposed solvers for the nonlinear singular systems are in good agreement with standard solutions. Accuracy and convergence of the design schemes are demonstrated by statistical performance measures based on a sufficiently large number of independent runs.
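
    A compact Python sketch of the trial-solution idea for the Lane-Emden equation y'' + (2/x)y' + y^m = 0, y(0)=1, y'(0)=0. Here scipy's BFGS stands in for the paper's genetic-algorithm and pattern-search/SQP hybrids, and the network size, collocation points, and finite-difference derivatives are illustrative choices:

        import numpy as np
        from scipy.optimize import minimize

        m = 1                                   # polytropic index (m=1 has exact sin(x)/x)
        x = np.linspace(0.05, 3.0, 60)          # collocation points (avoid x=0 singularity)
        h = 1e-4                                # finite-difference step

        def trial(x, w):
            """Trial solution y = 1 + x^2 N(x): y(0)=1, y'(0)=0 hold by construction."""
            a, b, v = w.reshape(3, -1)          # one hidden tanh layer
            return 1.0 + x**2 * (np.tanh(np.outer(x, a) + b) @ v)

        def loss(w):
            y   = trial(x, w)
            yp  = (trial(x + h, w) - trial(x - h, w)) / (2 * h)
            ypp = (trial(x + h, w) - 2 * y + trial(x - h, w)) / h**2
            residual = ypp + (2.0 / x) * yp + np.sign(y) * np.abs(y) ** m
            return np.sum(residual**2)

        rng = np.random.default_rng(0)
        w0 = 0.1 * rng.standard_normal(3 * 8)   # 8 hidden units
        res = minimize(loss, w0, method="BFGS", options={"maxiter": 500})
        exact = np.sin(x) / x                   # closed-form solution for m = 1
        print("max abs error vs exact:", np.abs(trial(x, res.x) - exact).max())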

  6. Progressive Fracture of Composite Structures

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Minnetyan, Levon

    2008-01-01

    A new approach is described for evaluating fracture in composite structures. This approach is independent of classical fracture mechanics parameters like fracture toughness. It relies on computational simulation and is programmed in a stand-alone integrated computer code. It is multiscale and multifunctional because it includes composite mechanics for the composite behavior and finite element analysis for predicting the structural response. It contains seven modules: layered composite mechanics (micro, macro, laminate), finite element, updating scheme, local fracture, global fracture, stress based failure modes, and fracture progression. The computer code is called CODSTRAN (Composite Durability Structural ANalysis). It is used in the present paper to evaluate the global fracture of four composite shell problems and one composite built-up structure. Results show that global fracture of the composite shells and the built-up composite structure is enhanced when internal pressure is combined with shear loads.

  7. Upgrade to MODFLOW-GUI; addition of MODPATH, ZONEBDGT, and additional MODFLOW packages to the U.S. Geological Survey MODFLOW-96 Graphical-User Interface

    USGS Publications Warehouse

    Winston, R.B.

    1999-01-01

    This report describes enhancements to a Graphical-User Interface (GUI) for MODFLOW-96, the U.S. Geological Survey (USGS) modular, three-dimensional, finite-difference ground-water flow model, and MOC3D, the USGS three-dimensional, method-of-characteristics solute-transport model. The GUI is a plug-in extension (PIE) for the commercial program Argus ONE. The GUI has been modified to support MODPATH (a particle tracking post-processing package for MODFLOW), ZONEBDGT (a computer program for calculating subregional water budgets), and the Stream, Horizontal-Flow Barrier, and Flow and Head Boundary packages in MODFLOW. Context-sensitive help has been added to make the GUI easier to use and to understand. In large part, the help consists of quotations from the relevant sections of this report and its predecessors. The revised interface includes automatic creation of geospatial information layers required for the added programs and packages, and menus and dialog boxes for input of parameters for simulation control. The GUI creates formatted ASCII files that can be read by MODFLOW-96, MOC3D, MODPATH, and ZONEBDGT. All four programs can be executed within the Argus ONE application (Argus Interware, Inc., 1997). Spatial results of MODFLOW-96, MOC3D, and MODPATH can be visualized within Argus ONE. Results from ZONEBDGT can be visualized in an independent program that can also be used to view budget data from MODFLOW, MOC3D, and SUTRA. Another independent program extracts hydrographs of head or drawdown at individual cells from formatted MODFLOW head and drawdown files. A web-based tutorial on the use of MODFLOW with Argus ONE has also been updated. The internal structure of the GUI has been modified to make it possible for advanced users to easily customize the GUI. Two additional, independent PIEs were developed to allow users to edit the positions of nodes and to facilitate exporting the grid geometry to external programs.

  8. Independent Validation and Verification of automated information systems in the Department of Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunteman, W.J.; Caldwell, R.

    1994-07-01

    The Department of Energy (DOE) has established an Independent Validation and Verification (IV&V) program for all classified automated information systems (AIS) operating in compartmented or multi-level modes. The IV&V program was established in DOE Order 5639.6A and described in the manual associated with the Order. This paper describes the DOE IV&V program, the IV&V process and activities, the expected benefits from an IV&V, and the criteria and methodologies used during an IV&V. The first IV&V under this program was conducted on the Integrated Computing Network (ICN) at Los Alamos National Laboratory and several lessons learned are presented. The DOE IV&V program is based on the following definitions. An IV&V is defined as the use of expertise from outside an AIS organization to conduct validation and verification studies on a classified AIS. Validation is defined as the process of applying the specialized security test and evaluation procedures, tools, and equipment needed to establish acceptance for joint usage of an AIS by one or more departments or agencies and their contractors. Verification is the process of comparing two levels of an AIS specification for proper correspondence (e.g., security policy model with top-level specifications, top-level specifications with source code, or source code with object code).

  9. An evaluation method of computer usability based on human-to-computer information transmission model.

    PubMed

    Ogawa, K

    1992-01-01

    This paper proposes a new evaluation and prediction method for computer usability. This method is based on our two previously proposed information transmission measures created from a human-to-computer information transmission model. The model has three information transmission levels: the device, software, and task content levels. Two measures, called the device independent information measure (DI) and the computer independent information measure (CI), defined on the software and task content levels respectively, are given as the amount of information transmitted. Two information transmission rates are defined as DI/T and CI/T, where T is the task completion time: the device independent information transmission rate (RDI), and the computer independent information transmission rate (RCI). The method utilizes the RDI and RCI rates to evaluate the relative usability of software and device operations on different computer systems. Experiments with a graphical information input task on three different systems confirm that the method offers an efficient way of determining computer usability.

  10. MIRACAL: A mission radiation calculation program for analysis of lunar and interplanetary missions

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Striepe, Scott A.; Simonsen, Lisa C.

    1992-01-01

    A computational procedure and data base are developed for manned space exploration missions for which estimates are made of the energetic particle fluences encountered and the resulting dose equivalent incurred. The data base includes the following options: statistical or continuum model for ordinary solar proton events, selection of up to six large proton flare spectra, and galactic cosmic ray fluxes for elemental nuclei of charge numbers 1 through 92. The program requires input trajectory definition information and specification of optional parameters, which include desired spectral data and nominal shield thickness. The procedure may be implemented as an independent program or as a subroutine in trajectory codes. This code should be most useful in mission optimization and selection studies for which radiation exposure is of special importance.

  11. Task Description Language

    NASA Technical Reports Server (NTRS)

    Simmons, Reid; Apfelbaum, David

    2005-01-01

    Task Description Language (TDL) is an extension of the C++ programming language that enables programmers to quickly and easily write complex, concurrent computer programs for controlling real-time autonomous systems, including robots and spacecraft. TDL is based on earlier work (circa 1984 through 1989) on the Task Control Architecture (TCA). TDL provides syntactic support for hierarchical task-level control functions, including task decomposition, synchronization, execution monitoring, and exception handling. A Java-language-based compiler transforms TDL programs into pure C++ code that includes calls to a platform-independent task-control-management (TCM) library. TDL has been used to control and coordinate multiple heterogeneous robots in projects sponsored by NASA and the Defense Advanced Research Projects Agency (DARPA). It has also been used in Brazil to control an autonomous airship and in Canada to control a robotic manipulator.

  12. Provider-Independent Use of the Cloud

    NASA Astrophysics Data System (ADS)

    Harmer, Terence; Wright, Peter; Cunningham, Christina; Perrott, Ron

    Utility computing offers researchers and businesses the potential of significant cost-savings, making it possible for them to match the cost of their computing and storage to their demand for such resources. A utility compute provider enables the purchase of compute infrastructures on-demand; when a user requires computing resources a provider will provision a resource for them and charge them only for their period of use of that resource. There has been a significant growth in the number of cloud computing resource providers and each has a different resource usage model, application process, and application programming interface (API); developing generic multi-resource-provider applications is thus difficult and time consuming. We have developed an abstraction layer that provides a single resource usage model, user authentication model and API for compute providers that enables cloud-provider neutral applications to be developed. In this paper we outline the issues in using external resource providers, give examples of using a number of the most popular cloud providers and provide examples of developing provider neutral applications. In addition, we discuss the development of the API to create a generic provisioning model based on a common architecture for cloud computing providers.
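
    A minimal Python sketch of such an abstraction layer (class and method names are hypothetical, not the authors' API): applications are written against an abstract provider interface, and each concrete backend hides one provider's usage model, so the same job runs unchanged on either provider.

        from abc import ABC, abstractmethod

        class ComputeProvider(ABC):
            """Provider-neutral interface: applications code against this only."""
            @abstractmethod
            def provision(self, cpus: int, memory_gb: int) -> str: ...
            @abstractmethod
            def release(self, resource_id: str) -> None: ...

        class ProviderA(ComputeProvider):          # hypothetical backend
            def provision(self, cpus, memory_gb):
                # would call provider A's REST API here
                return f"a-node-{cpus}x{memory_gb}"
            def release(self, resource_id):
                pass

        class ProviderB(ComputeProvider):          # hypothetical backend
            def provision(self, cpus, memory_gb):
                # would call provider B's CLI tooling here
                return f"b-instance-{cpus}-{memory_gb}"
            def release(self, resource_id):
                pass

        def run_job(provider: ComputeProvider):
            node = provider.provision(cpus=4, memory_gb=16)
            try:
                print("running on", node)          # application logic is neutral
            finally:
                provider.release(node)

        run_job(ProviderA())
        run_job(ProviderB())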

  13. Ground state of the time-independent Gross-Pitaevskii equation

    NASA Astrophysics Data System (ADS)

    Dion, Claude M.; Cancès, Eric

    2007-11-01

    We present a suite of programs to determine the ground state of the time-independent Gross-Pitaevskii equation, used in the simulation of Bose-Einstein condensates. The calculation is based on the Optimal Damping Algorithm, ensuring a fast convergence to the true ground state. Versions are given for the one-, two-, and three-dimensional equation, using either a spectral method, well suited for harmonic trapping potentials, or a spatial grid.
    Program summary
    Program title: GPODA
    Catalogue identifier: ADZN_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZN_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 5339
    No. of bytes in distributed program, including test data, etc.: 19 426
    Distribution format: tar.gz
    Programming language: Fortran 90
    Computer: Any (compilers under which the program has been tested: Absoft Pro Fortran, The Portland Group Fortran 90/95 compiler, Intel Fortran Compiler)
    RAM: From <1 MB in 1D to ~10 MB for a large 3D grid
    Classification: 2.7, 4.9
    External routines: LAPACK, BLAS, DFFTPACK
    Nature of problem: The order parameter (or wave function) of a Bose-Einstein condensate (BEC) is obtained, in a mean field approximation, by the Gross-Pitaevskii equation (GPE) [F. Dalfovo, S. Giorgini, L.P. Pitaevskii, S. Stringari, Rev. Mod. Phys. 71 (1999) 463]. The GPE is a nonlinear Schrödinger-like equation, including here a confining potential. The stationary state of a BEC is obtained by finding the ground state of the time-independent GPE, i.e., the order parameter that minimizes the energy. In addition to the standard three-dimensional GPE, tight traps can lead to effective two- or even one-dimensional BECs, so the 2D and 1D GPEs are also considered.
    Solution method: The ground state of the time-independent GPE is calculated using the Optimal Damping Algorithm [E. Cancès, C. Le Bris, Int. J. Quantum Chem. 79 (2000) 82]. Two sets of programs are given, using either a spectral representation of the order parameter [C.M. Dion, E. Cancès, Phys. Rev. E 67 (2003) 046706], suitable for a (quasi) harmonic trapping potential, or a discretization of the order parameter on a spatial grid.
    Running time: From seconds in 1D to a few hours for large 3D grids
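
    For orientation, a minimal Python sketch that finds the 1D GPE ground state by imaginary-time split-step propagation on a spatial grid; this is a simpler relaxation scheme than the Optimal Damping Algorithm used by GPODA, but it targets the same minimizer (units hbar = m = omega = 1; grid size and interaction strength are illustrative):

        import numpy as np

        n, L = 256, 20.0
        x = np.linspace(-L / 2, L / 2, n, endpoint=False)
        dx = x[1] - x[0]
        k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
        V = 0.5 * x**2                            # harmonic trap
        g = 50.0                                  # nonlinear interaction strength
        dt = 1e-3                                 # imaginary-time step

        psi = np.exp(-x**2 / 2).astype(complex)   # initial guess
        psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

        for _ in range(20000):
            # split-step: half potential+nonlinear, full kinetic, half again
            psi *= np.exp(-0.5 * dt * (V + g * np.abs(psi)**2))
            psi = np.fft.ifft(np.exp(-dt * 0.5 * k**2) * np.fft.fft(psi))
            psi *= np.exp(-0.5 * dt * (V + g * np.abs(psi)**2))
            psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # restore normalization

        mu = np.sum(np.conj(psi) * (np.fft.ifft(0.5 * k**2 * np.fft.fft(psi))
                                    + (V + g * np.abs(psi)**2) * psi)).real * dx
        print("chemical potential:", mu)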

  14. Improved separability of dipole sources by tripolar versus conventional disk electrodes: a modeling study using independent component analysis.

    PubMed

    Cao, H; Besio, W; Jones, S; Medvedev, A

    2009-01-01

    Tripolar electrodes have been shown to have less mutual information and higher spatial resolution than disc electrodes. In this work, a four-layer anisotropic concentric spherical head computer model was programmed; four configurations of time-varying dipole signals were then used to generate the scalp surface signals that would be obtained with tripolar and disc electrodes, and four important EEG artifacts were tested: eye blinking, cheek movements, jaw movements, and talking. Finally, a fast fixed-point algorithm was used for independent component analysis (ICA) of the signals. The results show that signals from tripolar electrodes generated better ICA separation results than those from disc electrodes for EEG signals with these four types of artifacts.
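
    A small Python sketch of the fixed-point ICA step on synthetic data, using scikit-learn's FastICA on an invented two-channel mixture (not the paper's head model): two independent sources are mixed at two "scalp channels" and then recovered, up to sign, scale, and ordering.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 2000)

        # Two independent sources: an oscillation and a square-wave "artifact".
        s1 = np.sin(2 * np.pi * 10 * t)
        s2 = np.sign(np.sin(2 * np.pi * 1.5 * t))
        S = np.c_[s1, s2] + 0.05 * rng.standard_normal((t.size, 2))

        A = np.array([[1.0, 0.6], [0.4, 1.0]])    # mixing at two channels
        X = S @ A.T

        ica = FastICA(n_components=2, random_state=0)
        S_est = ica.fit_transform(X)               # recovered sources
        for i in range(2):
            c = max(abs(np.corrcoef(S_est[:, i], S[:, j])[0, 1]) for j in range(2))
            print(f"component {i}: best |corr| with a true source = {c:.3f}")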

  15. Independent Verification and Validation of Complex User Interfaces: A Human Factors Approach

    NASA Technical Reports Server (NTRS)

    Whitmore, Mihriban; Berman, Andrea; Chmielewski, Cynthia

    1996-01-01

    The Usability Testing and Analysis Facility (UTAF) at the NASA Johnson Space Center has identified and evaluated a potential automated software interface inspection tool capable of assessing the degree to which space-related critical and high-risk software system user interfaces meet objective human factors standards across each NASA program and project. Testing consisted of two distinct phases. Phase 1 compared analysis times and similarity of results for the automated tool and for human-computer interface (HCI) experts. In Phase 2, HCI experts critiqued the prototype tool's user interface. Based on this evaluation, it appears that a more fully developed version of the tool will be a promising complement to a human factors-oriented independent verification and validation (IV&V) process.

  16. Programming methodology for a general purpose automation controller

    NASA Technical Reports Server (NTRS)

    Sturzenbecker, M. C.; Korein, J. U.; Taylor, R. H.

    1987-01-01

    The General Purpose Automation Controller is a multi-processor architecture for automation programming. A methodology has been developed whose aim is to simplify the task of programming distributed real-time systems for users in research or manufacturing. Programs are built by configuring function blocks (low-level computations) into processes using data flow principles. These processes are activated through the verb mechanism. Verbs are divided into two classes: those which support devices, such as robot joint servos, and those which perform actions on devices, such as motion control. This programming methodology was developed in order to achieve the following goals: (1) specifications for real-time programs which are to a high degree independent of hardware considerations such as processor, bus, and interconnect technology; (2) a component approach to software, so that software required to support new devices and technologies can be integrated by reconfiguring existing building blocks; (3) resistance to error and ease of debugging; and (4) a powerful command language interface.

  17. Structural Weight Estimation for Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Cerro, Jeff; Martinovic, Zoran; Su, Philip; Eldred, Lloyd

    2002-01-01

    This paper describes some of the work in progress to develop automated structural weight estimation procedures within the Vehicle Analysis Branch (VAB) of the NASA Langley Research Center. One task of the VAB is to perform system studies at the conceptual and early preliminary design stages on launch vehicles and in-space transportation systems. Some examples of these studies for Earth to Orbit (ETO) systems are the Future Space Transportation System [1], Orbit On Demand Vehicle [2], Venture Star [3], and the Personnel Rescue Vehicle [4]. Structural weight calculation for launch vehicle studies can exist on several levels of fidelity. Typically, historically based weight equations are used in a vehicle sizing program. Many of the studies in the Vehicle Analysis Branch have been enhanced in terms of structural weight fraction prediction by utilizing some level of off-line structural analysis to incorporate material property, load intensity, and configuration effects which may not be captured by the historical weight equations. Modification of Mass Estimating Relationships (MER's) to assess design and technology impacts on vehicle performance is necessary to prioritize design and technology development decisions. Modern CAD/CAE software, ever-increasing computational power, and platform-independent programming languages such as Java provide new means to create greater depth of analysis tools which can be included in the conceptual design phase of launch vehicle development. Commercial framework computing environments provide easy-to-program techniques which coordinate and implement the flow of data in a distributed heterogeneous computing environment. It is the intent of this paper to present a process in development at NASA LaRC for enhanced structural weight estimation using this state-of-the-art computational power.

  18. DUKSUP: A Computer Program for High Thrust Launch Vehicle Trajectory Design and Optimization

    NASA Technical Reports Server (NTRS)

    Williams, C. H.; Spurlock, O. F.

    2014-01-01

    From the late 1960's through 1997, the leadership of NASA's Intermediate and Large class unmanned expendable launch vehicle projects resided at the NASA Lewis (now Glenn) Research Center (LeRC). One of LeRC's primary responsibilities --- trajectory design and performance analysis --- was accomplished by an internally-developed analytic three dimensional computer program called DUKSUP. Because of its Calculus of Variations-based optimization routine, this code was generally more capable of finding optimal solutions than its contemporaries. A derivation of optimal control using the Calculus of Variations is summarized including transversality, intermediate, and final conditions. The two point boundary value problem is explained. A brief summary of the code's operation is provided, including iteration via the Newton-Raphson scheme and integration of variational and motion equations via a 4th order Runge-Kutta scheme. Main subroutines are discussed. The history of the LeRC trajectory design efforts in the early 1960's is explained within the context of supporting the Centaur upper stage program. How the code was constructed based on the operation of the Atlas/Centaur launch vehicle, the limits of the computers of that era, the limits of the computer programming languages, and the missions it supported are discussed. The vehicles DUKSUP supported (Atlas/Centaur, Titan/Centaur, and Shuttle/Centaur) are briefly described. The types of missions, including Earth orbital and interplanetary, are described. The roles of flight constraints and their impact on launch operations are detailed (such as jettisoning hardware on heating, Range Safety, ground station tracking, and elliptical parking orbits). The computer main frames on which the code was hosted are described. The applications of the code are detailed, including independent check of contractor analysis, benchmarking, leading edge analysis, and vehicle performance improvement assessments. Several of DUKSUP's many major impacts on launches are discussed including Intelsat, Voyager, Pioneer Venus, HEAO, Galileo, and Cassini.
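
    A minimal Python sketch of the two numerical workhorses mentioned above: a classical 4th-order Runge-Kutta integrator, with Newton-Raphson iteration used to solve a toy two point boundary value problem by shooting (the example problem is invented, not DUKSUP's variational system):

        import numpy as np

        def rk4_step(f, t, y, dt):
            """Classical 4th-order Runge-Kutta step."""
            k1 = f(t, y)
            k2 = f(t + dt / 2, y + dt / 2 * k1)
            k3 = f(t + dt / 2, y + dt / 2 * k2)
            k4 = f(t + dt, y + dt * k3)
            return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        def propagate(v0, f, t_end=np.pi / 2, steps=200):
            y, t, dt = np.array([0.0, v0]), 0.0, t_end / 200
            for _ in range(steps):
                y = rk4_step(f, t, y, dt)
                t += dt
            return y[0]

        f = lambda t, y: np.array([y[1], -y[0]])   # y'' = -y
        # Two-point boundary value problem: y(0)=0, y(pi/2)=1; unknown is y'(0).
        v = 0.5
        for _ in range(10):                        # Newton-Raphson, numeric derivative
            g = propagate(v, f) - 1.0
            dg = (propagate(v + 1e-6, f) - propagate(v - 1e-6, f)) / 2e-6
            v -= g / dg
        print("converged y'(0):", v, "(exact: 1.0)")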

  19. DUKSUP: A Computer Program for High Thrust Launch Vehicle Trajectory Design and Optimization

    NASA Technical Reports Server (NTRS)

    Spurlock, O. Frank; Williams, Craig H.

    2015-01-01

    From the late 1960s through 1997, the leadership of NASA's Intermediate and Large class unmanned expendable launch vehicle projects resided at the NASA Lewis (now Glenn) Research Center (LeRC). One of LeRC's primary responsibilities --- trajectory design and performance analysis --- was accomplished by an internally-developed analytic three dimensional computer program called DUKSUP. Because of its Calculus of Variations-based optimization routine, this code was generally more capable of finding optimal solutions than its contemporaries. A derivation of optimal control using the Calculus of Variations is summarized including transversality, intermediate, and final conditions. The two point boundary value problem is explained. A brief summary of the code's operation is provided, including iteration via the Newton-Raphson scheme and integration of variational and motion equations via a 4th order Runge-Kutta scheme. Main subroutines are discussed. The history of the LeRC trajectory design efforts in the early 1960s is explained within the context of supporting the Centaur upper stage program. How the code was constructed based on the operation of the Atlas/Centaur launch vehicle, the limits of the computers of that era, the limits of the computer programming languages, and the missions it supported are discussed. The vehicles DUKSUP supported (Atlas/Centaur, Titan/Centaur, and Shuttle/Centaur) are briefly described. The types of missions, including Earth orbital and interplanetary, are described. The roles of flight constraints and their impact on launch operations are detailed (such as jettisoning hardware on heating, Range Safety, ground station tracking, and elliptical parking orbits). The computer main frames on which the code was hosted are described. The applications of the code are detailed, including independent check of contractor analysis, benchmarking, leading edge analysis, and vehicle performance improvement assessments. Several of DUKSUP's many major impacts on launches are discussed including Intelsat, Voyager, Pioneer Venus, HEAO, Galileo, and Cassini.

  20. Democratic Population Decisions Result in Robust Policy-Gradient Learning: A Parametric Study with GPU Simulations

    PubMed Central

    Richmond, Paul; Buesing, Lars; Giugliano, Michele; Vasilaki, Eleni

    2011-01-01

    High performance computing on the Graphics Processing Unit (GPU) is an emerging field driven by the promise of high computational power at a low cost. However, GPU programming is a non-trivial task, and architectural limitations raise the question of whether investing effort in this direction is worthwhile. In this work, we use GPU programming to simulate a two-layer network of Integrate-and-Fire neurons with varying degrees of recurrent connectivity and investigate its ability to learn a simplified navigation task using a policy-gradient learning rule stemming from Reinforcement Learning. The purpose of this paper is twofold. First, we want to support the use of GPUs in the field of Computational Neuroscience. Second, using GPU computing power, we investigate the conditions under which the said architecture and learning rule demonstrate the best performance. Our work indicates that networks featuring strong Mexican-Hat-shaped recurrent connections in the top layer, where decision making is governed by the formation of a stable activity bump in the neural population (a “non-democratic” mechanism), achieve mediocre learning results at best. In the absence of recurrent connections, where all neurons “vote” independently (“democratic”) for a decision via population vector readout, the task is generally learned better and more robustly. Our study would have been extremely difficult on a desktop computer without the use of GPU programming. We present the routines developed for this purpose and show that a speed improvement of 5x to 42x is obtained over optimised Python code. The higher speed is achieved when we exploit the parallelism of the GPU in the search of learning parameters. This suggests that efficient GPU programming can significantly reduce the time needed for simulating networks of spiking neurons, particularly when multiple parameter configurations are investigated. PMID:21572529

  1. An interactive computer approach to performing resource analysis for a multi-resource/multi-project problem. [Spacelab inventory procurement planning

    NASA Technical Reports Server (NTRS)

    Schlagheck, R. A.

    1977-01-01

    New planning techniques and supporting computer tools are needed for the optimization of resources and costs for space transportation and payload systems. Heavy emphasis on cost effective utilization of resources has caused NASA program planners to look at the impact of various independent variables that affect procurement buying. A description is presented of a category of resource planning which deals with Spacelab inventory procurement analysis. Spacelab is a joint payload project between NASA and the European Space Agency and will be flown aboard the Space Shuttle starting in 1980. In order to respond rapidly to the various procurement planning exercises, a system was built that could perform resource analysis in a quick and efficient manner. This system is known as the Interactive Resource Utilization Program (IRUP). Attention is given to aspects of problem definition, an IRUP system description, questions of data base entry, the approach used for project scheduling, and problems of resource allocation.

  2. Animating functional anatomy for the web.

    PubMed

    Guttmann, G D

    2000-04-15

    The instructor sometimes has a complex task in explaining the concepts of functional anatomy and embryology to health professional students. However, animations can easily illustrate functional anatomy, clinical procedures, or the developing embryo. Web animation increases the accessibility of this information and makes it much more useful for independent student learning. A modified version of the animation can also be used for patient education. This article defines animation, provides a brief history of animation, discusses the principles of animation, illustrates and evaluates some of the video-editing or movie-making computer software programs, and shows examples of two of the author's animations. These two animations are the inferior alveolar nerve block from the mandibular nerve anesthetics unit and normal temporomandibular joint (TMJ) function from the muscles of mastication and TMJ function unit. The software packages discussed are industry leaders and have made the job of producing computer-based animations much easier. The programs are Adobe Premiere, Adobe After Effects, Apple QuickTime, and Macromedia Flash.

  3. User's Manual for Space Debris Surfaces (SD_SURF)

    NASA Technical Reports Server (NTRS)

    Elfer, N. C.

    1996-01-01

    A unique collection of computer codes, Space Debris Surfaces (SD_SURF), has been developed to assist in the design and analysis of space debris protection systems. SD_SURF calculates and summarizes a vehicle's vulnerability to space debris as a function of impact velocity and obliquity. An SD_SURF analysis will show which velocities and obliquities are the most probable to cause a penetration. This determination can help the analyst select a shield design which is best suited to the predominant penetration mechanism. The analysis also indicates the most suitable parameters for development or verification testing. The SD_SURF programs offer the option of either FORTRAN programs or Microsoft EXCEL spreadsheets and macros. The FORTRAN programs work with BUMPERII version 1.2a or 1.3 (Cosmic released). The EXCEL spreadsheets and macros can be used independently or with selected output from the SD_SURF FORTRAN programs.

  4. Role of theory in space science

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The goal of theory is to understand how the fundamental laws of physics and chemistry give rise to the features of the universe. It is recommended that NASA establish independent theoretical research programs in planetary sciences and in astrophysics similar to the solar-system plasma-physics theory program, which is characterized by stable, long-term support for theorists in university departments, NASA centers, and other organizations engaged in research in topics relevant to present and future space-derived data. It is recommended that NASA keep these programs under review to derive full benefit from the resulting research and to assure opportunities for the inflow of new ideas and investigators. Also, provisions should be made by NASA for the computing needs of the theorists in the programs. Finally, it is recommended that NASA involve knowledgeable theorists in mission planning activities at all levels, from the formulation of long-term scientific strategies through the planning and operation of specific missions.

  5. Swan: A tool for porting CUDA programs to OpenCL

    NASA Astrophysics Data System (ADS)

    Harvey, M. J.; De Fabritiis, G.

    2011-04-01

    The use of modern, high-performance graphical processing units (GPUs) for acceleration of scientific computation has been widely reported. The majority of this work has used the CUDA programming model supported exclusively by GPUs manufactured by NVIDIA. An industry standardisation effort has recently produced the OpenCL specification for GPU programming. This offers the benefits of hardware independence and reduced dependence on proprietary tool-chains. Here we describe a source-to-source translation tool, "Swan", for facilitating the conversion of an existing CUDA code to use the OpenCL model, as a means to aid programmers experienced with CUDA in evaluating OpenCL and alternative hardware. While the performance of equivalent OpenCL and CUDA code on fixed hardware should be comparable, we find that a real-world CUDA application ported to OpenCL exhibits an overall 50% increase in runtime, a reduction in performance attributable to the immaturity of contemporary compilers. The ported application is shown to have platform independence, running on both NVIDIA and AMD GPUs without modification. We conclude that OpenCL is a viable platform for developing portable GPU applications but that the more mature CUDA tools continue to provide best performance.
    Program summary
    Program title: Swan
    Catalogue identifier: AEIH_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIH_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU Public License version 2
    No. of lines in distributed program, including test data, etc.: 17 736
    No. of bytes in distributed program, including test data, etc.: 131 177
    Distribution format: tar.gz
    Programming language: C
    Computer: PC
    Operating system: Linux
    RAM: 256 Mbytes
    Classification: 6.5
    External routines: NVIDIA CUDA, OpenCL
    Nature of problem: Graphical Processing Units (GPUs) from NVIDIA are preferentially programmed with the proprietary CUDA programming toolkit. An alternative programming model promoted as an industry standard, OpenCL, provides similar capabilities to CUDA and is also supported on non-NVIDIA hardware (including multicore x86 CPUs, AMD GPUs and IBM Cell processors). The adaptation of a program from CUDA to OpenCL is relatively straightforward but laborious. The Swan tool facilitates this conversion.
    Solution method: Swan performs a translation of CUDA kernel source code into an OpenCL equivalent. It also generates the C source code for entry point functions, simplifying kernel invocation from the host program. A concise host-side API abstracts the CUDA and OpenCL APIs. A program adapted to use Swan has no dependency on the CUDA compiler for the host-side program. The converted program may be built for either CUDA or OpenCL, with the selection made at compile time.
    Restrictions: No support for CUDA C++ features
    Running time: Nominal

  6. Low-Thrust Many-Revolution Trajectory Optimization via Differential Dynamic Programming and a Sundman Transformation

    NASA Astrophysics Data System (ADS)

    Aziz, Jonathan D.; Parker, Jeffrey S.; Scheeres, Daniel J.; Englander, Jacob A.

    2018-01-01

    Low-thrust trajectories about planetary bodies characteristically span a high count of orbital revolutions. Directing the thrust vector over many revolutions presents a challenging optimization problem for any conventional strategy. This paper demonstrates the tractability of low-thrust trajectory optimization about planetary bodies by applying a Sundman transformation to change the independent variable of the spacecraft equations of motion to an orbit angle and performing the optimization with differential dynamic programming. Fuel-optimal geocentric transfers are computed with the transfer duration extended up to 2000 revolutions. The flexibility of the approach to higher fidelity dynamics is shown with Earth's J2 perturbation and lunar gravity included for a 500 revolution transfer.
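
    A minimal Python sketch of a Sundman transformation on the two-body problem: the independent variable is changed from time t to an angle-like variable s with dt = r ds, and time is carried along as an extra state. Steps even in s concentrate near periapsis where the dynamics are fastest; perturbations such as J2 would be added to the acceleration terms (the orbit and tolerances here are illustrative):

        import numpy as np
        from scipy.integrate import solve_ivp

        mu = 1.0   # gravitational parameter (canonical units)

        def rhs_sundman(s, w):
            """State w = [x, y, vx, vy, t].  With dt = r ds, derivatives with
            respect to the new variable s are r times the time derivatives."""
            x, y, vx, vy, _ = w
            r = np.hypot(x, y)
            ax, ay = -mu * x / r**3, -mu * y / r**3   # add perturbations here
            return r * np.array([vx, vy, ax, ay, 1.0])

        w0 = [1.0, 0.0, 0.0, 1.3, 0.0]   # periapsis start, eccentricity ~ 0.69
        sol = solve_ivp(rhs_sundman, (0.0, 12.0), w0, rtol=1e-10, atol=1e-12)
        x, y, vx, vy, t = sol.y
        r = np.hypot(x, y)
        energy = 0.5 * (vx**2 + vy**2) - mu / r
        print("energy drift:", energy.max() - energy.min())   # conserved for Kepler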

  7. Low-Thrust Many-Revolution Trajectory Optimization via Differential Dynamic Programming and a Sundman Transformation

    NASA Astrophysics Data System (ADS)

    Aziz, Jonathan D.; Parker, Jeffrey S.; Scheeres, Daniel J.; Englander, Jacob A.

    2018-06-01

    Low-thrust trajectories about planetary bodies characteristically span a high count of orbital revolutions. Directing the thrust vector over many revolutions presents a challenging optimization problem for any conventional strategy. This paper demonstrates the tractability of low-thrust trajectory optimization about planetary bodies by applying a Sundman transformation to change the independent variable of the spacecraft equations of motion to an orbit angle and performing the optimization with differential dynamic programming. Fuel-optimal geocentric transfers are computed with the transfer duration extended up to 2000 revolutions. The flexibility of the approach to higher fidelity dynamics is shown with Earth's J2 perturbation and lunar gravity included for a 500 revolution transfer.

  8. Space shuttle solid rocket booster recovery system definition. Volume 2: SRB water impact Monte Carlo computer program, user's manual

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The HD 220 program was created as part of the space shuttle solid rocket booster recovery system definition. The model was generated to investigate the damage to SRB components under water impact loads. The random nature of environmental parameters, such as ocean waves and wind conditions, necessitates estimation of the relative frequency of occurrence for these parameters. The nondeterministic nature of component strengths also lends itself to probabilistic simulation. The Monte Carlo technique allows the simultaneous perturbation of multiple independent parameters and provides outputs describing the probability distribution functions of the dependent parameters. This allows the user to determine the required statistics for each output parameter.
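
    A minimal Python sketch of the Monte Carlo idea, with invented environment and strength distributions standing in for the program's models: multiple independent parameters are perturbed simultaneously, and the failure statistics and output distributions are read off the simulated ensemble.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000

        # Hypothetical distributions (illustrative only, not the HD 220 models):
        wave_height = rng.rayleigh(scale=1.5, size=n)                 # m
        wind_speed = rng.normal(loc=8.0, scale=3.0, size=n).clip(0)   # m/s
        strength = rng.normal(loc=60.0, scale=8.0, size=n)            # load units

        # Hypothetical load model: impact load grows with wave slap and wind drift.
        impact_load = 10.0 + 9.0 * wave_height + 1.5 * wind_speed

        failures = impact_load > strength
        print(f"estimated failure probability: {failures.mean():.4f}")
        print("load percentiles (50/95/99):",
              np.percentile(impact_load, [50, 95, 99]).round(1))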

  9. Current trends for customized biomedical software tools.

    PubMed

    Khan, Haseeb Ahmad

    2017-01-01

    In the past, biomedical scientists were solely dependent on expensive commercial software packages for various applications. However, the advent of user-friendly programming languages and open source platforms has revolutionized the development of simple and efficient customized software tools for solving specific biomedical problems. Many of these tools are designed and developed by biomedical scientists independently or with the support of computer experts and often made freely available for the benefit of scientific community. The current trends for customized biomedical software tools are highlighted in this short review.

  10. Decentralized digital adaptive control of robot motion

    NASA Technical Reports Server (NTRS)

    Tarokh, M.

    1990-01-01

    A decentralized model reference adaptive scheme is developed for digital control of robot manipulators. The adaptation laws are derived using hyperstability theory, which guarantees asymptotic trajectory tracking despite gross robot parameter variations. The control scheme has a decentralized structure in the sense that each local controller receives only its joint angle measurement to produce its joint torque. The independent joint controllers have simple structures and can be programmed using a very simple and computationally fast algorithm. As a result, the scheme is suitable for real-time motion control.

  11. A guide to onboard checkout. Volume 7: RF communications

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The radio frequency communications subsystem for a space station is considered, with respect to onboard checkout requirements. The subsystem comprises all equipment necessary for transmitting and receiving, tracking and ranging, command, multiple voice and television information, and broadband experiment data. The communications subsystem provides a radio frequency interface between the space station and ground stations, either directly or indirectly, through a data relay satellite system, independent free-flying experiment modules, and logistics vehicles. Reliability, maintenance, and failure analyses are discussed, and computer programming techniques are presented.

  12. Operating Systems Standards Working Group (OSSWG) Next Generation Computer Resources (NGCR) Program First Annual Report - October 1990

    DTIC Science & Technology

    1991-04-01

  13. A life prediction model for laminated composite structural components

    NASA Technical Reports Server (NTRS)

    Allen, David H.

    1990-01-01

    A life prediction methodology for laminated continuous fiber composites subjected to fatigue loading conditions was developed. A summary is presented of research completed. A phenomenological damage evolution law, independent of stacking sequence, was formulated for matrix cracking. Mechanistic and physical support was developed for this phenomenological evolution law. The damage evolution law was then implemented in a finite element computer program, and preliminary predictions were obtained for a structural component undergoing fatigue-induced damage.

  14. Neural-Network-Development Program

    NASA Technical Reports Server (NTRS)

    Phillips, Todd A.

    1993-01-01

    NETS, software tool for development and evaluation of neural networks, provides simulation of neural-network algorithms plus a computing environment for development of such algorithms. Uses back-propagation learning method for all of the networks it creates. Enables user to customize patterns of connections between layers of network. Also provides features for saving, during learning process, values of weights, providing more-precise control over learning process. Written in ANSI standard C language. Machine-independent version (MSC-21588) includes only code for command-line-interface version of NETS 3.0.
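
    For orientation, a minimal numpy sketch of the back-propagation learning method NETS uses, trained on the XOR problem (layer sizes, seed, and learning rate are illustrative; NETS itself is written in C):

        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
        T = np.array([[0], [1], [1], [0]], float)          # XOR targets

        W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)     # 2-4-1 network
        W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
        sig = lambda z: 1 / (1 + np.exp(-z))
        lr = 0.5

        for epoch in range(10000):
            # forward pass
            h = sig(X @ W1 + b1)
            y = sig(h @ W2 + b2)
            # backward pass: propagate the error derivative layer by layer
            dy = (y - T) * y * (1 - y)
            dh = (dy @ W2.T) * h * (1 - h)
            W2 -= lr * h.T @ dy; b2 -= lr * dy.sum(0)
            W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(0)

        print(np.round(y.ravel(), 3))   # approaches [0, 1, 1, 0]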

  15. An Analysis of the United States Special Operations Command’s Acquisition Process to Determine Its Compliance with Acquisition Reform Initiatives of the Past Decade

    DTIC Science & Technology

    1996-12-01

  16. ORBKIT: A modular python toolbox for cross-platform postprocessing of quantum chemical wavefunction data.

    PubMed

    Hermann, Gunter; Pohl, Vincent; Tremblay, Jean Christophe; Paulus, Beate; Hege, Hans-Christian; Schild, Axel

    2016-06-15

    ORBKIT is a toolbox for postprocessing electronic structure calculations based on a highly modular and portable Python architecture. The program allows computing a multitude of electronic properties of molecular systems on arbitrary spatial grids from the basis set representation of its electronic wavefunction, as well as several grid-independent properties. The required data can be extracted directly from the standard output of a large number of quantum chemistry programs. ORBKIT can be used as a standalone program to determine standard quantities, for example, the electron density, molecular orbitals, and derivatives thereof. The cornerstone of ORBKIT is its modular structure. The existing basic functions can be arranged in an individual way and can be easily extended by user-written modules to determine any other derived quantity. ORBKIT offers multiple output formats that can be processed by common visualization tools (VMD, Molden, etc.). Additionally, ORBKIT possesses routines to order molecular orbitals computed at different nuclear configurations according to their electronic character and to interpolate the wavefunction between these configurations. The program is open-source under GNU-LGPLv3 license and freely available at https://github.com/orbkit/orbkit/. This article provides an overview of ORBKIT with particular focus on its capabilities and applicability, and includes several example calculations. © 2016 Wiley Periodicals, Inc.

  17. Applicant Characteristics Associated With Selection for Ranking at Independent Surgery Residency Programs.

    PubMed

    Dort, Jonathan M; Trickey, Amber W; Kallies, Kara J; Joshi, Amit R T; Sidwell, Richard A; Jarman, Benjamin T

    2015-01-01

    This study evaluated characteristics of applicants selected for interview and ranked by independent general surgery residency programs and assessed independent program application volumes, interview selection, rank list formation, and match success. Demographic and academic information was analyzed for 2014-2015 applicants. Applicant characteristics were compared by ranking status using univariate and multivariable statistical techniques. Characteristics independently associated with whether or not an applicant was ranked were identified using multivariable logistic regression modeling with backward stepwise variable selection and cluster-correlated robust variance estimates to account for correlations among individuals who applied to multiple programs. The Electronic Residency Application Service was used to obtain applicant data and program match outcomes at 33 independent surgery programs. All applicants selected to interview at the 33 participating independent general surgery residency programs were included in the study. Applicants were 60% male with a median age of 26 years. Birthplace was well distributed. Most applicants (73%) had ≥1 academic publication. Median United States Medical Licensing Examination (USMLE) Step 1 score was 228 (interquartile range: 218-240), and median USMLE Step 2 clinical knowledge score was 241 (interquartile range: 231-250). Residency programs in some regions more often ranked applicants who attended medical school within the same region. On multivariable analysis, significant predictors of ranking by an independent residency program were USMLE scores, medical school region, and birth region. Independent programs received an average of 764 applications (range: 307-1704). On average, programs offered interviews to 12% of applicants, and 81% of interviewed applicants were ranked. Most programs (84%) matched at least 1 applicant ranked in their top 10. Participating independent programs attract a large volume of applicants and have high standards in the selection process. This information can be used by surgery residency applicants to gauge their candidacy at independent programs. Independent programs offer a select number of interviews, rank most applicants that they interview, and successfully match competitive applicants. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  18. Knowledge and attitude about computer and internet usage among dental students in Western Rajasthan, India.

    PubMed

    Jali, Pramod K; Singh, Shamsher; Babaji, Prashant; Chaurasia, Vishwajit Rampratap; Somasundaram, P; Lau, Himani

    2014-01-01

    The internet is a useful tool for updating knowledge. The aim of the present study was to assess the current level of computer and internet knowledge among undergraduate dental students. The study consisted of a self-administered, close-ended questionnaire survey. Questionnaires were distributed to undergraduate dental students during July to September 2012, and the response rate was 100%. Most (94.4%) of the students had computer knowledge, and 77.4% had their own computer and access at home. Nearly 40.8% of students used the computer for general purposes, 28.5% for entertainment, and 22.8% for research. Most of the students had internet knowledge (92.9%) and used it independently (79.1%). Nearly 42.1% used the internet occasionally, whereas 34.4% used it regularly, 21.7% rarely, and 1.8% not at all. The internet was preferred for getting information (48.8%) due to easy accessibility and recent updates. For dental purposes, students used the internet 2-3 times/week (45.3%). Most (95.3%) of the students favored having a computer-based learning program in the curriculum. Computer knowledge was observed to be good among dental students.

  19. Design and implementation of a medium speed communications interface and protocol for a low cost, refreshed display computer

    NASA Technical Reports Server (NTRS)

    Phyne, J. R.; Nelson, M. D.

    1975-01-01

    The design and implementation of hardware and software systems involved in using a 40,000 bit/second communication line as the connecting link between an IMLAC PDS 1-D display computer and a Univac 1108 computer system were described. The IMLAC consists of two independent processors sharing a common memory. The display processor generates the deflection and beam control currents as it interprets a program contained in the memory; the minicomputer has a general instruction set and is responsible for starting and stopping the display processor and for communicating with the outside world through the keyboard, teletype, light pen, and communication line. The processing time associated with each data byte was minimized by designing the input and output processes as finite state machines which automatically sequence from each state to the next. Several tests of the communication link and the IMLAC software were made using a special low capacity computer grade cable between the IMLAC and the Univac.
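
    A small Python sketch of the finite-state-machine idea: each received byte triggers exactly one state transition, keeping per-byte processing time minimal. The frame format here (sync byte, length, payload, checksum) is hypothetical, not the actual IMLAC/Univac protocol:

        from enum import Enum, auto

        class State(Enum):
            IDLE = auto(); LENGTH = auto(); DATA = auto(); CHECKSUM = auto()

        SYNC = 0x7E   # hypothetical frame delimiter

        class Receiver:
            """Each incoming byte drives one state transition."""
            def __init__(self):
                self.state, self.buf, self.need = State.IDLE, [], 0
                self.frames = []

            def feed(self, byte):
                if self.state is State.IDLE:
                    if byte == SYNC:
                        self.state = State.LENGTH
                elif self.state is State.LENGTH:
                    self.need, self.buf = byte, []
                    self.state = State.DATA if byte else State.IDLE
                elif self.state is State.DATA:
                    self.buf.append(byte)
                    if len(self.buf) == self.need:
                        self.state = State.CHECKSUM
                elif self.state is State.CHECKSUM:
                    if byte == sum(self.buf) & 0xFF:
                        self.frames.append(bytes(self.buf))
                    self.state = State.IDLE

        rx = Receiver()
        payload = [0x10, 0x20, 0x30]
        for b in [SYNC, len(payload), *payload, sum(payload) & 0xFF]:
            rx.feed(b)
        print(rx.frames)   # the 3-byte payload, recovered intact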

  20. Streaming support for data intensive cloud-based sequence analysis.

    PubMed

    Issa, Shadi A; Kienzler, Romeo; El-Kalioby, Mohamed; Tonellato, Peter J; Wall, Dennis; Bruggmann, Rémy; Abouelhoda, Mohamed

    2013-01-01

    Cloud computing provides a promising solution to the genomics data deluge problem resulting from the advent of next-generation sequencing (NGS) technology. Based on the concepts of "resources-on-demand" and "pay-as-you-go", scientists with no or limited infrastructure can have access to scalable and cost-effective computational resources. However, the large size of NGS data causes a significant data transfer latency from the client's site to the cloud, which presents a bottleneck for using cloud computing services. In this paper, we provide a streaming-based scheme to overcome this problem, where the NGS data is processed while being transferred to the cloud. Our scheme targets the wide class of NGS data analysis tasks, where the NGS sequences can be processed independently from one another. We also provide the elastream package that supports the use of this scheme with individual analysis programs or with workflow systems. Experiments presented in this paper show that our solution mitigates the effect of data transfer latency and saves both time and cost of computation.
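
    A minimal Python sketch of the streaming idea: data are consumed and processed chunk by chunk as they arrive, so analysis overlaps the transfer instead of waiting for it. Hashing stands in for a real NGS analysis step, and the file name is hypothetical:

        import hashlib

        def read_chunks(path, chunk_size=1 << 20):
            """Yield the file in chunks so processing can start before the
            full transfer completes."""
            with open(path, "rb") as fh:
                while chunk := fh.read(chunk_size):
                    yield chunk

        def process_while_streaming(chunks):
            digest = hashlib.sha256()
            for chunk in chunks:                 # each chunk is handled on arrival;
                digest.update(chunk)             # nothing waits for the full upload
            return digest.hexdigest()

        # Usage (reads are independent, so chunks could equally be fanned out
        # to several cloud workers instead of one local consumer):
        # print(process_while_streaming(read_chunks("reads.fastq")))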

  1. MICROPROCESSOR-BASED DATA-ACQUISITION SYSTEM FOR A BOREHOLE RADAR.

    USGS Publications Warehouse

    Bradley, Jerry A.; Wright, David L.

    1987-01-01

    An efficient microprocessor-based system is described that permits real-time acquisition, stacking, and digital recording of data generated by a borehole radar system. Although the system digitizes, stacks, and records independently of a computer, it is interfaced to a desktop computer for program control over system parameters such as sampling interval, number of samples, and number of times the data are stacked prior to recording on nine-track tape, and for graphics display of the digitized data. The data can be transferred to the desktop computer during recording, or they can be played back from a tape at a later time. Using the desktop computer, the operator observes results while recording data and generates hard-copy graphics in the field. Thus, the radar operator can immediately evaluate the quality of data being obtained, modify system parameters, study the radar logs before leaving the field, and rerun borehole logs if necessary. The system has proven to be reliable in the field and has increased productivity both in the field and in the laboratory.

  2. Quality of life assessment software for computer-inexperienced older adults: multimedia utility elicitation for activities of daily living.

    PubMed Central

    Goldstein, M. K.; Miller, D. E.; Davies, S.; Garber, A. M.

    2002-01-01

    Functional status as measured by dependencies in the Activities of Daily Living (ADLs) is an important indicator of overall health for older adults. Methodologies for outcomes-based medical decision making for public policy, such as decision modeling and cost-effectiveness analysis, require utilities for outcome health states. Utilities have been reported for many disease states, but have not been indexed by functional status, which is a strong predictor of outcome in geriatrics. We describe here a utility elicitation program developed specifically for use with computer-inexperienced older adults: Functional Limitation And Independence Rating (FLAIR1). FLAIR1 design features address common physical problems of the aged and computer attitudes of inexperienced users that could impede computer acceptance. We interviewed 400 adults ages 65 years and older with FLAIR1. In exit interviews with 154 respondents, 118 (76%) found FLAIR1 easy to use. Design features in FLAIR1 can be applied to other software for older adults. PMID:12463834

  3. Modeling and minimizing interference from corneal birefringence in retinal birefringence scanning for foveal fixation detection

    PubMed Central

    Irsch, Kristina; Gramatikov, Boris; Wu, Yi-Kai; Guyton, David

    2011-01-01

    Utilizing the measured corneal birefringence from a data set of 150 eyes of 75 human subjects, an algorithm and related computer program, based on Müller-Stokes matrix calculus, were developed in MATLAB for assessing the influence of corneal birefringence on retinal birefringence scanning (RBS) and for converging upon an optical/mechanical design using wave plates (“wave-plate-enhanced RBS”) that allows foveal fixation detection essentially independently of corneal birefringence. The RBS computer model, and in particular the optimization algorithm, were verified with experimental human data using an available monocular RBS-based eye fixation monitor. Fixation detection using wave-plate-enhanced RBS is adaptable to less cooperative subjects, including young children at risk for developing amblyopia. PMID:21750772
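
    A small Python sketch of the Müller-Stokes building blocks involved: the Mueller matrix of a linear retarder (a common model for corneal birefringence) applied to a Stokes vector, and a compensating element that undoes the retardance. The numerical values are illustrative, not the paper's measured data:

        import numpy as np

        def rotator(theta):
            """Mueller matrix for rotating the polarization reference frame."""
            c, s = np.cos(2 * theta), np.sin(2 * theta)
            return np.array([[1, 0, 0, 0],
                             [0, c, s, 0],
                             [0, -s, c, 0],
                             [0, 0, 0, 1]])

        def retarder(delta, theta):
            """Linear retarder: retardance delta, fast axis at angle theta."""
            c, s = np.cos(delta), np.sin(delta)
            m = np.array([[1, 0, 0, 0],
                          [0, 1, 0, 0],
                          [0, 0, c, s],
                          [0, 0, -s, c]])
            return rotator(-theta) @ m @ rotator(theta)

        # Horizontally polarized probe light passing a cornea-like retarder:
        s_in = np.array([1.0, 1.0, 0.0, 0.0])           # Stokes vector (I, Q, U, V)
        cornea = retarder(delta=np.deg2rad(40), theta=np.deg2rad(20))
        print(cornea @ s_in)
        # A wave plate can be chosen so that its matrix undoes the corneal
        # retardance before the signal reaches the detector:
        compensator = np.linalg.inv(cornea)
        print(compensator @ cornea @ s_in)              # recovers the input state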

  4. Informatics and physics intersubject communications in the 7th and 8th grades of the basics level by means of computer modeling

    NASA Astrophysics Data System (ADS)

    Vasina, A. V.

    2017-01-01

    The author shares pedagogical experience in implementing intersubject connections among the school courses of informatics, technology, and physics through student research activity, using specialized programs for developing and studying computer models of physical processes. The technique is based on the principles of independent student work and on intersubject connections among the disciplines of technology, physics, and informatics; it helps develop students' research activity and gives education a professional and practical orientation. As an example, a lesson on modeling flotation with the "1C Physical simulator" environment is considered.

  5. [Intranet applications in radiology].

    PubMed

    Knopp, M V; von Hippel, G M; Koch, T; Knopp, M A

    2000-01-01

    The aim of this paper is to present the conceptual basis and capabilities of intranet applications in radiology. The intranet, the local counterpart of the internet, can be readily realized using existing computer components and a network. All current computer operating systems support intranet applications, which allow hardware- and software-independent communication of text, images, video, and sound through browser software, without dedicated programs on the individual personal computers. Radiological applications include text communication (e.g., department-specific bulletin boards and access to examination protocols); image communication for viewing, limited processing, and documentation of radiological images on decentralized PCs; and speech communication for dictation, distribution of dictations, and speech recognition. The intranet helps to optimize organizational efficiency and cost effectiveness in the daily work of radiological departments in outpatient and hospital settings. The general interest in internet and intranet technology will guarantee its continuous development.

  6. Development of the cardiovascular system: an interactive video computer program.

    PubMed Central

    Smolen, A. J.; Zeiset, G. E.; Beaston-Wimmer, P.

    1992-01-01

    The major aim of this project is to provide interactive video computer based courseware that can be used by the medical student and others to supplement his or her learning of this very important aspect of basic biomedical education. Embryology is a science that depends on the ability of the student to visualize dynamic changes in structure which occur in four dimensions--X, Y, Z, and time. Traditional didactic methods, including lectures employing photographic slides and laboratories employing histological sections, are limited to two dimensions--X and Y. The third spatial dimension and the dimension of time cannot be readily illustrated using these methods. Computer based learning, particularly when used in conjunction with interactive video, can be used effectively to illustrate developmental processes in all four dimensions. This methodology can also be used to foster the critical skills of independent learning and problem solving. PMID:1483013

  7. Multiscale Multifunctional Progressive Fracture of Composite Structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Minnetyan, L.

    2012-01-01

    A new approach is described for evaluating fracture in composite structures. This approach is independent of classical fracture mechanics parameters like fracture toughness. It relies on computational simulation and is programmed in a stand-alone integrated computer code. It is multiscale and multifunctional because it includes composite mechanics for the composite behavior and finite element analysis for predicting the structural response. It contains seven modules: layered composite mechanics (micro, macro, laminate), finite element, updating scheme, local fracture, global fracture, stress-based failure modes, and fracture progression. The computer code is called CODSTRAN (Composite Durability Structural ANalysis). It is used in the present paper to evaluate the global fracture of four composite shell problems and one composite built-up structure. Results show that, for the composite shells, global fracture is enhanced when internal pressure is combined with shear loads.

  8. SEAHT: A computer program for the use of intersecting arcs of altimeter data for sea surface height refinement

    NASA Technical Reports Server (NTRS)

    Allen, C. P.; Martin, C. F.

    1977-01-01

    The SEAHT program is designed to process multiple passes of altimeter data with intersecting ground tracks, estimating corrections for orbital errors for each pass such that the data have the best overall agreement at the crossover points. Orbit error for each pass is modeled as a polynomial in time, with optional orders of 0, 1, or 2. One or more passes may be constrained in the adjustment process, thus allowing the passes with the best orbits to provide the overall level and orientation of the estimated sea surface heights. Intersections which disagree by more than an input edit level are not used in the error parameter estimation. In the program implementation, passes are grouped into South-North passes and North-South passes, with the North-South passes partitioned out for the estimation of orbit error parameters. Computer core utilization is thus dependent on the number of parameters estimated for the set of South-North arcs, but is independent of the number of North-South passes. Estimated corrections for each pass are applied to the data at its input data rate, and an output tape is written which contains the corrected data.
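    A hedged sketch of the kind of crossover adjustment described: each pass gets a polynomial orbit-error model, coefficients are estimated by least squares from crossover height disagreements, and constrained passes are held fixed. All function and variable names are invented; SEAHT's actual partitioning of South-North and North-South passes is not reproduced.

    ```python
    # Minimal crossover adjustment: choose per-pass polynomial coefficients
    # c_{k,j} so that e_a(t_a) - e_b(t_b) best matches the measured height
    # disagreement dh at each crossover, in the least-squares sense.
    import numpy as np

    def crossover_adjust(crossovers, n_passes, order=1, fixed=(0,)):
        """crossovers: list of (pass_a, t_a, pass_b, t_b, dh), where
        dh = h_a - h_b at the crossover. Passes in `fixed` are constrained
        (their error model is forced to zero)."""
        ncoef = order + 1
        free = [k for k in range(n_passes) if k not in fixed]
        col = {k: i * ncoef for i, k in enumerate(free)}  # column offset per pass
        A = np.zeros((len(crossovers), len(free) * ncoef))
        rhs = np.empty(len(crossovers))
        for row, (ka, ta, kb, tb, dh) in enumerate(crossovers):
            for j in range(ncoef):
                if ka in col: A[row, col[ka] + j] += ta ** j
                if kb in col: A[row, col[kb] + j] -= tb ** j
            rhs[row] = dh
        coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return {k: coef[col[k]:col[k] + ncoef] for k in free}
    ```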

  9. Virtual Satellite

    NASA Technical Reports Server (NTRS)

    Hammrs, Stephan R.

    2008-01-01

    Virtual Satellite (VirtualSat) is a computer program that creates an environment that facilitates the development, verification, and validation of flight software for a single spacecraft or for multiple spacecraft flying in formation. In this environment, enhanced functionality and autonomy of the navigation, guidance, and control systems of a spacecraft are provided by a virtual satellite, that is, a computational model that simulates the dynamic behavior of the spacecraft. Within this environment, it is possible to execute any associated software, the development of which could benefit from knowledge of, and possible interaction (typically, exchange of data) with, the virtual satellite. Examples of associated software include programs for simulating spacecraft power and thermal-management systems. This environment is independent of the flight hardware that will eventually host the flight software, making it possible to develop the software simultaneously with, or even before, the hardware is delivered. Optionally, by use of interfaces included in VirtualSat, real hardware can be used in place of its simulated counterpart. The flight software, coded in the C or C++ programming language, is compilable and loadable into VirtualSat without any special modifications. Thus, VirtualSat can serve as a relatively inexpensive software test-bed for the development, testing, integration, and post-launch maintenance of spacecraft flight software.

  10. Faster computation of exact RNA shape probabilities.

    PubMed

    Janssen, Stefan; Giegerich, Robert

    2010-03-01

    Abstract shape analysis allows efficient computation of a representative sample of low-energy foldings of an RNA molecule. More comprehensive information is obtained by computing shape probabilities, accumulating the Boltzmann probabilities of all structures within each abstract shape. Such information is superior to free energies because it is independent of sequence length and base composition. However, up to this point, computation of shape probabilities evaluates all shapes simultaneously and comes with a computation cost which is exponential in the length of the sequence. We devise an approach called RapidShapes that computes the shapes above a specified probability threshold T by generating a list of promising shapes and constructing specialized folding programs for each shape to compute its share of Boltzmann probability. This aims at a heuristic improvement of runtime, while still computing exact probability values. Evaluating this approach and several substrategies, we find that only a small proportion of shapes have to be actually computed. For an RNA sequence of length 400, this leads, depending on the threshold, to a 10- to 138-fold speed-up compared with the previous complete method. Thus, probabilistic shape analysis has become feasible in medium-scale applications, such as the screening of RNA transcripts in a bacterial genome. RapidShapes is available via http://bibiserv.cebitec.uni-bielefeld.de/rnashapes

  11. Implementation of a fully-balanced periodic tridiagonal solver on a parallel distributed memory architecture

    NASA Technical Reports Server (NTRS)

    Eidson, T. M.; Erlebacher, G.

    1994-01-01

    While parallel computers offer significant computational performance, it is generally necessary to evaluate several programming strategies. Two programming strategies for a fairly common problem - a periodic tridiagonal solver - are developed and evaluated. Simple model calculations as well as timing results are presented to evaluate the various strategies. The particular tridiagonal solver evaluated is used in many computational fluid dynamic simulation codes. The feature that makes this algorithm unique is that these simulation codes usually require simultaneous solutions for multiple right-hand sides (RHS) of the system of equations. Each RHS solution is independent and can therefore be computed in parallel. Thus a Gaussian elimination type algorithm can be used in a parallel computation, and the more complicated approaches such as cyclic reduction are not required. The two strategies are a transpose strategy and a distributed solver strategy. For the transpose strategy, the data is moved so that a subset of all the RHS problems is solved on each of the several processors. This usually requires significant data movement between processor memories across a network. The second strategy passes the data across processor boundaries in a chained manner. This usually requires significantly less data movement. An approach to accomplish this second strategy in a near-perfect load-balanced manner is developed. In addition, an algorithm is shown to directly transform a sequential Gaussian elimination type algorithm into the parallel, chained, load-balanced algorithm.
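    As a concrete reference point for the serial kernel shared by both strategies, here is a minimal Thomas-algorithm solver applied to many independent right-hand sides at once (a Python sketch, not the original code). Distributing the RHS columns over processors corresponds to the transpose strategy; note this is the plain non-periodic kernel, and the periodic case would add a correction step (e.g., Sherman-Morrison) not shown here.

    ```python
    # Thomas algorithm for T X = B with many independent RHS columns.
    import numpy as np

    def thomas_multi_rhs(a, b, c, B):
        """a: sub-diagonal (n-1), b: main diagonal (n), c: super-diagonal
        (n-1); B: (n, nrhs) array, one column per independent RHS."""
        a = np.asarray(a, float); b = np.asarray(b, float); c = np.asarray(c, float)
        n = b.size
        bp = b.copy()
        Bp = np.array(B, dtype=float)
        for i in range(1, n):                  # forward elimination
            m = a[i - 1] / bp[i - 1]
            bp[i] = b[i] - m * c[i - 1]
            Bp[i] -= m * Bp[i - 1]
        X = np.empty_like(Bp)                  # back substitution
        X[-1] = Bp[-1] / bp[-1]
        for i in range(n - 2, -1, -1):
            X[i] = (Bp[i] - c[i] * X[i + 1]) / bp[i]
        return X
    ```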

  12. ASKI: A modular toolbox for scattering-integral-based seismic full waveform inversion and sensitivity analysis utilizing external forward codes

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang

    Due to increasing computational resources, the development of new numerically demanding methods and software for imaging Earth's interior remains of high interest in Earth sciences. Here, we describe, from a user's and a programmer's perspective, the highly modular, flexible and extendable software package ASKI (Analysis of Sensitivity and Kernel Inversion), recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are handled by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low cost, applying different kinds of model regularization or re-selecting/weighting the inverted dataset, without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows the user to compose customized workflows in a consistent computational environment. ASKI is written in modern Fortran and Python; it is well documented and freely available under the terms of the GNU General Public License (http://www.rub.de/aski).

  13. Integrated digital flight-control system for the space shuttle orbiter

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The integrated digital flight control system is presented which provides rotational and translational control of the space shuttle orbiter in all phases of flight: from launch ascent through orbit to entry and touchdown, and during powered horizontal flights. The program provides a versatile control system structure while maintaining uniform communications with other programs, sensors, and control effectors by using an executive routine/functional subroutine format. The program reads all external variables at a single point, copies them into its dedicated storage, and then calls the required subroutines in the proper sequence. As a result, the flight control program is largely independent of other programs in the GN&C computer complex and is equally insensitive to the characteristics of the processor configuration. The integrated structure of the control system and the DFCS executive routine which embodies that structure are described along with the input and output. The specific estimation and control algorithms used in the various mission phases are given.
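    A toy rendering of the executive-routine/functional-subroutine pattern described above: inputs are read once per cycle into dedicated storage, then the subroutines for the current mission phase run in a fixed sequence. All names, the sensor interface, and the control law are invented for illustration only.

    ```python
    # Executive loop sketch: single input point, dedicated storage, fixed
    # per-phase subroutine sequence. Not the actual DFCS structure.
    SENSORS = {"attitude": 0.05, "rate": 0.0, "accel": 0.0}  # stand-in interface

    def read_inputs(store):
        store.update(SENSORS)      # read all external variables at one point

    def estimate(store):           # placeholder estimation subroutine
        store["att_est"] = store["attitude"]

    def control(store):            # placeholder control subroutine
        store["cmd"] = -2.0 * store.get("att_est", 0.0)

    PHASES = {"ascent": (estimate, control), "entry": (estimate, control)}

    def executive(phase, store):
        read_inputs(store)
        for subroutine in PHASES[phase]:   # proper sequence for this phase
            subroutine(store)

    state = {}
    executive("ascent", state)
    print(state["cmd"])
    ```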

  14. Separation of left and right lungs using 3D information of sequential CT images and a guided dynamic programming algorithm

    PubMed Central

    Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin

    2011-01-01

    Objective: This article presents a new computerized scheme that aims to accurately and robustly separate the left and right lungs on CT examinations. Methods: We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm with adaptively and automatically selected start and end points, targeting especially severe and multiple connections. Results: The scheme successfully identified and separated all 827 connections on the 4034 CT images in an independent testing dataset of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% of that of traditional dynamic programming while avoiding permeation of the separation boundary into normal lung tissue. Conclusions: The proposed method robustly and accurately disconnects all connections between the left and right lungs, and the guided dynamic programming algorithm removes redundant processing. PMID:21412104
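    A generic sketch of the dynamic-programming kernel that such a separation scheme restricts: a minimum-cost top-to-bottom path through a cost image between given start and end columns. The guided selection of those points and the use of sequential CT information are the paper's contributions and are simply assumed given here; the cost image would encode low values along the junction between the lungs.

    ```python
    # Minimum-cost vertical path by dynamic programming; the path moves at
    # most one column per row, and the end point must be reachable.
    import numpy as np

    def dp_separation_path(cost, start_col, end_col):
        H, W = cost.shape
        acc = np.full((H, W), np.inf)
        acc[0, start_col] = cost[0, start_col]
        for r in range(1, H):                      # accumulate costs row by row
            for cix in range(W):
                lo, hi = max(cix - 1, 0), min(cix + 2, W)
                acc[r, cix] = cost[r, cix] + acc[r - 1, lo:hi].min()
        path = [end_col]                           # backtrack from the end point
        for r in range(H - 1, 0, -1):
            cix = path[-1]
            lo, hi = max(cix - 1, 0), min(cix + 2, W)
            path.append(lo + int(np.argmin(acc[r - 1, lo:hi])))
        return path[::-1]
    ```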

  15. Computer image analysis in obtaining characteristics of images: greenhouse tomatoes in the process of generating learning sets of artificial neural networks

    NASA Astrophysics Data System (ADS)

    Zaborowicz, M.; Przybył, J.; Koszela, K.; Boniecki, P.; Mueller, W.; Raba, B.; Lewicki, A.; Przybył, K.

    2014-04-01

    The aim of the project was to develop software that extracts the characteristics of a greenhouse tomato from its image. Data gathered during image analysis and processing were used to build learning sets for artificial neural networks. The program processes pictures in JPEG format, acquires statistical information about each picture, and exports it to an external file. The software is intended for batch analysis of the collected research material, with the obtained information saved as a CSV file. It analyzes 33 independent parameters that implicitly describe the tested image. The application is dedicated to the processing and image analysis of greenhouse tomatoes, but it can also be used to analyze other fruits and vegetables of spherical shape.
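    A minimal Python sketch of the described pipeline: load a JPEG, compute a few per-channel statistics, and write one CSV row per image. The statistics shown are illustrative stand-ins, not the 33 parameters used by the original program, and the Pillow imaging library is assumed to be available.

    ```python
    # Batch image statistics to CSV, as a stand-in for the described program.
    import csv
    import numpy as np
    from PIL import Image

    def describe_image(path):
        rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
        stats = {"file": path}
        for i, ch in enumerate("RGB"):
            stats[f"mean_{ch}"] = rgb[..., i].mean()   # per-channel mean
            stats[f"std_{ch}"] = rgb[..., i].std()     # per-channel spread
        return stats

    def batch_to_csv(paths, out="tomatoes.csv"):
        rows = [describe_image(p) for p in paths]
        with open(out, "w", newline="") as f:
            w = csv.DictWriter(f, fieldnames=rows[0].keys())
            w.writeheader()
            w.writerows(rows)
    ```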

  16. MHOST: An efficient finite element program for inelastic analysis of solids and structures

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.

    1988-01-01

    An efficient finite element program for 3-D inelastic analysis of gas turbine hot section components was constructed and validated. A novel mixed iterative solution strategy is derived from the augmented Hu-Washizu variational principle in order to nodally interpolate coordinates, displacements, deformation, strains, stresses and material properties. A series of increasingly sophisticated material models incorporated in MHOST include elasticity, secant plasticity, infinitesimal and finite deformation plasticity, creep and unified viscoplastic constitutive model proposed by Walker. A library of high performance elements is built into this computer program utilizing the concepts of selective reduced integrations and independent strain interpolations. A family of efficient solution algorithms is implemented in MHOST for linear and nonlinear equation solution including the classical Newton-Raphson, modified, quasi and secant Newton methods with optional line search and the conjugate gradient method.
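    For illustration, here is a generic Newton-Raphson iteration with an optional backtracking line search, one of the classical schemes named above. This is a sketch of the technique, not MHOST's actual implementation; the residual/Jacobian interface is invented.

    ```python
    # Newton-Raphson with backtracking line search on a residual r(u) = 0.
    import numpy as np

    def newton(residual, jacobian, u0, tol=1e-10, max_iter=50, line_search=True):
        u = np.asarray(u0, dtype=float)
        for _ in range(max_iter):
            r = residual(u)
            if np.linalg.norm(r) < tol:
                break
            du = np.linalg.solve(jacobian(u), -r)   # full Newton step
            step = 1.0
            if line_search:                         # halve until residual drops
                while step > 1e-4 and np.linalg.norm(
                        residual(u + step * du)) >= np.linalg.norm(r):
                    step *= 0.5
            u = u + step * du
        return u

    # usage: solve x^2 = 2 as a one-unknown "structure"
    print(newton(lambda u: u**2 - 2.0,
                 lambda u: np.array([[2.0 * u[0]]]), [1.0]))
    ```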

  17. 75 FR 13521 - Centers for Independent Living Program-Training and Technical Assistance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-22

    ... DEPARTMENT OF EDUCATION Centers for Independent Living Program--Training and Technical Assistance... for Independent Living Program--Training and Technical Assistance (CIL-TA program). The Assistant... appropriated for the CIL program to provide training and technical assistance to CILs, agencies eligible to...

  18. Naïve and Robust: Class-Conditional Independence in Human Classification Learning

    ERIC Educational Resources Information Center

    Jarecki, Jana B.; Meder, Björn; Nelson, Jonathan D.

    2018-01-01

    Humans excel in categorization. Yet from a computational standpoint, learning a novel probabilistic classification task involves severe computational challenges. The present paper investigates one way to address these challenges: assuming class-conditional independence of features. This feature independence assumption simplifies the inference…

  19. A depth-first search algorithm to compute elementary flux modes by linear programming

    PubMed Central

    2014-01-01

    Background: The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is near impossible. Even for moderately-sized models (<400 reactions), existing approaches based on the Double Description method must iterate through a large number of combinatorial candidates, thus imposing an immense processor and memory demand. Results: Based on an alternative elementarity test, we developed a depth-first search algorithm using linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. Conclusions: The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints. PMID:25074068
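    A hedged sketch of the LP feasibility test at the heart of such a depth-first search: given a stoichiometric matrix N, check whether a steady-state flux vector exists under irreversibility and "reaction off" constraints while one reaction is forced active. The elementarity bookkeeping and search tree of the actual algorithm are omitted; function names are invented.

    ```python
    # Flux feasibility test via LP: N v = 0, v >= 0, some reactions off,
    # one reaction forced on. Status 0 from linprog means feasible.
    import numpy as np
    from scipy.optimize import linprog

    def flux_feasible(N, off, on, v_max=1000.0):
        n = N.shape[1]
        bounds = [(0.0, 0.0) if i in off else (0.0, v_max) for i in range(n)]
        bounds[on] = (1.0, v_max)             # force the chosen reaction active
        res = linprog(np.zeros(n),            # zero objective: pure feasibility
                      A_eq=N, b_eq=np.zeros(N.shape[0]),
                      bounds=bounds, method="highs")
        return res.status == 0
    ```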

  20. MK3TOOLS & NetCDF - storing VLBI data in a machine independent array oriented data format

    NASA Astrophysics Data System (ADS)

    Hobiger, T.; Koyama, Y.; Kondo, T.

    2007-07-01

    At the beginning of 2002, the International VLBI Service (IVS) agreed to introduce a Platform-independent VLBI exchange format (PIVEX), which would permit the exchange of observational data and stimulate research across different analysis groups. Unfortunately, PIVEX has never been implemented, and many analysis software packages still depend on prior processing (e.g., ambiguity resolution and computation of ionosphere corrections) done by CALC/SOLVE. Thus MK3TOOLS, which handles MK3 databases without CALC/SOLVE being installed, has been developed. It uses the NetCDF format to store the data, and since interfaces exist for a variety of programming languages (FORTRAN, C/C++, JAVA, Perl, Python), it can be easily incorporated into existing and upcoming analysis software packages.
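    To make the storage idea concrete, here is a small Python sketch that writes an observable into a NetCDF file and reads it back; any language with a NetCDF interface could do the same. The variable name and layout are illustrative, not the actual MK3TOOLS schema.

    ```python
    # Write and re-read a machine-independent NetCDF file with netCDF4.
    from netCDF4 import Dataset
    import numpy as np

    with Dataset("session.nc", "w") as nc:
        nc.createDimension("obs", 100)
        delay = nc.createVariable("group_delay", "f8", ("obs",))
        delay.units = "seconds"
        delay[:] = np.random.normal(0.0, 1e-9, 100)   # placeholder data

    with Dataset("session.nc") as nc:   # any NetCDF-aware tool can do this
        print(nc["group_delay"].units, nc["group_delay"][:5])
    ```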

  1. SIGPI. Fault Tree Cut Set System Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patenaude, C.J.

    1992-01-13

    SIGPI computes the probabilistic performance of complex systems by combining cut set or other binary product data with probability information on each basic event. SIGPI is designed to work with either coherent systems, where the system fails when certain combinations of components fail, or noncoherent systems, where at least one cut set occurs only if at least one component of the system is operating properly. The program can handle conditionally independent components, dependent components, or a combination of component types and has been used to evaluate responses to environmental threats and seismic events. The three data types that can be input are cut set data in disjoint normal form, basic component probabilities for independent basic components, and mean and covariance data for statistically dependent basic components.

  2. SIGPI. Fault Tree Cut Set System Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patenaude, C.J.

    1992-01-14

    SIGPI computes the probabilistic performance of complex systems by combining cut set or other binary product data with probability information on each basic event. SIGPI is designed to work with either coherent systems, where the system fails when certain combinations of components fail, or noncoherent systems, where at least one cut set occurs only if at least one component of the system is operating properly. The program can handle conditionally independent components, dependent components, or a combination of component types and has been used to evaluate responses to environmental threats and seismic events. The three data types that can be input are cut set data in disjoint normal form, basic component probabilities for independent basic components, and mean and covariance data for statistically dependent basic components.
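    A minimal sketch of the simplest case of the evaluation SIGPI performs: independent basic events with cut sets already in disjoint normal form, so the system probability is a plain sum of products. Dependent components and covariance input, which SIGPI also handles, are not covered here.

    ```python
    # Sum-of-products evaluation for disjoint cut sets of independent events.
    from math import prod

    def system_probability(disjoint_cut_sets, p):
        """disjoint_cut_sets: iterable of component-index tuples, mutually
        exclusive by construction; p: dict of component failure probabilities."""
        return sum(prod(p[c] for c in cs) for cs in disjoint_cut_sets)

    # Example: two disjoint cut sets {1, 2} and {3}
    print(system_probability([(1, 2), (3,)], {1: 0.1, 2: 0.2, 3: 0.05}))
    ```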

  3. Restructuring VA ambulatory care and medical education: the PACE model of primary care.

    PubMed

    Cope, D W; Sherman, S; Robbins, A S

    1996-07-01

    The Veterans Health Administration (VHA) Western Region and associated medical schools formulated a set of recommendations for an improved ambulatory health care delivery system during a 1988 strategic planning conference. As a result, the Department of Veterans Affairs (VA) Medical Center in Sepulveda, California, initiated the Pilot (now Primary) Ambulatory Care and Education (PACE) program in 1990 to implement and evaluate a model program. The PACE program represents a significant departure from traditional VA and non-VA academic medical center care, shifting the focus of care from the inpatient to the outpatient setting. From its inception, the PACE program has used an interdisciplinary team approach with three independent global care firms. Each firm is interdisciplinary in composition, with a matrix management structure that expands role function and empowers team members. Emphasis is on managed primary care, stressing a biopsychosocial approach and cost-effective comprehensive care emphasizing prevention and health maintenance. Information management is provided through a network of personal computers that serve as a front end to the VHA Decentralized Hospital Computer Program (DHCP) mainframe. In addition to providing comprehensive and cost-effective care, the PACE program educates trainees in all health care disciplines, conducts research, and disseminates information about important procedures and outcomes. Undergraduate and graduate trainees from 11 health care disciplines rotate through the PACE program to learn an integrated approach to managed ambulatory care delivery. All trainees are involved in a problem-based approach to learning that emphasizes shared training experiences among health care disciplines. This paper describes the transitional phases of the PACE program (strategic planning, reorganization, and quality improvement) that are relevant for other institutions that are shifting to training programs emphasizing primary and ambulatory care.

  4. A Parallel Compact Multi-Dimensional Numerical Algorithm with Aeroacoustics Applications

    NASA Technical Reports Server (NTRS)

    Povitsky, Alex; Morris, Philip J.

    1999-01-01

    In this study we propose a novel method to parallelize high-order compact numerical algorithms for the solution of three-dimensional PDEs (Partial Differential Equations) in a space-time domain. For this numerical integration most of the computer time is spent in computation of spatial derivatives at each stage of the Runge-Kutta temporal update. The most efficient direct method to compute spatial derivatives on a serial computer is a version of Gaussian elimination for narrow linear banded systems known as the Thomas algorithm. In a straightforward pipelined implementation of the Thomas algorithm, processors are idle due to the forward and backward recurrences of the algorithm. To utilize processors during this time, we propose to use them either for non-local, data-independent computations, solving lines in the next spatial direction, or for local, data-dependent computations by the Runge-Kutta method. To achieve this goal, control of processor communication and computations by a static schedule is adopted. Thus, our parallel code is driven by a communication and computation schedule instead of the usual "creative programming" approach. The obtained parallelization speed-up of the novel algorithm is about twice as much as that for the standard pipelined algorithm and close to that for the explicit DRP algorithm.

  5. [Acoustic voice analysis using the Praat program: comparative study with the Dr. Speech program].

    PubMed

    Núñez Batalla, Faustino; González Márquez, Rocío; Peláez González, M Belén; González Laborda, Irene; Fernández Fernández, María; Morato Galán, Marta

    2014-01-01

    The European Laryngological Society (ELS) basic protocol for functional assessment of voice pathology includes 5 different approaches: perception, videostroboscopy, acoustics, aerodynamics and subjective rating by the patient. In this study we focused on acoustic voice analysis. The purpose of the present study was to correlate the results obtained by the commercial software Dr. Speech and the free software Praat in 2 fields: 1. Narrow-band spectrogram (the presence of noise according to Yanagihara, and the presence of subharmonics) (semi-quantitative). 2. Voice acoustic parameters (jitter, shimmer, harmonics-to-noise ratio, fundamental frequency) (quantitative). We studied a total of 99 voice samples from individuals with Reinke's oedema diagnosed using videostroboscopy. One independent observer used Dr. Speech 3.0 and a second one used the Praat program (Phonetic Sciences, University of Amsterdam). The spectrographic analysis consisted of obtaining a narrow-band spectrogram from the previous digitalised voice samples by the 2 independent observers. They then determined the presence of noise in the spectrogram, using the Yanagihara grades, as well as the presence of subharmonics. As a final result, the acoustic parameters of jitter, shimmer, harmonics-to-noise ratio and fundamental frequency were obtained from the 2 acoustic analysis programs. The results indicated that the sound spectrogram and the numerical values obtained for shimmer and jitter were similar for both computer programs, even though types 1, 2 and 3 voice samples were analysed. The Praat and Dr. Speech programs provide similar results in the acoustic analysis of pathological voices.
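    For reference, the textbook "local" definitions of two of the quantitative measures compared in the study, computed from sequences of glottal period lengths and peak amplitudes. Praat and Dr. Speech each use their own variants and extraction pipelines, so values from this sketch need not match either program exactly.

    ```python
    # Local jitter and shimmer from period and amplitude sequences.
    import numpy as np

    def local_jitter(periods):
        """Mean absolute difference of consecutive periods over the mean period."""
        periods = np.asarray(periods, dtype=float)
        return np.abs(np.diff(periods)).mean() / periods.mean()

    def local_shimmer(amplitudes):
        """Same definition applied to consecutive peak amplitudes."""
        amplitudes = np.asarray(amplitudes, dtype=float)
        return np.abs(np.diff(amplitudes)).mean() / amplitudes.mean()
    ```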

  6. Automatic computation of the travelling wave solutions to nonlinear PDEs

    NASA Astrophysics Data System (ADS)

    Liang, Songxin; Jeffrey, David J.

    2008-05-01

    Various extensions of the tanh-function method and their implementations for finding explicit travelling wave solutions to nonlinear partial differential equations (PDEs) have been reported in the literature. However, some solutions are often missed by these packages. In this paper, a new algorithm and its implementation called TWS for solving single nonlinear PDEs are presented. TWS is implemented in MAPLE 10. It turns out that, for PDEs whose balancing numbers are not positive integers, TWS works much better than existing packages. Furthermore, TWS obtains more solutions than existing packages for most cases.

    Program summary:
    Program title: TWS
    Catalogue identifier: AEAM_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAM_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 1250
    No. of bytes in distributed program, including test data, etc.: 78 101
    Distribution format: tar.gz
    Programming language: Maple 10
    Computer: A laptop with 1.6 GHz Pentium CPU
    Operating system: Windows XP Professional
    RAM: 760 Mbytes
    Classification: 5
    Nature of problem: Finding the travelling wave solutions to single nonlinear PDEs.
    Solution method: Based on the tanh-function method.
    Restrictions: The current version of this package can only deal with single autonomous PDEs or ODEs, not systems of PDEs or ODEs. However, the PDEs can have any finite number of independent space variables in addition to time t.
    Unusual features: For PDEs whose balancing numbers are not positive integers, TWS works much better than existing packages. Furthermore, TWS obtains more solutions than existing packages for most cases.
    Additional comments: It is easy to use.
    Running time: Less than 20 seconds for most cases, between 20 and 100 seconds for some cases, over 100 seconds for a few cases.
    References:
    [1] E.S. Cheb-Terrab, K. von Bulow, Comput. Phys. Comm. 90 (1995) 102.
    [2] S.A. Elwakil, S.K. El-Labany, M.A. Zahran, R. Sabry, Phys. Lett. A 299 (2002) 179.
    [3] E. Fan, Phys. Lett. 277 (2000) 212.
    [4] W. Malfliet, Amer. J. Phys. 60 (1992) 650.
    [5] W. Malfliet, W. Hereman, Phys. Scripta 54 (1996) 563.
    [6] E.J. Parkes, B.R. Duffy, Comput. Phys. Comm. 98 (1996) 288.
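    Not the TWS package itself, but a small sympy illustration of the tanh-function idea it automates: substitute a tanh travelling-wave ansatz into a PDE and check that the residual vanishes, here for Burgers' equation with a matched wave speed.

    ```python
    # Verify a tanh travelling-wave solution of Burgers' equation
    # u_t + u u_x = nu u_xx by symbolic substitution.
    import sympy as sp

    x, t, c, nu = sp.symbols("x t c nu", positive=True)
    xi = x - c * t                              # travelling-wave coordinate
    u = c * (1 - sp.tanh(c * xi / (2 * nu)))    # tanh ansatz, speed c

    residual = sp.diff(u, t) + u * sp.diff(u, x) - nu * sp.diff(u, x, 2)
    print(sp.simplify(residual))                # -> 0
    ```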

  7. Automating tasks in protein structure determination with the clipper python module

    PubMed Central

    McNicholas, Stuart; Croll, Tristan; Burnley, Tom; Palmer, Colin M.; Hoh, Soon Wen; Jenkins, Huw T.; Dodson, Eleanor

    2017-01-01

    Scripting programming languages provide the fastest means of prototyping complex functionality. Those with a syntax and grammar resembling human language also greatly enhance the maintainability of the produced source code. Furthermore, the combination of a powerful, machine-independent scripting language with binary libraries tailored for each computer architecture allows programs to break free from the tight boundaries of efficiency traditionally associated with scripts. In the present work, we describe how an efficient C++ crystallographic library such as Clipper can be wrapped, adapted and generalized for use in both crystallographic and electron cryo-microscopy applications, scripted with the Python language. We shall also place an emphasis on best practices in automation, illustrating how this can be achieved with this new Python module. PMID:28901669

  8. Helping people in a minimally conscious state develop responding and stimulation control through a microswitch-aided program.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; D'Amico, Fiora; Buonocunto, Francesca; Navarro, Jorge; Lanzilotti, Crocifissa; Fiore, Pietro; Megna, Marisa; Damiani, Sabino; Marvulli, Riccardo

    2017-06-01

    Postcoma persons in a minimally conscious state (MCS) and with extensive motor impairment cannot independently access and control environmental stimulation. Assessing the effects of a microswitch-aided program aimed at helping MCS persons develop responding and stimulation control and conducting a social validation/evaluation of the program. A single-subject ABAB design was used for each participant to determine the impact of the program on his or her responding. Staff interviews were used for the social validation/evaluation of the program. Rehabilitation and care facilities that the participants attended. Eleven MCS persons with extensive motor impairment and lack of speech or any other functional communication. For each participant, baseline (A) phases were alternated with intervention (B) phases during which the program was used. The program relied on microswitches to monitor participants' specific responses (e.g., prolonged eyelid closures) and on a computer system to enable those responses to control stimulation. In practice, the participants could use a simple response such as prolonged eyelid closure to generate a new stimulation input. Sixty-six staff people took part in the social validation of the program. They were to compare the program to basic and elaborate forms of externally controlled stimulation, scoring each of them on a six-item questionnaire. All participants showed increased response frequencies (and thus higher levels of independent stimulation input/control) during the B phases of the study. Their frequencies for each intervention phase more than doubled their frequencies for the preceding baseline phase with the difference between the two being clearly significant (P<0.01). Staff involved in the social validation procedure provided significantly higher scoring (P<0.01) for the program on five of the six questionnaire items. A microswitch-aided program can be an effective and socially acceptable tool in the work with MCS persons. The participants and staff's data can be taken as an encouragement for the use of a microswitch-aided program within care and rehabilitation settings for MCS persons.

  9. Acute imaging does not improve ASTRAL score's accuracy despite having a prognostic value.

    PubMed

    Ntaios, George; Papavasileiou, Vasileios; Faouzi, Mohamed; Vanacker, Peter; Wintermark, Max; Michel, Patrik

    2014-10-01

    The ASTRAL score was recently shown to reliably predict three-month functional outcome in patients with acute ischemic stroke. The study aims to investigate whether information from multimodal imaging increases the ASTRAL score's accuracy. All patients registered in the ASTRAL registry until March 2011 were included. In multivariate logistic-regression analyses, we added covariates derived from parenchymal, vascular, and perfusion imaging to the 6-parameter model of the ASTRAL score. If a specific imaging covariate remained an independent predictor of three-month modified Rankin score >2, the area-under-the-curve (AUC) of this new model was calculated and compared with the ASTRAL score's AUC. We also performed similar logistic regression analyses in arbitrarily chosen patient subgroups. When added to the ASTRAL score, the following covariates on admission computed tomography/magnetic resonance imaging-based multimodal imaging were not significant predictors of outcome: any stroke-related acute lesion, any non-stroke-related lesions, chronic/subacute stroke, leukoaraiosis, significant arterial pathology in the ischemic territory on computed tomography angiography/magnetic resonance angiography/Doppler, significant intracranial arterial pathology in the ischemic territory, and focal hypoperfusion on perfusion computed tomography. The Alberta Stroke Program Early CT score on plain imaging and any significant extracranial arterial pathology on computed tomography angiography/magnetic resonance angiography/Doppler were independent predictors of outcome (odds ratio: 0.93, 95% CI: 0.87-0.99 and odds ratio: 1.49, 95% CI: 1.08-2.05, respectively) but did not increase the ASTRAL score's AUC (0.849 vs. 0.850 and 0.8563 vs. 0.8564, respectively). In exploratory analyses in subgroups of different prognosis, age, or stroke severity, no covariate was found to increase the ASTRAL score's AUC either. The addition of information derived from multimodal imaging does not increase the ASTRAL score's accuracy in predicting functional outcome, despite having independent prognostic value. More selected radiological parameters applied in specific subgroups of stroke patients may add to the prognostic value of multimodal imaging.

  10. OPTIMAL AIRCRAFT TRAJECTORIES FOR SPECIFIED RANGE

    NASA Technical Reports Server (NTRS)

    Lee, H.

    1994-01-01

    For an aircraft operating over a fixed range, the operating costs are basically a sum of fuel cost and time cost. While minimum fuel and minimum time trajectories are relatively easy to calculate, the determination of a minimum cost trajectory can be a complex undertaking. This computer program was developed to optimize trajectories with respect to a cost function based on a weighted sum of fuel cost and time cost. As a research tool, the program could be used to study various characteristics of optimum trajectories and their comparison to standard trajectories. It might also be used to generate a model for the development of an airborne trajectory optimization system. The program could be incorporated into an airline flight planning system, with optimum flight plans determined at takeoff time for the prevailing flight conditions. The use of trajectory optimization could significantly reduce the cost for a given aircraft mission. The algorithm incorporated in the program assumes that a trajectory consists of climb, cruise, and descent segments. The optimization of each segment is not done independently, as in classical procedures, but is performed in a manner which accounts for interaction between the segments. This is accomplished by the application of optimal control theory. The climb and descent profiles are generated by integrating a set of kinematic and dynamic equations, where the total energy of the aircraft is the independent variable. At each energy level of the climb and descent profiles, the air speed and power setting necessary for an optimal trajectory are determined. The variational Hamiltonian of the problem consists of the rate of change of cost with respect to total energy and a term dependent on the adjoint variable, which is identical to the optimum cruise cost at a specified altitude. This variable uniquely specifies the optimal cruise energy, cruise altitude, cruise Mach number, and, indirectly, the climb and descent profiles. If the optimum cruise cost is specified, an optimum trajectory can easily be generated; however, the range obtained for a particular optimum cruise cost is not known a priori. For short range flights, the program iteratively varies the optimum cruise cost until the computed range converges to the specified range. For long-range flights, iteration is unnecessary since the specified range can be divided into a cruise segment distance and full climb and descent distances. The user must supply the program with engine fuel flow rate coefficients and an aircraft aerodynamic model. The program currently includes coefficients for the Pratt-Whitney JT8D-7 engine and an aerodynamic model for the Boeing 727. Input to the program consists of the flight range to be covered and the prevailing flight conditions including pressure, temperature, and wind profiles. Information output by the program includes: optimum cruise tables at selected weights, optimal cruise quantities as a function of cruise weight and cruise distance, climb and descent profiles, and a summary of the complete synthesized optimal trajectory. This program is written in FORTRAN IV for batch execution and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 100K (octal) of 60 bit words. This aircraft trajectory optimization program was developed in 1979.
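    A toy version of the cost trade-off the program optimizes: direct operating cost as fuel cost plus time cost over a fixed range, evaluated across candidate cruise speeds. The fuel-flow model and cost rates below are invented for illustration and bear no relation to the JT8D-7 coefficients or the optimal-control formulation in the actual program.

    ```python
    # Weighted fuel-plus-time cost over a fixed-range cruise segment.
    import numpy as np

    FUEL_PRICE = 0.80   # $/kg, assumed
    TIME_COST = 25.0    # $/min, assumed crew and maintenance cost rate

    def segment_cost(distance_km, speed_kmh):
        hours = distance_km / speed_kmh
        fuel_flow = 2400.0 * (speed_kmh / 850.0) ** 3   # kg/h, toy cubic model
        return FUEL_PRICE * fuel_flow * hours + TIME_COST * 60.0 * hours

    speeds = np.linspace(700, 950, 26)
    costs = [segment_cost(3000.0, v) for v in speeds]
    print("best cruise speed: %.0f km/h" % speeds[int(np.argmin(costs))])
    ```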

  11. 10 CFR 431.176 - Voluntary Independent Certification Programs.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Water Heating Products § 431.176 Voluntary Independent Certification Programs. (a) The Department will approve a Voluntary Independent Certification Program (VICP) for a commercial HVAC and WH product if the... Section 431.176 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN...

  12. A federated design for a neurobiological simulation engine: the CBI federated software architecture.

    PubMed

    Cornelis, Hugo; Coop, Allan D; Bower, James M

    2012-01-01

    Simulator interoperability and extensibility has become a growing requirement in computational biology. To address this, we have developed a federated software architecture. It is federated by its union of independent disparate systems under a single cohesive view, provides interoperability through its capability to communicate, execute programs, or transfer data among different independent applications, and supports extensibility by enabling simulator expansion or enhancement without the need for major changes to system infrastructure. Historically, simulator interoperability has relied on development of declarative markup languages such as the neuron modeling language NeuroML, while simulator extension typically occurred through modification of existing functionality. The software architecture we describe here allows for both these approaches. However, it is designed to support alternative paradigms of interoperability and extensibility through the provision of logical relationships and defined application programming interfaces. They allow any appropriately configured component or software application to be incorporated into a simulator. The architecture defines independent functional modules that run stand-alone. They are arranged in logical layers that naturally correspond to the occurrence of high-level data (biological concepts) versus low-level data (numerical values) and distinguish data from control functions. The modular nature of the architecture and its independence from a given technology facilitates communication about similar concepts and functions for both users and developers. It provides several advantages for multiple independent contributions to software development. Importantly, these include: (1) Reduction in complexity of individual simulator components when compared to the complexity of a complete simulator, (2) Documentation of individual components in terms of their inputs and outputs, (3) Easy removal or replacement of unnecessary or obsoleted components, (4) Stand-alone testing of components, and (5) Clear delineation of the development scope of new components.

  13. A Federated Design for a Neurobiological Simulation Engine: The CBI Federated Software Architecture

    PubMed Central

    Cornelis, Hugo; Coop, Allan D.; Bower, James M.

    2012-01-01

    Simulator interoperability and extensibility has become a growing requirement in computational biology. To address this, we have developed a federated software architecture. It is federated by its union of independent disparate systems under a single cohesive view, provides interoperability through its capability to communicate, execute programs, or transfer data among different independent applications, and supports extensibility by enabling simulator expansion or enhancement without the need for major changes to system infrastructure. Historically, simulator interoperability has relied on development of declarative markup languages such as the neuron modeling language NeuroML, while simulator extension typically occurred through modification of existing functionality. The software architecture we describe here allows for both these approaches. However, it is designed to support alternative paradigms of interoperability and extensibility through the provision of logical relationships and defined application programming interfaces. They allow any appropriately configured component or software application to be incorporated into a simulator. The architecture defines independent functional modules that run stand-alone. They are arranged in logical layers that naturally correspond to the occurrence of high-level data (biological concepts) versus low-level data (numerical values) and distinguish data from control functions. The modular nature of the architecture and its independence from a given technology facilitates communication about similar concepts and functions for both users and developers. It provides several advantages for multiple independent contributions to software development. Importantly, these include: (1) Reduction in complexity of individual simulator components when compared to the complexity of a complete simulator, (2) Documentation of individual components in terms of their inputs and outputs, (3) Easy removal or replacement of unnecessary or obsoleted components, (4) Stand-alone testing of components, and (5) Clear delineation of the development scope of new components. PMID:22242154

  14. FESTR: Finite-Element Spectral Transfer of Radiation spectroscopic modeling and analysis code

    DOE PAGES

    Hakel, Peter

    2016-10-01

    Here we report on the development of a new spectral postprocessor of hydrodynamic simulations of hot, dense plasmas. Based on given time histories of one-, two-, and three-dimensional spatial distributions of materials, and their local temperature and density conditions, spectroscopically-resolved signals are computed. The effects of radiation emission and absorption by the plasma on the emergent spectra are simultaneously taken into account. This program can also be used independently of hydrodynamic calculations to analyze available experimental data with the goal of inferring plasma conditions.

  15. Pressure model of a four-way spool valve for simulating electrohydraulic control systems

    NASA Technical Reports Server (NTRS)

    Gebben, V. D.

    1976-01-01

    An equation that relates the pressure-flow characteristics of hydraulic spool valves was developed. The dependent variable is valve output pressure, and the independent variables are spool position and flow. This causal form of equation is preferred in applications that simulate the effects of hydraulic line dynamics. Results from this equation are compared with those from the conventional valve equation, whose dependent variable is flow. A computer program implementing the valve equations includes spool stops, leakage spool clearances, and dead-zone characteristics of overlap spools.
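    A hedged sketch of such a causal pressure form, using the standard square-law orifice relation Q = Cd*A(x)*sqrt(2*dP/rho), inverted to give pressure drop from spool position and flow. The dead zone and leakage are modeled crudely, and all constants are invented; this is not the paper's actual equation.

    ```python
    # Output pressure as a function of spool position and flow (causal form).
    RHO = 850.0          # oil density, kg/m^3 (assumed)
    CD = 0.61            # discharge coefficient (assumed)
    W = 0.02             # orifice area gradient, m^2 per m of travel (assumed)
    PS = 21.0e6          # supply pressure, Pa (assumed)
    OVERLAP = 1.0e-4     # spool overlap giving a dead zone, m (assumed)
    LEAK_AREA = 1.0e-7   # effective clearance-leakage area, m^2 (assumed)

    def valve_output_pressure(x, q):
        """x: spool displacement (m), q: load flow (m^3/s)."""
        opening = max(x - OVERLAP, 0.0)           # dead zone from spool overlap
        area = CD * (W * opening + LEAK_AREA)     # leakage keeps area nonzero
        dp = RHO * q * abs(q) / (2.0 * area**2)   # square-law drop, signed by flow
        return min(max(PS - dp, 0.0), PS)         # clamp to the physical range
    ```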

  16. FESTR: Finite-Element Spectral Transfer of Radiation spectroscopic modeling and analysis code

    NASA Astrophysics Data System (ADS)

    Hakel, Peter

    2016-10-01

    We report on the development of a new spectral postprocessor of hydrodynamic simulations of hot, dense plasmas. Based on given time histories of one-, two-, and three-dimensional spatial distributions of materials, and their local temperature and density conditions, spectroscopically-resolved signals are computed. The effects of radiation emission and absorption by the plasma on the emergent spectra are simultaneously taken into account. This program can also be used independently of hydrodynamic calculations to analyze available experimental data with the goal of inferring plasma conditions.

  17. Real-time Java simulations of multiple interference dielectric filters

    NASA Astrophysics Data System (ADS)

    Kireev, Alexandre N.; Martin, Olivier J. F.

    2008-12-01

    An interactive Java applet for real-time simulation and visualization of the transmittance properties of multiple interference dielectric filters is presented. The most commonly used interference filters as well as the state-of-the-art ones are embedded in this platform-independent applet which can serve research and education purposes. The Transmittance applet can be freely downloaded from the site http://cpc.cs.qub.ac.uk.

    Program summary:
    Program title: Transmittance
    Catalogue identifier: AEBQ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBQ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 5778
    No. of bytes in distributed program, including test data, etc.: 90 474
    Distribution format: tar.gz
    Programming language: Java
    Computer: Developed on PC-Pentium platform
    Operating system: Any Java-enabled OS. Applet was tested on Windows ME, XP, Sun Solaris, Mac OS
    RAM: Variable
    Classification: 18
    Nature of problem: Sophisticated wavelength-selective multiple interference filters can include some tens or even hundreds of dielectric layers. The spectral response of such a stack is not obvious. On the other hand, there is a strong demand from application designers and students to get a quick insight into the properties of a given filter.
    Solution method: A Java applet was developed for the computation and the visualization of the transmittance of multilayer interference filters. It is simple to use and the embedded filter library can serve educational purposes. Also, its ability to handle complex structures will be appreciated as a useful research and development tool.
    Running time: Real-time simulations
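    The applet itself is Java; as a sketch in the language used for the other examples here, the standard characteristic-matrix (transfer-matrix) computation of normal-incidence transmittance for a dielectric stack is shown below, which is the usual basis for such filter curves.

    ```python
    # Characteristic-matrix transmittance of a multilayer stack at normal
    # incidence; indices/thicknesses listed from the incidence side.
    import numpy as np

    def transmittance(n_layers, d_layers, wavelength, n_in=1.0, n_out=1.52):
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):
            phi = 2.0 * np.pi * n * d / wavelength     # layer phase thickness
            M = M @ np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                              [1j * n * np.sin(phi), np.cos(phi)]])
        t = 2.0 * n_in / (n_in * M[0, 0] + n_in * n_out * M[0, 1]
                          + M[1, 0] + n_out * M[1, 1])
        return (n_out / n_in) * abs(t) ** 2

    # quarter-wave high/low stack centered at 550 nm -> low T in the stop band
    nH, nL, lam0 = 2.35, 1.38, 550.0
    ns = [nH, nL] * 4
    ds = [lam0 / (4 * n) for n in ns]
    print(transmittance(ns, ds, 550.0))
    ```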

  18. SSL - THE SIMPLE SOCKETS LIBRARY

    NASA Technical Reports Server (NTRS)

    Campbell, C. E.

    1994-01-01

    The Simple Sockets Library (SSL) allows C programmers to develop systems of cooperating programs using Berkeley streaming Sockets running under the TCP/IP protocol over Ethernet. The SSL provides a simple way to move information between programs running on the same or different machines and does so with little overhead. The SSL can create three types of Sockets: namely a server, a client, and an accept Socket. The SSL's Sockets are designed to be used in a fashion reminiscent of the use of FILE pointers so that a C programmer who is familiar with reading and writing files will immediately feel comfortable with reading and writing with Sockets. The SSL consists of three parts: the library, PortMaster, and utilities. The user of the SSL accesses it by linking programs to the SSL library. The PortMaster initializes connections between clients and servers. The PortMaster also supports a "firewall" facility to keep out socket requests from unapproved machines. The "firewall" is a file which contains Internet addresses for all approved machines. There are three utilities provided with the SSL. SKTDBG can be used to debug programs that make use of the SSL. SPMTABLE lists the servers and port numbers on requested machine(s). SRMSRVR tells the PortMaster to forcibly remove a server name from its list. The package also includes two example programs: multiskt.c, which makes multiple accepts on one server, and sktpoll.c, which repeatedly attempts to connect a client to some server at one second intervals. SSL is a machine independent library written in the C-language for computers connected via Ethernet using the TCP/IP protocol. It has been successfully compiled and implemented on a variety of platforms, including Sun series computers running SunOS, DEC VAX series computers running VMS, SGI computers running IRIX, DECstations running ULTRIX, DEC alpha AXPs running OSF/1, IBM RS/6000 computers running AIX, IBM PC and compatibles running BSD/386 UNIX and HP Apollo 3000/4000/9000/400T computers running HP-UX. SSL requires 45K of RAM to run under SunOS and 80K of RAM to run under VMS. For use on IBM PC series computers and compatibles running DOS, SSL requires Microsoft C 6.0 and the Wollongong TCP/IP package. Source code for sample programs and debugging tools are provided. The documentation is available on the distribution medium in TeX and PostScript formats. The standard distribution medium for SSL is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format and a 5.25 inch 360K MS-DOS format diskette. The SSL was developed in 1992 and was updated in 1993.
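    SSL itself is a C library; the same server/client/accept pattern with file-like reads and writes can be illustrated with Python's standard socket module, as in this small echo exchange (the short sleep is a crude stand-in for proper startup synchronization, and the port number is arbitrary).

    ```python
    # Server, client, and accept socket with FILE-pointer-style I/O.
    import socket
    import threading
    import time

    def server():
        with socket.create_server(("127.0.0.1", 5050)) as srv:
            conn, _ = srv.accept()                  # the "accept" socket
            with conn, conn.makefile("rw") as f:    # file-like use, as in SSL
                f.write(f.readline().upper())
                f.flush()

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)                                 # crude wait for the listener
    with socket.create_connection(("127.0.0.1", 5050)) as c:
        with c.makefile("rw") as f:
            f.write("hello over tcp\n")
            f.flush()
            print(f.readline().strip())             # -> HELLO OVER TCP
    ```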

  19. SSR_pipeline: a bioinformatic infrastructure for identifying microsatellites from paired-end Illumina high-throughput DNA sequencing data

    USGS Publications Warehouse

    Miller, Mark P.; Knaus, Brian J.; Mullins, Thomas D.; Haig, Susan M.

    2013-01-01

    SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (e.g., microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains 3 analysis modules along with a fourth control module that can automate analyses of large volumes of data. The modules are used to 1) identify the subset of paired-end sequences that pass Illumina quality standards, 2) align paired-end reads into a single composite DNA sequence, and 3) identify sequences that possess microsatellites (both simple and compound) conforming to user-specified parameters. The microsatellite search algorithm is extremely efficient, and we have used it to identify repeats with motifs from 2 to 25bp in length. Each of the 3 analysis modules can also be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc.). We demonstrate use of the program with data from the brine fly Ephydra packardi (Diptera: Ephydridae) and provide empirical timing benchmarks to illustrate program performance on a common desktop computer environment. We further show that the Illumina platform is capable of identifying large numbers of microsatellites, even when using unenriched sample libraries and a very small percentage of the sequencing capacity from a single DNA sequencing run. All modules from SSR_pipeline are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, and Windows).

  20. SSR_pipeline: a bioinformatic infrastructure for identifying microsatellites from paired-end Illumina high-throughput DNA sequencing data.

    PubMed

    Miller, Mark P; Knaus, Brian J; Mullins, Thomas D; Haig, Susan M

    2013-01-01

    SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (e.g., microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains 3 analysis modules along with a fourth control module that can automate analyses of large volumes of data. The modules are used to 1) identify the subset of paired-end sequences that pass Illumina quality standards, 2) align paired-end reads into a single composite DNA sequence, and 3) identify sequences that possess microsatellites (both simple and compound) conforming to user-specified parameters. The microsatellite search algorithm is extremely efficient, and we have used it to identify repeats with motifs from 2 to 25 bp in length. Each of the 3 analysis modules can also be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc.). We demonstrate use of the program with data from the brine fly Ephydra packardi (Diptera: Ephydridae) and provide empirical timing benchmarks to illustrate program performance on a common desktop computer environment. We further show that the Illumina platform is capable of identifying large numbers of microsatellites, even when using unenriched sample libraries and a very small percentage of the sequencing capacity from a single DNA sequencing run. All modules from SSR_pipeline are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, and Windows).
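    A compact sketch of the core search step in the third module: finding simple sequence repeats with a back-referencing regular expression. The motif-length and copy-number thresholds here are illustrative, not SSR_pipeline's defaults.

    ```python
    # Find simple sequence repeats: a 2-6 bp motif repeated 4+ times.
    import re

    SSR = re.compile(r"([ACGT]{2,6}?)\1{3,}")   # capture motif, then repeats

    def find_ssrs(seq):
        for m in SSR.finditer(seq.upper()):
            yield m.start(), m.group(1), len(m.group(0)) // len(m.group(1))

    seq = "TTGACACACACACACGGT" + "GATAGATAGATAGATAGATA"
    for pos, motif, copies in find_ssrs(seq):
        print(f"pos={pos} motif={motif} copies={copies}")
    ```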

  1. The effectiveness of an interactive computer program versus traditional lecture in athletic training education.

    PubMed

    Wiksten, D L; Patterson, P; Antonio, K; De La Cruz, D; Buxton, B P

    1998-07-01

    To evaluate the effectiveness of an interactive athletic training educational curriculum (IATEC) computer program as compared with traditional lecture instruction. Instructions on assessment of the quadriceps angle (Q-angle) were compared. Dependent measures consisted of cognitive knowledge, practical skill assessment, and attitudes toward the 2 methods of instruction. Sixty-six subjects were selected and then randomly assigned to 3 different groups: traditional lecture, IATEC, and control. The traditional lecture group (n = 22) received a 50-minute lecture/demonstration covering the same instructional content as the Q-angle module of the IATEC program. The IATEC group (n = 20; 2 subjects were dropped from this group due to scheduling conflicts) worked independently for 50 to 65 minutes using the Q-angle module of the IATEC program. The control group (n = 22) received no instruction. Subjects were recruited from an undergraduate athletic training education program and were screened for prior knowledge of the Q-angle. A 9-point multiple choice examination was used to determine cognitive knowledge of the Q-angle. A 12-point yes-no checklist was used to determine whether or not the subjects were able to correctly measure the Q-angle. The Allen Attitude Toward Computer-Assisted Instruction Semantic Differential Survey was used to assess student attitudes toward the 2 methods of instruction. The survey examined overall attitudes, in addition to 3 subscales: comfort, creativity, and function. The survey was scored from 1 to 7, with 7 being the most favorable and 1 being the least favorable. Results of a 1-way ANOVA on cognitive knowledge of the Q-angle revealed that the traditional lecture and IATEC groups performed significantly better than the control group, and the traditional lecture group performed significantly better than the IATEC group. Results of a 1-way ANOVA on practical skill performance revealed that the traditional lecture and IATEC groups performed significantly better than the control group, but there were no significant differences between the traditional lecture and IATEC groups on practical skill performance. Results of a t test indicated significantly more favorable attitudes (P < .05) for the traditional lecture group when compared with the IATEC group for comfort, creativity, and function. Our results suggest that use of the IATEC computer module is an effective means of instruction; however, use of the IATEC program alone may not be sufficient for educating students in cognitive knowledge. Further research is needed to determine the effectiveness of the IATEC computer program as a supplement to traditional lecture instruction in athletic training education.

  2. The Effectiveness of an Interactive Computer Program Versus Traditional Lecture in Athletic Training Education

    PubMed Central

    Wiksten, Denise Lebsack; Patterson, Patricia; Antonio, Kimberly; De La Cruz, Daniel; Buxton, Barton P.

    1998-01-01

    Objective: To evaluate the effectiveness of an interactive athletic training educational curriculum (IATEC) computer program as compared with traditional lecture instruction. Instructions on assessment of the quadriceps angle (Q-angle) were compared. Dependent measures consisted of cognitive knowledge, practical skill assessment, and attitudes toward the 2 methods of instruction. Design and Setting: Sixty-six subjects were selected and then randomly assigned to 3 different groups: traditional lecture, IATEC, and control. The traditional lecture group (n = 22) received a 50-minute lecture/demonstration covering the same instructional content as the Q-angle module of the IATEC program. The IATEC group (n = 20; 2 subjects were dropped from this group due to scheduling conflicts) worked independently for 50 to 65 minutes using the Q-angle module of the IATEC program. The control group (n = 22) received no instruction. Subjects: Subjects were recruited from an undergraduate athletic training education program and were screened for prior knowledge of the Q-angle. Measurements: A 9-point multiple choice examination was used to determine cognitive knowledge of the Q-angle. A 12-point yes-no checklist was used to determine whether or not the subjects were able to correctly measure the Q-angle. The Allen Attitude Toward Computer-Assisted Instruction Semantic Differential Survey was used to assess student attitudes toward the 2 methods of instruction. The survey examined overall attitudes, in addition to 3 subscales: comfort, creativity, and function. The survey was scored from 1 to 7, with 7 being the most favorable and 1 being the least favorable. Results: Results of a 1-way ANOVA on cognitive knowledge of the Q-angle revealed that the traditional lecture and IATEC groups performed significantly better than the control group, and the traditional lecture group performed significantly better than the IATEC group. Results of a 1-way ANOVA on practical skill performance revealed that the traditional lecture and IATEC groups performed significantly better than the control group, but there were no significant differences between the traditional lecture and IATEC groups on practical skill performance. Results of a t test indicated significantly more favorable attitudes (P < .05) for the traditional lecture group when compared with the IATEC group for comfort, creativity, and function. Conclusions: Our results suggest that use of the IATEC computer module is an effective means of instruction; however, use of the IATEC program alone may not be sufficient for educating students in cognitive knowledge. Further research is needed to determine the effectiveness of the IATEC computer program as a supplement to traditional lecture instruction in athletic training education. PMID:16558517

  3. Task Selection, Task Switching and Multitasking during Computer-Based Independent Study

    ERIC Educational Resources Information Center

    Judd, Terry

    2015-01-01

    Detailed logs of students' computer use during independent study sessions were captured in an open-access computer laboratory. Each log consisted of a chronological sequence of tasks representing either the application or the Internet domain displayed in the workstation's active window. Each task was classified using a three-tier schema…

  4. Reward and uncertainty in exploration programs

    NASA Technical Reports Server (NTRS)

    Kaufman, G. M.; Bradley, P. G.

    1971-01-01

    A set of variables that are crucial to the economic outcome of petroleum exploration is discussed. These are treated as random variables; the values they assume indicate the number of successes that occur in a drilling program and determine, for a particular discovery, the unit production cost and net economic return if that reservoir is developed. In specifying the joint probability law for those variables, extreme and probably unrealistic assumptions are made. In particular, the different random variables are assumed to be independently distributed. Using postulated probability functions and specified parameters, values are generated for selected random variables, such as reservoir size. From this set of values, the economic magnitudes of interest, net return and unit production cost, are computed. This constitutes a single trial, and the procedure is repeated many times. The resulting histograms approximate the probability density functions of the variables which describe the economic outcomes of an exploratory drilling program.
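
    The trial-and-repeat procedure is a standard Monte Carlo loop. A toy sketch under assumed, illustrative distributions (lognormal reservoir sizes, a fixed success probability, and made-up cost and price constants, none taken from the paper):

    ```python
    import random

    def one_trial(wells=10, p_success=0.15):
        """One exploratory drilling program with illustrative parameters."""
        successes = sum(random.random() < p_success for _ in range(wells))
        # Reservoir sizes drawn independently, mirroring the independence assumption.
        reserves = sum(random.lognormvariate(3.0, 1.0) for _ in range(successes))
        cost = wells * 1.0                 # drilling cost per well (arbitrary units)
        net_return = 0.5 * reserves - cost # arbitrary value per unit of reserves
        unit_cost = cost / reserves if reserves else float("inf")
        return net_return, unit_cost

    # Repeating many trials and histogramming the results approximates the
    # probability density functions of the economic outcome variables.
    trials = [one_trial() for _ in range(10_000)]
    ```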

  5. The Effect of a Formal Mentoring Program on Career Satisfaction and Intent to Stay in the Faculty Role for Novice Nurse Faculty.

    PubMed

    Jeffers, Stephanie; Mariani, Bette

    The purpose of this mixed-method study was to explore the influence of a formal mentoring program on career satisfaction of novice full-time nurse faculty in academia. The transition from the role of clinician to faculty in an academic setting can be challenging for novice nurse faculty. A link to an electronic survey with open-ended questions was emailed to 1435 participants. The response rate was 17.6 percent (N = 124). Mean scores were obtained, and independent t-tests were computed to compare scores of faculty who had participated in a mentoring program with scores of nonparticipants. Content analysis of the open-ended answers was conducted, and common themes were identified. By examining characteristics that contribute to the success of novice nursing faculty, recruitment and retention of faculty may improve, which is essential due to the worsening nursing faculty shortage.

  6. Knowledge and attitude about computer and internet usage among dental students in Western Rajasthan, India

    PubMed Central

    Jali, Pramod K.; Singh, Shamsher; Babaji, Prashant; Chaurasia, Vishwajit Rampratap; Somasundaram, P; Lau, Himani

    2014-01-01

    Background: The internet is a useful tool for updating knowledge. The aim of the present study was to assess the current level of knowledge of computers and the internet among undergraduate dental students. Materials and Methods: The study consisted of a self-administered, close-ended questionnaire survey. Questionnaires were distributed to undergraduate dental students. The study was conducted during July to September 2012. Results: In the selected sample, the response rate was 100%. Most (94.4%) of the students had computer knowledge, and 77.4% had their own computer and access at home. Nearly 40.8% of students used the computer for general purposes, 28.5% for entertainment, and 22.8% for research purposes. Most of the students had internet knowledge (92.9%) and used it independently (79.1%). Nearly 42.1% used the internet occasionally, whereas 34.4% used it regularly, 21.7% rarely, and 1.8% not at all. The internet was preferred for getting information (48.8%) due to easy accessibility and recent updates. For dental purposes, students used the internet 2-3 times/week (45.3%). Most (95.3%) of the students favored having a computer-based learning program in the curriculum. Conclusion: Computer knowledge was observed to be good among dental students. PMID:24818091

  7. Testing New Programming Paradigms with NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, H.; Frumkin, M.; Schultz, M.; Yan, J.

    2000-01-01

    Over the past decade, high performance computing has evolved rapidly, not only in hardware architectures but also with increasing complexity of real applications. Technologies have been developed with the aim of scaling up to thousands of processors on both distributed and shared memory systems. Development of parallel programs on these computers is always a challenging task. Today, writing parallel programs with message passing (e.g. MPI) is the most popular way of achieving scalability and high performance. However, writing message passing programs is difficult and error prone. In recent years, new efforts have been made to define new parallel programming paradigms. The best examples are: HPF (based on data parallelism) and OpenMP (based on shared memory parallelism). Both provide simple and clear extensions to sequential programs, thus greatly simplifying the tedious tasks encountered in writing message passing programs. HPF is independent of the memory hierarchy; however, due to the immaturity of compiler technology its performance is still questionable. Although the use of parallel compiler directives is not new, OpenMP offers a portable solution in the shared-memory domain. Another important development involves the tremendous progress in the internet and its associated technology. Although still in its infancy, Java promises portability in a heterogeneous environment and offers the possibility to "compile once and run anywhere." To test these new technologies, we implemented new parallel versions of the NAS Parallel Benchmarks (NPBs) with HPF and OpenMP directives, and extended the work with Java and Java-threads. The purpose of this study is to examine the effectiveness of alternative programming paradigms. NPBs consist of five kernels and three simulated applications that mimic the computation and data movement of large scale computational fluid dynamics (CFD) applications. We started with the serial version included in NPB2.3. Optimization of memory and cache usage was applied to several benchmarks, notably BT and SP, resulting in better sequential performance. In order to overcome the lack of an HPF performance model and guide the development of the HPF codes, we employed an empirical performance model for several primitives found in the benchmarks. We encountered a few limitations of HPF, such as the lack of support for the "REDISTRIBUTION" directive and no easy way to handle irregular computation. The parallelization with OpenMP directives was done at the outer-most loop level to achieve the largest granularity. The performance of six HPF and OpenMP benchmarks is compared with their MPI counterparts for the Class-A problem size in the figure on the next page. These results were obtained on an SGI Origin2000 (195MHz) with the MIPSpro-f77 compiler 7.2.1 for OpenMP and MPI codes and the PGI pghpf-2.4.3 compiler with MPI interface for HPF programs.

  8. Mississippi Curriculum Framework for Computer Information Systems Technology. Computer Information Systems Technology (Program CIP: 52.1201--Management Information Systems & Business Data). Computer Programming (Program CIP: 52.1201). Network Support (Program CIP: 52.1290--Computer Network Support Technology). Postsecondary Programs.

    ERIC Educational Resources Information Center

    Mississippi Research and Curriculum Unit for Vocational and Technical Education, State College.

    This document, which is intended for use by community and junior colleges throughout Mississippi, contains curriculum frameworks for two programs in the state's postsecondary-level computer information systems technology cluster: computer programming and network support. Presented in the introduction are program descriptions and suggested course…

  9. Application of a soft computing technique in predicting the percentage of shear force carried by walls in a rectangular channel with non-homogeneous roughness.

    PubMed

    Khozani, Zohreh Sheikh; Bonakdari, Hossein; Zaji, Amir Hossein

    2016-01-01

    Two new soft computing models, namely genetic programming (GP) and genetic artificial algorithm (GAA) neural network (a combination of modified genetic algorithm and artificial neural network methods), were developed in order to predict the percentage of shear force in a rectangular channel with non-homogeneous roughness. The ability of these methods to estimate the percentage of shear force was investigated. Moreover, the independent parameters' effectiveness in predicting the percentage of shear force was determined using sensitivity analysis. According to the results, the GP model demonstrated superior performance to the GAA model. A comparison was also made between the GP program, determined as the best model, and five equations obtained in prior research. The GP model, with the lowest error values (root mean square error (RMSE) of 0.0515), performed best compared with the other equations presented for rough and smooth channels as well as smooth ducts. The equation proposed for rectangular channels with rough boundaries (RMSE of 0.0642) outperformed the prior equations for smooth boundaries.
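
    RMSE, the comparison metric quoted above, is simple to compute; a short helper with made-up values (not the study's data):

    ```python
    import math

    def rmse(predicted, observed):
        """Root mean square error between paired predictions and observations."""
        return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                         / len(observed))

    print(rmse([0.42, 0.55, 0.61], [0.40, 0.58, 0.60]))
    ```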

  10. An interactive technology approach to educate older adults about drug interactions arising from over-the-counter self-medication practices.

    PubMed

    Neafsey, Patricia J; Strickler, Zoe; Shellman, Juliette; Chartier, Virginia

    2002-01-01

    An interactive computer program (Personal Education Program [PEP]) designed for the learning styles and psychomotor skills of older adults was used to teach older adults about potential drug interactions that can result from self-medication with over-the-counter (OTC) agents and alcohol. Subjects used the PEP on notebook computers equipped with infrared sensitive touchscreens. Subjects were recruited from senior centers. Those who met age, vision, literacy, independence, and medication use criteria were randomly assigned to one of three groups: (1) PEP plus information booklet; (2) information booklet only; or (3) control. A repeated measures (three time periods 2 weeks apart), three-group design was used. Users of PEP had significantly greater knowledge and self-efficacy scores than both the conventional and control groups at all three time points. The PEP group reported fewer adverse self-medication behaviors over time. Reported self-medication behaviors did not change over time for either the conventional or control groups. Subjects indicated a high degree of satisfaction with the PEP and reported their intent to make specific changes in self-medication behaviors.

  11. The Computation of Orthogonal Independent Cluster Solutions and Their Oblique Analogs in Factor Analysis.

    ERIC Educational Resources Information Center

    Hofmann, Richard J.

    A very general model for the computation of independent cluster solutions in factor analysis is presented. The model is discussed as being either orthogonal or oblique. Furthermore, it is demonstrated that for every orthogonal independent cluster solution there is an oblique analog. Using three illustrative examples, certain generalities are made…

  12. A microcomputer program for energy assessment and aggregation using the triangular probability distribution

    USGS Publications Warehouse

    Crovelli, R.A.; Balay, R.H.

    1991-01-01

    A general risk-analysis method was developed for petroleum-resource assessment and other applications. The triangular probability distribution is used as a model with an analytic aggregation methodology based on probability theory rather than Monte-Carlo simulation. Among the advantages of the analytic method are its computational speed and flexibility, and the saving of time and cost on a microcomputer. The input into the model consists of a set of components (e.g. geologic provinces) and, for each component, three potential resource estimates: minimum, most likely (mode), and maximum. Assuming a triangular probability distribution, the mean, standard deviation, and seven fractiles (F100, F95, F75, F50, F25, F5, and F0) are computed for each component, where, for example, the probability of more than F95 is equal to 0.95. The components are aggregated by combining the means, standard deviations, and respective fractiles under three possible situations: (1) perfect positive correlation, (2) complete independence, and (3) any degree of dependence between these two polar situations. A package of computer programs named the TRIAGG system was written in the Turbo Pascal 4.0 language for performing the analytic probabilistic methodology. The system consists of a program for processing triangular probability distribution assessments and aggregations, and a separate aggregation routine for aggregating aggregations. The user's documentation and program diskette of the TRIAGG system are available from USGS Open File Services. TRIAGG requires an IBM-PC/XT/AT compatible microcomputer with 256 kbytes of main memory, MS-DOS 3.1 or later, either two diskette drives or a fixed disk, and a 132-column printer. A graphics adapter and color display are optional.
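
    The per-component computation follows directly from the triangular model. A minimal sketch of the mean, standard deviation, and exceedance fractiles under the report's convention (the probability of exceeding F95 is 0.95); TRIAGG's aggregation logic is not reproduced here:

    ```python
    import math

    def triangular_stats(a, m, b):
        """Summary statistics of a triangular distribution with
        minimum a, mode m, and maximum b (a <= m <= b)."""
        mean = (a + m + b) / 3.0
        std = math.sqrt((a*a + m*m + b*b - a*m - a*b - m*b) / 18.0)

        def quantile(p):  # conventional CDF inverse
            if p <= (m - a) / (b - a):
                return a + math.sqrt(p * (b - a) * (m - a))
            return b - math.sqrt((1.0 - p) * (b - a) * (b - m))

        # F_q is the value exceeded with probability q, i.e. quantile(1 - q).
        fractiles = {f"F{round(q * 100)}": quantile(1.0 - q)
                     for q in (1.0, 0.95, 0.75, 0.50, 0.25, 0.05, 0.0)}
        return mean, std, fractiles

    print(triangular_stats(10.0, 40.0, 100.0))
    ```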

  13. XML-Based Generator of C++ Code for Integration With GUIs

    NASA Technical Reports Server (NTRS)

    Hua, Hook; Oyafuso, Fabiano; Klimeck, Gerhard

    2003-01-01

    An open source computer program has been developed to satisfy a need for simplified organization of structured input data for scientific simulation programs. Typically, such input data are parsed in from a flat American Standard Code for Information Interchange (ASCII) text file into computational data structures. Also typically, when a graphical user interface (GUI) is used, there is a need to completely duplicate the input information while providing it to a user in a more structured form. Heretofore, the duplication of the input information has entailed duplication of software efforts and increases in susceptibility to software errors because of the concomitant need to maintain two independent input-handling mechanisms. The present program implements a method in which the input data for a simulation program are completely specified in an Extensible Markup Language (XML)-based text file. The key benefit of XML is storing input data in a structured manner. More importantly, XML allows not just storing of data but also describing what each of the data items is. That XML file contains information useful for rendering the data by other applications. It also then generates data structures in the C++ language that are to be used in the simulation program. In this method, all input data are specified in one place only, and it is easy to integrate the data structures into both the simulation program and the GUI. XML-to-C is useful in two ways: (1) as an executable, it generates the corresponding C++ classes, and (2) as a library, it automatically fills the objects with the input data values.
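
    As a rough illustration of the approach (with an invented schema, not the tool's actual XML format or API), the following sketch parses a small input specification and emits a C++ struct:

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical input specification; the real tool's schema differs.
    spec = """
    <input name="SimulationInput">
      <param name="temperature" type="double" default="300.0"/>
      <param name="num_steps"   type="int"    default="1000"/>
    </input>
    """

    root = ET.fromstring(spec)
    lines = [f"struct {root.get('name')} {{"]
    for p in root.findall("param"):
        lines.append(f"    {p.get('type')} {p.get('name')} = {p.get('default')};")
    lines.append("};")
    print("\n".join(lines))
    ```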

  14. Radiation measurements from polar and geosynchronous satellites

    NASA Technical Reports Server (NTRS)

    Vonderhaar, T. H.

    1973-01-01

    During the 1960's, radiation budget measurements from satellites have allowed quantitative study of the global energetics of our atmosphere-ocean system. A continuing program is planned, including independent measurement of the solar constant. Thus far, the measurements returned from two basically different types of satellite experiments are in agreement on the long term global scales where they are most comparable. This fact, together with independent estimates of the accuracy of measurement from each system, shows that the energy exchange between earth and space is now measured better than it can be calculated. Examples of application of the radiation budget data were shown. They can be related to the age-old problem of climate change, to the basic question of the thermal forcing of our circulation systems, and to the contemporary problems of local area energetics and computer modeling of the atmosphere.

  15. An independent brain-computer interface using covert non-spatial visual selective attention

    NASA Astrophysics Data System (ADS)

    Zhang, Dan; Maye, Alexander; Gao, Xiaorong; Hong, Bo; Engel, Andreas K.; Gao, Shangkai

    2010-02-01

    In this paper, a novel independent brain-computer interface (BCI) system based on covert non-spatial visual selective attention of two superimposed illusory surfaces is described. Perception of two superimposed surfaces was induced by two sets of dots with different colors rotating in opposite directions. The surfaces flickered at different frequencies and elicited distinguishable steady-state visual evoked potentials (SSVEPs) over parietal and occipital areas of the brain. By selectively attending to one of the two surfaces, the SSVEP amplitude at the corresponding frequency was enhanced. An online BCI system utilizing the attentional modulation of SSVEP was implemented and a 3-day online training program with healthy subjects was carried out. The study was conducted with Chinese subjects at Tsinghua University, and German subjects at University Medical Center Hamburg-Eppendorf (UKE) using identical stimulation software and equivalent technical setup. A general improvement of control accuracy with training was observed in 8 out of 18 subjects. An averaged online classification accuracy of 72.6 ± 16.1% was achieved on the last training day. The system renders SSVEP-based BCI paradigms possible for paralyzed patients with substantial head or ocular motor impairments by employing covert attention shifts instead of changing gaze direction.
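
    In its simplest form, classification of frequency-tagged SSVEP reduces to comparing spectral amplitude at the two flicker frequencies. A toy sketch on a synthetic signal (assumed sampling rate and frequencies; not the authors' online classifier):

    ```python
    import numpy as np

    fs = 250.0                       # assumed sampling rate (Hz)
    f1, f2 = 8.0, 12.0               # assumed flicker frequencies of the two surfaces
    t = np.arange(0, 4.0, 1.0 / fs)  # 4 s of simulated parieto-occipital EEG

    # Attending surface 1 boosts the f1 component in this synthetic trial.
    eeg = (2.0 * np.sin(2 * np.pi * f1 * t)
           + 0.8 * np.sin(2 * np.pi * f2 * t)
           + np.random.randn(t.size))

    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
    amp = lambda f: spectrum[np.argmin(np.abs(freqs - f))]

    print("attended surface:", 1 if amp(f1) > amp(f2) else 2)
    ```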

  16. Programmable single-cell mammalian biocomputers.

    PubMed

    Ausländer, Simon; Ausländer, David; Müller, Marius; Wieland, Markus; Fussenegger, Martin

    2012-07-05

    Synthetic biology has advanced the design of standardized control devices that program cellular functions and metabolic activities in living organisms. Rational interconnection of these synthetic switches resulted in increasingly complex designer networks that execute input-triggered genetic instructions with precision, robustness and computational logic reminiscent of electronic circuits. Using trigger-controlled transcription factors, which independently control gene expression, and RNA-binding proteins that inhibit the translation of transcripts harbouring specific RNA target motifs, we have designed a set of synthetic transcription–translation control devices that could be rewired in a plug-and-play manner. Here we show that these combinatorial circuits integrated a two-molecule input and performed digital computations with NOT, AND, NAND and N-IMPLY expression logic in single mammalian cells. Functional interconnection of two N-IMPLY variants resulted in bitwise intracellular XOR operations, and a combinatorial arrangement of three logic gates enabled independent cells to perform programmable half-subtractor and half-adder calculations. Individual mammalian cells capable of executing basic molecular arithmetic functions isolated or coordinated to metabolic activities in a predictable, precise and robust manner may provide new treatment strategies and bio-electronic interfaces in future gene-based and cell-based therapies.
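
    The gate compositions described here are easy to check in software. A boolean emulation (not a model of the gene circuits themselves) of XOR built from two N-IMPLY gates, and a half adder from XOR and AND:

    ```python
    def n_imply(a, b):
        """N-IMPLY: output is on only when a is present and b is absent."""
        return a & (1 - b)

    def xor(a, b):
        # Two N-IMPLY variants combined, mirroring the circuit wiring above.
        return n_imply(a, b) | n_imply(b, a)

    def half_adder(a, b):
        """Sum bit = XOR of the inputs, carry bit = AND."""
        return xor(a, b), a & b

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"a={a} b={b} -> sum={s} carry={c}")
    ```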

  17. An independent brain-computer interface using covert non-spatial visual selective attention.

    PubMed

    Zhang, Dan; Maye, Alexander; Gao, Xiaorong; Hong, Bo; Engel, Andreas K; Gao, Shangkai

    2010-02-01

    In this paper, a novel independent brain-computer interface (BCI) system based on covert non-spatial visual selective attention of two superimposed illusory surfaces is described. Perception of two superimposed surfaces was induced by two sets of dots with different colors rotating in opposite directions. The surfaces flickered at different frequencies and elicited distinguishable steady-state visual evoked potentials (SSVEPs) over parietal and occipital areas of the brain. By selectively attending to one of the two surfaces, the SSVEP amplitude at the corresponding frequency was enhanced. An online BCI system utilizing the attentional modulation of SSVEP was implemented and a 3-day online training program with healthy subjects was carried out. The study was conducted with Chinese subjects at Tsinghua University, and German subjects at University Medical Center Hamburg-Eppendorf (UKE) using identical stimulation software and equivalent technical setup. A general improvement of control accuracy with training was observed in 8 out of 18 subjects. An averaged online classification accuracy of 72.6 +/- 16.1% was achieved on the last training day. The system renders SSVEP-based BCI paradigms possible for paralyzed patients with substantial head or ocular motor impairments by employing covert attention shifts instead of changing gaze direction.

  18. Platform-independent method for computer aided schematic drawings

    DOEpatents

    Vell, Jeffrey L [Slingerlands, NY; Siganporia, Darius M [Clifton Park, NY; Levy, Arthur J [Fort Lauderdale, FL

    2012-02-14

    A CAD/CAM method is disclosed for a computer system to capture and interchange schematic drawing and associated design information. The schematic drawing and design information are stored in an extensible, platform-independent format.

  19. Transfluxor circuit amplifies sensing current for computer memories

    NASA Technical Reports Server (NTRS)

    Milligan, G. C.

    1964-01-01

    To transfer data from the magnetic memory core to an independent core, a reliable sensing amplifier has been developed. Later the data in the independent core is transferred to the arithmetical section of the computer.

  20. Application of advanced grid generation techniques for flow field computations about complex configurations

    NASA Technical Reports Server (NTRS)

    Kathong, Monchai; Tiwari, Surendra N.

    1988-01-01

    In the computation of flowfields about complex configurations, it is very difficult to construct a boundary-fitted coordinate system. An alternative approach is to use several grids at once, each of which is generated independently. This procedure is called the multiple grids or zonal grids approach; its applications are investigated. The method is conservative, providing conservation of fluxes at grid interfaces. The Euler equations are solved numerically on such grids for various configurations. The numerical scheme used is the finite-volume technique with a three-stage Runge-Kutta time integration. The code is vectorized and programmed to run on the CDC VPS-32 computer. Steady state solutions of the Euler equations are presented and discussed. The solutions include: low speed flow over a sphere, high speed flow over a slender body, supersonic flow through a duct, and supersonic internal/external flow interaction for an aircraft configuration at various angles of attack. The results demonstrate that the multiple grids approach along with the conservative interfacing is capable of computing the flows about the complex configurations where the use of a single grid system is not possible.
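
    Three-stage Runge-Kutta time integration has many variants; a minimal sketch of one widely used scheme (the Shu-Osher form, not necessarily the paper's coefficients) for du/dt = R(u):

    ```python
    def rk3_step(u, dt, residual):
        """One Shu-Osher three-stage Runge-Kutta step for du/dt = residual(u)."""
        u1 = u + dt * residual(u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * residual(u1))
        return u / 3.0 + (2.0 / 3.0) * (u2 + dt * residual(u2))

    # Toy usage: exponential decay du/dt = -u from u(0) = 1.
    u, dt = 1.0, 0.1
    for _ in range(10):
        u = rk3_step(u, dt, lambda v: -v)
    print(u)  # close to exp(-1) ~= 0.368
    ```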

  1. A new constitutive model for simulation of softening, plateau, and densification phenomena for trabecular bone under compression.

    PubMed

    Lee, Chi-Seung; Lee, Jae-Myung; Youn, BuHyun; Kim, Hyung-Sik; Shin, Jong Ki; Goh, Tae Sik; Lee, Jung Sub

    2017-01-01

    A new type of constitutive model and its computational implementation procedure for the simulation of a trabecular bone are proposed in the present study. A yield surface-independent Frank-Brockman elasto-viscoplastic model is introduced to express the nonlinear material behavior such as softening beyond yield point, plateau, and densification under compressive loads. In particular, the hardening- and softening-dominant material functions are introduced and adopted in the plastic multiplier to describe each nonlinear material behavior separately. In addition, the elasto-viscoplastic model is transformed into an implicit type discrete model, and is programmed as a user-defined material subroutine in commercial finite element analysis code. In particular, the consistent tangent modulus method is proposed to improve the computational convergence and to save computational time during finite element analysis. Through the developed material library, the nonlinear stress-strain relationship is analyzed qualitatively and quantitatively, and the simulation results are compared with the results of compression test on the trabecular bone to validate the proposed constitutive model, computational method, and material library.

  2. Streaming Support for Data Intensive Cloud-Based Sequence Analysis

    PubMed Central

    Issa, Shadi A.; Kienzler, Romeo; El-Kalioby, Mohamed; Tonellato, Peter J.; Wall, Dennis; Bruggmann, Rémy; Abouelhoda, Mohamed

    2013-01-01

    Cloud computing provides a promising solution to the genomics data deluge problem resulting from the advent of next-generation sequencing (NGS) technology. Based on the concepts of “resources-on-demand” and “pay-as-you-go”, scientists with no or limited infrastructure can have access to scalable and cost-effective computational resources. However, the large size of NGS data causes a significant data transfer latency from the client's site to the cloud, which presents a bottleneck for using cloud computing services. In this paper, we provide a streaming-based scheme to overcome this problem, where the NGS data is processed while being transferred to the cloud. Our scheme targets the wide class of NGS data analysis tasks, where the NGS sequences can be processed independently from one another. We also provide the elastream package that supports the use of this scheme with individual analysis programs or with workflow systems. Experiments presented in this paper show that our solution mitigates the effect of data transfer latency and saves both time and cost of computation. PMID:23710461
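
    The core idea, processing reads as they arrive rather than after a full upload, can be shown with a generator pipeline; a toy sketch (not the elastream interface):

    ```python
    def stream_reads(fastq_lines):
        """Yield one sequence per FASTQ record (4 lines each) as lines arrive."""
        record = []
        for line in fastq_lines:
            record.append(line.rstrip("\n"))
            if len(record) == 4:
                yield record[1]  # the sequence line
                record = []

    def gc_count(seq):
        return seq.count("G") + seq.count("C")

    # Each read is processed immediately; nothing waits for the whole file.
    fastq = ["@r1", "ACGT", "+", "!!!!", "@r2", "GGCC", "+", "!!!!"]
    print([gc_count(s) for s in stream_reads(fastq)])  # [2, 4]
    ```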

  3. Auto-Generated Semantic Processing Services

    NASA Technical Reports Server (NTRS)

    Davis, Rodney; Hupf, Greg

    2009-01-01

    Auto-Generated Semantic Processing (AGSP) Services is a suite of software tools for automated generation of other computer programs, denoted cross-platform semantic adapters, that support interoperability of computer-based communication systems that utilize a variety of both new and legacy communication software running in a variety of operating-system/computer-hardware combinations. AGSP has numerous potential uses in military, space-exploration, and other government applications as well as in commercial telecommunications. The cross-platform semantic adapters take advantage of common features of computer-based communication systems to enforce semantics, messaging protocols, and standards of processing of streams of binary data to ensure integrity of data and consistency of meaning among interoperating systems. The auto-generation aspect of AGSP Services reduces development time and effort by emphasizing specification and minimizing implementation: In effect, the design, building, and debugging of software for effecting conversions among complex communication protocols, custom device mappings, and unique data-manipulation algorithms is replaced with metadata specifications that map to an abstract platform-independent communications model. AGSP Services is modular and has been shown to be easily integrable into new and legacy NASA flight and ground communication systems.

  4. Flow visualization of CFD using graphics workstations

    NASA Technical Reports Server (NTRS)

    Lasinski, Thomas; Buning, Pieter; Choi, Diana; Rogers, Stuart; Bancroft, Gordon

    1987-01-01

    High performance graphics workstations are used to visualize the fluid flow dynamics obtained from supercomputer solutions of computational fluid dynamic programs. The visualizations can be done independently on the workstation or while the workstation is connected to the supercomputer in a distributed computing mode. In the distributed mode, the supercomputer interactively performs the computationally intensive graphics rendering tasks while the workstation performs the viewing tasks. A major advantage of the workstations is that the viewers can interactively change their viewing position while watching the dynamics of the flow fields. An overview of the computer hardware and software required to create these displays is presented. For complex scenes the workstation cannot create the displays fast enough for good motion analysis. For these cases, the animation sequences are recorded on video tape or 16 mm film a frame at a time and played back at the desired speed. The additional software and hardware required to create these video tapes or 16 mm movies are also described. Photographs illustrating current visualization techniques are discussed. Examples of the use of the workstations for flow visualization through animation are available on video tape.

  5. Computer simulation of two-dimensional unsteady flows in estuaries and embayments by the method of characteristics : basic theory and the formulation of the numerical method

    USGS Publications Warehouse

    Lai, Chintu

    1977-01-01

    Two-dimensional unsteady flows of homogeneous density in estuaries and embayments can be described by hyperbolic, quasi-linear partial differential equations involving three dependent and three independent variables. A linear combination of these equations leads to a parametric equation of characteristic form, which consists of two parts: total differentiation along the bicharacteristics and partial differentiation in space. For its numerical solution, the specified-time-interval scheme has been used. The unknown, partial space-derivative terms can be eliminated first by suitable combinations of difference equations, converted from the corresponding differential forms and written along four selected bicharacteristics and a streamline. Other unknowns are thus made solvable from the known variables on the current time plane. The computation is carried to second-order accuracy by using the trapezoidal rule of integration. Means to handle complex boundary conditions are developed for practical application. Computer programs have been written and a mathematical model has been constructed for flow simulation. The favorable computer outputs suggest that further exploration and development of the model are worthwhile. (Woodard-USGS)

  6. 34 CFR 364.39 - What requirements apply to the administration of grants under the Centers for Independent Living...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 2 2010-07-01 2010-07-01 false What requirements apply to the administration of grants under the Centers for Independent Living program? 364.39 Section 364.39 Education Regulations of the..., DEPARTMENT OF EDUCATION STATE INDEPENDENT LIVING SERVICES PROGRAM AND CENTERS FOR INDEPENDENT LIVING PROGRAM...

  7. 34 CFR 364.39 - What requirements apply to the administration of grants under the Centers for Independent Living...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 2 2011-07-01 2010-07-01 true What requirements apply to the administration of grants under the Centers for Independent Living program? 364.39 Section 364.39 Education Regulations of the..., DEPARTMENT OF EDUCATION STATE INDEPENDENT LIVING SERVICES PROGRAM AND CENTERS FOR INDEPENDENT LIVING PROGRAM...

  8. Description and User Manual for a Web-Based Interface to a Transit-Loss Accounting Program for Monument and Fountain Creeks, El Paso and Pueblo Counties, Colorado

    USGS Publications Warehouse

    Kuhn, Gerhard; Krammes, Gary S.; Beal, Vivian J.

    2007-01-01

    The U.S. Geological Survey, in cooperation with Colorado Springs Utilities, the Colorado Water Conservation Board, and the El Paso County Water Authority, began a study in 2004 with the following objectives: (1) Apply a stream-aquifer model to Monument Creek, (2) use the results of the modeling to develop a transit-loss accounting program for Monument Creek, (3) revise an existing accounting program for Fountain Creek to easily incorporate ongoing and future changes in management of return flows of reusable water, and (4) integrate the two accounting programs into a single program and develop a Web-based interface to the integrated program that incorporates simple and reliable data entry that is automated to the fullest extent possible. This report describes the results of completing objectives (2), (3), and (4) of that study. The accounting program for Monument Creek was developed first by (1) using the existing accounting program for Fountain Creek as a prototype, (2) incorporating the transit-loss results from a stream-aquifer modeling analysis of Monument Creek, and (3) developing new output reports. The capabilities of the existing accounting program for Fountain Creek then were incorporated into the program for Monument Creek and the output reports were expanded to include Fountain Creek. A Web-based interface to the new transit-loss accounting program then was developed that provided automated data entry. An integrated system of 34 nodes and 33 subreaches was created by combining the independent node and subreach systems used in the previously completed stream-aquifer modeling studies for the Monument and Fountain Creek reaches. Important operational criteria that were implemented in the new transit-loss accounting program for Monument and Fountain Creeks included the following: (1) Retain all the reusable water-management capabilities incorporated into the existing accounting program for Fountain Creek; (2) enable daily accounting and transit-loss computations for a variable number of reusable return flows discharged into Monument Creek at selected locations; (3) enable diversion of all or a part of a reusable return flow at any selected node for purposes of storage in off-stream reservoirs or other similar types of reusable water management; and (4) provide flexibility in the accounting program to change the number of return-flow entities, the locations at which the return flows discharge into Monument or Fountain Creeks, or the locations to which the return flows are delivered. The primary component of the Web-based interface is a data-entry form that displays data stored in the accounting program input file; the data-entry form allows for entry and modification of new data, which then is rewritten to the input file. When the data-entry form is displayed, up-to-date discharge data for each station are automatically computed and entered on the data-entry form. Data for native return flows, reusable return flows, reusable return flow diversions, and native diversions also are entered automatically or manually, if needed. In computing the estimated quantities of reusable return flow and the associated transit losses, the accounting program uses two sets of computations. The first set of computations is made between any two adjacent streamflow-gaging stations (termed 'stream-segment loop'); the primary purpose of the stream-segment loop is to estimate the loss or gain in native discharge between the two adjacent streamflow-gaging stations.
The second set of computations is made between any two adjacent nodes (termed 'subreach loop'); the actual transit-loss computations are made in the subreach loop, using the result from the stream-segment loop. The stream-segment loop is completed for a stream segment, and then the subreach loop is completed for each subreach within the segment. When the subreach loop is completed for all subreaches within a stream segment, the stream-segment loop is initiated for the next stream segment.

  9. Effects of competitor spacing in a new class of individual-tree indices of competition: semidistance-independent indices computed for Bitterlich versus fixed-area plots

    Treesearch

    Albert R. Stage; Thomas Ledermann

    2008-01-01

    We illustrate effects of competitor spacing for a new class of individual-tree indices of competition that we call semi-distance-independent. This new class is similar to the class of distance-independent indices except that the index is computed independently at each subsampling plot surrounding a subject tree for which growth is to be modelled. We derive the effects...

  10. Application of a BOSS – Gaussian Interface for QM/MM Simulations of Henry and Methyl Transfer Reactions

    PubMed Central

    Vilseck, Jonah Z.; Kostal, Jakub; Tirado-Rives, Julian; Jorgensen, William L.

    2015-01-01

    Hybrid quantum mechanics and molecular mechanics (QM/MM) computer simulations have become an indispensable tool for studying chemical and biological phenomena for systems too large to treat with quantum mechanics alone. For several decades, semi-empirical QM methods have been used in QM/MM simulations. However, with increased computational resources, the introduction of ab initio and density functional methods into on-the-fly QM/MM simulations is being increasingly preferred. This adaptation can be accomplished with a program interface that tethers independent QM and MM software packages. This report introduces such an interface for the BOSS and Gaussian programs, featuring modification of BOSS to request QM energies and partial atomic charges from Gaussian. A customizable C-shell linker script facilitates the inter-program communication. The BOSS–Gaussian interface also provides convenient access to Charge Model 5 (CM5) partial atomic charges for multiple purposes including QM/MM studies of reactions. In this report, the BOSS–Gaussian interface is applied to a nitroaldol (Henry) reaction and two methyl transfer reactions in aqueous solution. Improved agreement with experiment is found by determining free-energy surfaces with MP2/CM5 QM/MM simulations than in previously reported investigations employing semiempirical methods. PMID:26311531

  11. Design of ceramic components with the NASA/CARES computer program

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Manderscheid, Jane M.; Gyekenyesi, John P.

    1990-01-01

    The ceramics analysis and reliability evaluation of structures (CARES) computer program is described. The primary function of the code is to calculate the fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. These components may be subjected to complex thermomechanical loadings, such as those found in heat engine applications. CARES uses results from MSC/NASTRAN or ANSYS finite-element analysis programs to evaluate how inherent surface and/or volume type flaws affect component reliability. CARES utilizes the Batdorf model and the two-parameter Weibull cumulative distribution function to describe the effects of multiaxial stress states on material strength. The principle of independent action (PIA) and the Weibull normal stress averaging models are also included. Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities are estimated from four-point bend bar or uniform uniaxial tensile specimen fracture strength data. Parameter estimation can be performed for a single or multiple failure modes by using a least-squares analysis or a maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests, 90 percent confidence intervals on the Weibull parameters, and Kanofsky-Srinivasan 90 percent confidence band values are also provided. Examples are provided to illustrate the various features of CARES.
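
    The two-parameter Weibull form at the heart of such estimates is compact. A sketch of the failure probability for a uniformly stressed specimen, with illustrative parameter values (the multiaxial Batdorf and PIA treatments in CARES are far more involved):

    ```python
    import math

    def weibull_failure_probability(stress, sigma0, m):
        """Pf = 1 - exp(-(stress / sigma0)**m); volume/area scaling and
        multiaxial effects are omitted in this sketch."""
        return 1.0 - math.exp(-((stress / sigma0) ** m))

    # Illustrative: 300 MPa applied, characteristic strength 400 MPa, modulus 10.
    print(weibull_failure_probability(300.0, 400.0, 10.0))  # ~0.055
    ```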

  12. Extending Landauer's bound from bit erasure to arbitrary computation

    NASA Astrophysics Data System (ADS)

    Wolpert, David

    The minimal thermodynamic work required to erase a bit, known as Landauer's bound, has been extensively investigated both theoretically and experimentally. However, when viewed as a computation that maps inputs to outputs, bit erasure has a very special property: the output does not depend on the input. Existing analyses of the thermodynamics of bit erasure implicitly exploit this property, and thus cannot be directly extended to analyze the computation of arbitrary input-output maps. Here we show how to extend these earlier analyses of bit erasure to analyze the thermodynamics of arbitrary computations. Doing this establishes a formal connection between the thermodynamics of computers and much of theoretical computer science. We use this extension to analyze the thermodynamics of the canonical "general purpose computer" considered in computer science theory: a universal Turing machine (UTM). We consider a UTM which maps input programs to output strings, where inputs are drawn from an ensemble of random binary sequences, and prove: i) The minimal work needed by a UTM to run some particular input program X and produce output Y is the Kolmogorov complexity of Y minus the log of the "algorithmic probability" of Y. This minimal amount of thermodynamic work has a finite upper bound, which is independent of the output Y, depending only on the details of the UTM. ii) The expected work needed by a UTM to compute some given output Y is infinite. As a corollary, the overall expected work to run a UTM is infinite. iii) The expected work needed by an arbitrary Turing machine T (not necessarily universal) to compute some given output Y can either be infinite or finite, depending on Y and the details of T. To derive these results we must combine ideas from nonequilibrium statistical physics with fundamental results from computer science, such as Levin's coding theorem and other theorems about universal computation. I would like to acknowledge the Santa Fe Institute, Grant No. TWCF0079/AB47 from the Templeton World Charity Foundation, Grant No. FQXi-RHl3-1349 from the FQXi foundation, and Grant No. CHE-1648973 from the U.S. National Science Foundation.
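
    One consistent reading of result (i), transcribed in symbols with kT ln 2 units left implicit, where m(Y) denotes the algorithmic probability of Y and "the log" is taken as the positive quantity log2(1/m(Y)):

    ```latex
    W_{\min}(Y) \;=\; K(Y) \;-\; \log_2\!\frac{1}{m(Y)} \;\le\; c_{\mathrm{UTM}}
    ```

    The Y-independence of the bound is exactly what Levin's coding theorem supplies: K(Y) = log2(1/m(Y)) + O(1), with the O(1) constant depending only on the details of the UTM.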

  13. 78 FR 73195 - Privacy Act of 1974: CMS Computer Matching Program Match No. 2013-01; HHS Computer Matching...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-05

    .... Description of the Matching Program A. General The Computer Matching and Privacy Protection Act of 1988 (Pub... 1974: CMS Computer Matching Program Match No. 2013-01; HHS Computer Matching Program Match No. 1312...). ACTION: Notice of Computer Matching Program (CMP). SUMMARY: In accordance with the requirements of the...

  14. Handheld Computer Use in U.S. Family Practice Residency Programs

    PubMed Central

    Criswell, Dan F.; Parchman, Michael L.

    2002-01-01

    Objective: The purpose of the study was to evaluate the uses of handheld computers (also called personal digital assistants, or PDAs) in family practice residency programs in the United States. Study Design: In November 2000, the authors mailed a questionnaire to the program directors of all American Academy of Family Physicians (AAFP) and American College of Osteopathic Family Practice (ACOFP) residency programs in the United States. Measurements: Data and patterns of the use and non-use of handheld computers were identified. Results: Approximately 50 percent (306 of 610) of the programs responded to the survey. Two thirds of the programs reported that handheld computers were used in their residencies, and an additional 14 percent had plans for implementation within 24 months. Both the Palm and the Windows CE operating systems were used, with the Palm operating system the most common. Military programs had the highest rate of use (8 of 10 programs, 80 percent), and osteopathic programs had the lowest (23 of 55 programs, 42 percent). Of programs that reported handheld computer use, 45 percent had required handheld computer applications that are used uniformly by all users. Funding for handheld computers and related applications was non-budgeted in 76 percent of the programs in which handheld computers were used. In programs providing a budget for handheld computers, the average annual budget per user was $461.58. Interested faculty or residents, rather than computer information services personnel, performed upkeep and maintenance of handheld computers in 72 percent of the programs in which the computers are used. In addition to the installed calendar, memo pad, and address book, the most common clinical uses of handheld computers in the programs were as medication reference tools, electronic textbooks, and clinical computational or calculator-type programs. Conclusions: Handheld computers are widely used in family practice residency programs in the United States. Although handheld computers were designed as electronic organizers, in family practice residencies they are used as medication reference tools, electronic textbooks, and clinical computational programs and to track activities that were previously associated with desktop database applications. PMID:11751806

  15. Handheld computer use in U.S. family practice residency programs.

    PubMed

    Criswell, Dan F; Parchman, Michael L

    2002-01-01

    The purpose of the study was to evaluate the uses of handheld computers (also called personal digital assistants, or PDAs) in family practice residency programs in the United States. In November 2000, the authors mailed a questionnaire to the program directors of all American Academy of Family Physicians (AAFP) and American College of Osteopathic Family Practice (ACOFP) residency programs in the United States. Data and patterns of the use and non-use of handheld computers were identified. Approximately 50 percent (306 of 610) of the programs responded to the survey. Two thirds of the programs reported that handheld computers were used in their residencies, and an additional 14 percent had plans for implementation within 24 months. Both the Palm and the Windows CE operating systems were used, with the Palm operating system the most common. Military programs had the highest rate of use (8 of 10 programs, 80 percent), and osteopathic programs had the lowest (23 of 55 programs, 42 percent). Of programs that reported handheld computer use, 45 percent had required handheld computer applications that are used uniformly by all users. Funding for handheld computers and related applications was non-budgeted in 76 percent of the programs in which handheld computers were used. In programs providing a budget for handheld computers, the average annual budget per user was $461.58. Interested faculty or residents, rather than computer information services personnel, performed upkeep and maintenance of handheld computers in 72 percent of the programs in which the computers are used. In addition to the installed calendar, memo pad, and address book, the most common clinical uses of handheld computers in the programs were as medication reference tools, electronic textbooks, and clinical computational or calculator-type programs. Handheld computers are widely used in family practice residency programs in the United States. Although handheld computers were designed as electronic organizers, in family practice residencies they are used as medication reference tools, electronic textbooks, and clinical computational programs and to track activities that were previously associated with desktop database applications.

  16. Houston prefreshman enrichment program (Houston PREP). Final report, June 10, 1996--August 1, 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-10-01

    The 1996 Houston Pre-freshman Enrichment Program (PREP) was conducted on the campus of the University of Houston-Downtown from June 10 to August 1, 1996. Program participants were recruited from the Greater Houston area. All participants were identified as high-achieving students with an interest in learning about the engineering and science professions. The goal of the program was to better prepare our pre-college youth prior to entering college as mathematics, science and engineering majors. The program participants were middle school and high school students from the Aldine, Alief, Channel View, Crockett, Cypress-Fairbanks, Fort Bend, Galena Park, Houston, Humble, Katy, Klein, North Forest, Pasadena, Private, and Spring Branch Independent School Districts. Of the 197 students starting the program, 170 completed, 142 students were from economically and socially disadvantaged groups underrepresented in the engineering and science professions, and 121 of the 197 were female. Our First Year group for 1996 was composed of 96% minority and women students. Our Second and Third Year students were 100% and 93.75% minority or women, respectively. This gave an overall minority and female population of 93.75%. This year, special efforts were again made to recruit students from minority groups, which caused a significant increase in qualified applicants. However, due to space limitations, 140 applicants were rejected. Investigative and discovery learning were key elements of PREP. The academic components of the program included Algebraic Structures, Engineering, Introduction to Computer Science, Introduction to Physics, Logic and Its Application to Mathematics, Probability and Statistics, Problem Solving Seminar using computers and PLATO software, SAT Preparatory Seminars, and Technical Writing.

  17. The RANDOM computer program: A linear congruential random number generator

    NASA Technical Reports Server (NTRS)

    Miles, R. F., Jr.

    1986-01-01

    The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) The RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) The RANCYCLE and the ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
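
    The linear congruential recurrence is x_{n+1} = (a*x_n + c) mod m. A minimal Python sketch using one commonly published parameter set (the Numerical Recipes constants, not necessarily those selected in the report):

    ```python
    def lcg(seed, a=1664525, c=1013904223, m=2**32):
        """Linear congruential generator yielding floats uniform on [0, 1)."""
        x = seed
        while True:
            x = (a * x + c) % m
            yield x / m

    gen = lcg(seed=42)
    print([next(gen) for _ in range(3)])
    ```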

  18. SU-D-BRD-02: A Web-Based Image Processing and Plan Evaluation Platform (WIPPEP) for Future Cloud-Based Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, X; Liu, L; Xing, L

    Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations with limited computation power and ability of data sharing and software update. We present a web-based image processing and plan evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: web server, image server and computation server. Each independent server communicates with the others through HTTP requests. The web server is the key component that provides visualizations and user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets and the frontend is developed using HTML5, Javascript, and jQuery. The image server is based on the open source DCM4CHEE PACS system. The computation server can be written in any programming language as long as it can send/receive HTTP requests. Our computation server was implemented in Delphi, Python and PHP, which can process data directly or via a C++ program DLL. Results: This software platform is running on a 32-core CPU server virtually hosting the web server, image server, and computation servers separately. Users can visit our internal website with the Chrome browser, select a specific patient, visualize image and RT structures belonging to this patient, and perform image segmentation running the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation server. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy. This system has clearly demonstrated the feasibility of performing image processing and plan evaluation through a web browser and exhibited potential for future cloud-based radiotherapy.

  19. Research Institute for Advanced Computer Science: Annual Report October 1998 through September 1999

    NASA Technical Reports Server (NTRS)

    Leiner, Barry M.; Gross, Anthony R. (Technical Monitor)

    1999-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center (ARC). It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. ARC has been designated NASA's Center of Excellence in Information Technology. In this capacity, ARC is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA ARC and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities; (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, and visiting scientist programs, designed to encourage and facilitate collaboration between the university and NASA information technology research communities.

  20. Research Institute for Advanced Computer Science

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2000-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center. It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. Ames has been designated NASA's Center of Excellence in Information Technology. In this capacity, Ames is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA Ames and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth; (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities; (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, and visiting scientist programs, designed to encourage and facilitate collaboration between the university and NASA information technology research communities.

  1. Does a Wii-based exercise program enhance balance control of independently functioning older adults? A systematic review.

    PubMed

    Laufer, Yocheved; Dar, Gali; Kodesh, Einat

    2014-01-01

    Exercise programs that challenge an individual's balance have been shown to reduce the risk of falls among older adults. Virtual reality computer-based technology that provides the user with opportunities to interact with virtual objects is used extensively for entertainment. There is a growing interest in the potential of virtual reality-based interventions for balance training in older adults. This work comprises a systematic review of the literature to determine the effects of intervention programs utilizing the Nintendo Wii console on balance control and functional performance in independently functioning older adults. Studies were obtained by searching the following databases: PubMed, CINAHL, PEDro, EMBASE, SPORTdiscus, and Google Scholar, followed by a hand search of bibliographic references of the included studies. Included were randomized controlled trials written in English in which Nintendo Wii Fit was used to enhance standing balance performance in older adults and compared with an alternative exercise treatment, placebo, or no treatment. Seven relevant studies were retrieved. The four studies examining the effect of Wii-based exercise compared with no exercise reported positive effects on at least one outcome measure related to balance performance in older adults. Studies comparing Wii-based training with alternative exercise programs generally indicated that the balance improvements achieved by Wii-based training are comparable with those achieved by other exercise programs. The review indicates that Wii-based exercise programs may serve as an alternative to more conventional forms of exercise aimed at improving balance control. However, due to the great variability between studies in terms of the intervention protocols and outcome measures, as well as methodological limitations, definitive recommendations as to optimal treatment protocols and the potential of such an intervention as a safe and effective home-based treatment cannot be made at this point.

  2. An all-at-once reduced Hessian SQP scheme for aerodynamic design optimization

    NASA Technical Reports Server (NTRS)

    Feng, Dan; Pulliam, Thomas H.

    1995-01-01

    This paper introduces a computational scheme for solving a class of aerodynamic design problems that can be posed as nonlinear equality constrained optimizations. The scheme treats the flow and design variables as independent variables, and solves the constrained optimization problem via reduced Hessian successive quadratic programming. It updates the design and flow variables simultaneously at each iteration and allows flow variables to be infeasible before convergence. The solution of an adjoint flow equation is never needed. In addition, a range space basis is chosen so that in a certain sense the 'cross term' ignored in reduced Hessian SQP methods is minimized. Numerical results for a nozzle design using the quasi-one-dimensional Euler equations show that this scheme is computationally efficient and robust. The computational cost of a typical nozzle design is only a fraction more than that of the corresponding analysis flow calculation. Superlinear convergence is also observed, which agrees with the theoretical properties of this scheme. All optimal solutions are obtained by starting far away from the final solution.
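
    As a toy illustration of the all-at-once formulation, the sketch below treats a "flow" variable w and a design variable d as independent unknowns and imposes the flow equation R(w, d) = 0 as an equality constraint. SciPy's SLSQP is a generic SQP solver, not the paper's reduced-Hessian scheme, and the tiny model problem is invented.

        import numpy as np
        from scipy.optimize import minimize

        w_target = 2.0

        def objective(x):
            w, d = x
            return (w - w_target) ** 2 + 0.1 * d ** 2   # design cost

        def flow_residual(x):
            w, d = x
            return w - d ** 2                           # "flow equation" R(w, d) = 0

        # Start far from the solution with an infeasible (w, d) pair, as the
        # scheme permits: flow feasibility is only reached at convergence.
        x0 = np.array([10.0, -5.0])
        sol = minimize(objective, x0, method="SLSQP",
                       constraints={"type": "eq", "fun": flow_residual})
        print(sol.x, sol.fun)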

  3. Multi-Agent Methods for the Configuration of Random Nanocomputers

    NASA Technical Reports Server (NTRS)

    Lawson, John W.

    2004-01-01

    As computational devices continue to shrink, the cost of manufacturing such devices is expected to grow exponentially. One alternative to the costly, detailed design and assembly of conventional computers is to place the nano-electronic components randomly on a chip. The price for such a trivial assembly process is that the resulting chip would not be programmable by conventional means. In this work, we show that such random nanocomputers can be adaptively programmed using multi-agent methods. This is accomplished through the optimization of an associated high dimensional error function. By representing each of the independent variables as a reinforcement learning agent, we are able to achieve convergence much faster than with other methods, including simulated annealing. Standard combinational logic circuits such as adders and multipliers are implemented in a straightforward manner. In addition, we show that the intrinsic flexibility of these adaptive methods allows the random computers to be reconfigured easily, making them reusable. Recovery from faults is also demonstrated.
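
    A hedged sketch of the general idea (not the paper's algorithm): one tiny value-estimating agent per configuration variable, each updated from the shared global error. The bit-matching error function below is an invented stand-in for a random nanocomputer's error surface.

        import random

        N = 20
        target = [random.randint(0, 1) for _ in range(N)]     # hidden "correct" config

        def error(config):                                    # high-dimensional error
            return sum(c != t for c, t in zip(config, target))

        q = [[0.0, 0.0] for _ in range(N)]                    # per-agent action values
        alpha, eps = 0.2, 0.1

        for step in range(2000):
            # Each agent independently picks its setting (epsilon-greedy).
            config = [random.randint(0, 1) if random.random() < eps
                      else max((0, 1), key=lambda a: q[i][a])
                      for i in range(N)]
            reward = -error(config)                           # shared reinforcement
            for i, a in enumerate(config):
                q[i][a] += alpha * (reward - q[i][a])

        best = [max((0, 1), key=lambda a: q[i][a]) for i in range(N)]
        print("remaining error:", error(best))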

  4. Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; VanderWijngaart, Rob F.

    2003-01-01

    We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in benchmarks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite, and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.
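
    The multi-zone pattern itself is compact: zones advance independently, then exchange boundary values. The Python toy below (a 1-D diffusion problem standing in for LU/BT/SP, not benchmark code) shows the structure.

        import numpy as np

        zones = [np.zeros(10) for _ in range(4)]
        zones[0][0] = 1.0                      # boundary condition on the first zone

        def advance(u):                        # independent per-zone time step
            u[1:-1] += 0.25 * (u[2:] - 2 * u[1:-1] + u[:-2])

        for step in range(100):
            for u in zones:                    # each zone advances independently
                advance(u)                     # (this loop is the parallelizable part)
            for left, right in zip(zones, zones[1:]):   # boundary-value exchange
                left[-1], right[0] = right[1], left[-2]

        print([u.mean() for u in zones])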

  5. NASA/University JOint VEnture (JOVE) Program. VIXEN(tm): Object-Oriented, Technology-Adaptive, Virtual Information Exchange Environment

    NASA Technical Reports Server (NTRS)

    Anyiwo, Joshua C.

    2000-01-01

    Vixen is a collection of enabling technologies for uninhibited distributed object computing. In the Spring of 1995 when Vixen was proposed, it was an innovative idea very much ahead of its time. But today the technologies proposed in Vixen have become standard technologies for Enterprise Computing. Sun Microsystems J2EE/EJB specifications, among others, are independently proposed technologies of the Vixen type. I have brought Vixen completely under the J2EE standard in order to maximize interoperability and compatibility with other computing industry efforts. Vixen and the Enterprise JavaBean (EJB) Server technologies are now practically identical; OIL, another Vixen technology, and the Java Messaging System (JMS) are practically identical; and so on. There is no longer anything novel or patentable in the Vixen work performed under this grant. The above discussion, notwithstanding, my independent development of Vixen has significantly helped me, my university, my students and the local community. The undergraduate students who worked with me in developing Vixen have enhanced their expertise in what has become the cutting edge technology of their industry and are therefore well positioned for lucrative employment opportunities in the industry. My academic department has gained a new course: "Multi-media System Development", which provides a highly desirable expertise to our students for employment in any enterprise today. The many Outreach Programs that I conducted during this grant period have exposed local Middle School students to the contributions that NASA is making in our society as well as awakened desires in many such students for careers in Science and Technology. I have applied Vixen to the development of two software packages: (a) JAS: Joshua Application Server - which allows a user to configure an EJB Server to serve a J2EE compliant application over the world wide web; (b) PCM: Professor Course Manager: a J2EE compliant application for configuring a course for distance learning. These types of applications are, however, generally available in the industry today.

  6. 34 CFR 367.1 - What is the Independent Living Services for Older Individuals Who Are Blind program?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Individuals Who Are Blind program? 367.1 Section 367.1 Education Regulations of the Offices of the Department... EDUCATION INDEPENDENT LIVING SERVICES FOR OLDER INDIVIDUALS WHO ARE BLIND General § 367.1 What is the Independent Living Services for Older Individuals Who Are Blind program? This program supports projects that...

  7. 34 CFR 367.1 - What is the Independent Living Services for Older Individuals Who Are Blind program?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Individuals Who Are Blind program? 367.1 Section 367.1 Education Regulations of the Offices of the Department... EDUCATION INDEPENDENT LIVING SERVICES FOR OLDER INDIVIDUALS WHO ARE BLIND General § 367.1 What is the Independent Living Services for Older Individuals Who Are Blind program? This program supports projects that...

  8. 34 CFR 367.1 - What is the Independent Living Services for Older Individuals Who Are Blind program?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Individuals Who Are Blind program? 367.1 Section 367.1 Education Regulations of the Offices of the Department... EDUCATION INDEPENDENT LIVING SERVICES FOR OLDER INDIVIDUALS WHO ARE BLIND General § 367.1 What is the Independent Living Services for Older Individuals Who Are Blind program? This program supports projects that...

  9. A CAD (Classroom Assessment Design) of a Computer Programming Course

    ERIC Educational Resources Information Center

    Hawi, Nazir S.

    2012-01-01

    This paper presents a CAD (classroom assessment design) of an entry-level undergraduate computer programming course "Computer Programming I". CAD has been the product of a long experience in teaching computer programming courses including teaching "Computer Programming I" 22 times. Each semester, CAD is evaluated and modified…

  10. Foundations in Science and Mathematics Program for Middle School and High School Students

    NASA Astrophysics Data System (ADS)

    Desai, Karna Mahadev; Yang, Jing; Hemann, Jason

    2016-01-01

    The Foundations in Science and Mathematics (FSM) is a graduate-student-led summer program designed to help middle school and high school students strengthen their knowledge and skills in mathematics and science. FSM provides two-week-long courses over a broad spectrum of disciplines including astronomy, biology, chemistry, computer programming, geology, mathematics, and physics. Students can choose two types of courses: (1) courses that help students learn the fundamental concepts in basic sciences and mathematics (e.g., "Precalculus"); and (2) knowledge courses that might be excluded from formal schooling (e.g., "Introduction to Universe"). FSM has served over 500 students in the Bloomington, IN, community over six years by acquiring funding from Indiana University and the Indiana Space Grant Consortium. FSM offers graduate students the opportunity to obtain first-hand experience through independent teaching and curriculum design as well as leadership experience. We present the design of the program, review the achievements, and explore the challenges we face. We are open to collaboration with similar educational outreach programs. For more information, please visit http://www.indiana.edu/~fsm/.

  11. Emerald: an object-based language for distributed programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, N.C.

    1987-01-01

    Distributed systems have become more common; however, constructing distributed applications remains a very difficult task. Numerous operating systems and programming languages have been proposed that attempt to simplify the programming of distributed applications. Here a programming language called Emerald is presented that simplifies distributed programming by extending the concepts of object-based languages to the distributed environment. Emerald supports a single model of computation: the object. Emerald objects include private entities such as integers and Booleans, as well as shared, distributed entities such as compilers, directories, and entire file systems. Emerald objects may move between machines in the system, but object invocation is location independent. The uniform semantic model used for describing all Emerald objects makes the construction of distributed applications in Emerald much simpler than in systems where the differences in implementation between local and remote entities are visible in the language semantics. Emerald incorporates a type system that deals only with the specification of objects - ignoring differences in implementation. Thus, two different implementations of the same abstraction may be freely mixed.
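
    A rough present-day Python analogue of Emerald's specification-only typing is a structural protocol: the type names behaviour, and any implementation, local or (notionally) remote, conforms and can be freely mixed. The classes below are invented for illustration, not Emerald code.

        from typing import Protocol

        class Directory(Protocol):
            def lookup(self, name: str) -> str: ...

        class InMemoryDirectory:
            def __init__(self) -> None:
                self.entries = {"readme": "local text"}
            def lookup(self, name: str) -> str:
                return self.entries[name]

        class StubRemoteDirectory:
            # Stand-in for a distributed object; invocation looks identical.
            def lookup(self, name: str) -> str:
                return f"<fetched {name} from a remote node>"

        def show(d: Directory, name: str) -> None:
            print(d.lookup(name))          # location-independent invocation

        show(InMemoryDirectory(), "readme")
        show(StubRemoteDirectory(), "readme")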

  12. A Multiple-star Combined Solution Program - Application to the Population II Binary μ Cas

    NASA Astrophysics Data System (ADS)

    Gudehus, D. H.

    2001-05-01

    A multiple-star combined-solution computer program which can simultaneously fit astrometric, speckle, and spectroscopic data, and solve for the orbital parameters, parallax, proper motion, and masses has been written and is now publicly available. Some features of the program are the ability to scale the weights at run time, hold selected parameters constant, handle up to five spectroscopic subcomponents for the primary and the secondary each, account for the light travel time across the system, account for apsidal motion, plot the results, and write the residuals in position to a standard file for further analysis. The spectroscopic subcomponent data can be represented by reflex velocities and/or by independent measurements. A companion editing program which can manage the data files is included in the package. The program has been applied to the Population II binary μ Cas to derive improved masses and an estimate of the primordial helium abundance. The source code, executables, sample data files, and documentation for OpenVMS and Unix, including Linux, are available at http://www.chara.gsu.edu/~gudehus/binary.html.

  13. The Vocational Training Facility: An Interactive Learning Program to Return Persons With Physical Disabilities to Employment.

    PubMed

    Hammel, J M; Van Der Loos, H F; Lepage, P; Burgar, C; Perkash, I; Shafer, D; Topp, E; Lees, D

    1994-01-01

    This paper describes the results of the program-development phase of the Vocational Training Facility (VTF) taking place at the Palo Alto Veterans Affairs Medical Center Rehabilitation Research and Development Center. The VTF staff has developed a self-paced, multimedia curriculum comprising adapted training packages, interactive videos, and additional training and testing materials designed to teach entry-level desktop publishing and reasonable accommodation skills to individuals with spinal cord injuries. The curriculum is taught via the Macintosh™ computer to allow independent, "hands-off" access to training materials. Each student is given an integrated workstation that is equipped with the Desktop Vocational Assistant Robot (DeVAR); a set of low- and high-technology assistive hardware, software, and devices; and ergonomic furniture and adaptations customized to fit individual learning and access needs. Each student completes a 12-week, full-time training program followed by a 3-month internship with a local corporate sponsor. This paper summarizes the evaluation results of the VTF program by the first nine students, with spinal cord injuries ranging from paraplegia to high-level quadriplegia, who have completed the program.

  14. Integrated Digital Flight Control System for the Space Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The objective of the integrated digital flight control system (DFCS) is to provide rotational and translational control of the space shuttle orbiter in all phases of flight: from launch ascent through orbit to entry and touchdown, and during powered horizontal flights. The program provides a versatile control system structure while maintaining uniform communications with other programs, sensors, and control effectors by using an executive routine/functional subroutine format. The program reads all external variables at a single point, copies them into its dedicated storage, and then calls the required subroutines in the proper sequence. As a result, the flight control program is largely independent of other programs in the computer complex and is equally insensitive to characteristics of the processor configuration. The integrated structure of the control system is described, together with the DFCS executive routine which embodies that structure. The input and output, including jet selection, are included. Specific estimation and control algorithms are shown for the various mission phases: cruise (including horizontal powered flight), entry, on-orbit, and boost. Attitude maneuver routines that interface with the DFCS are included.
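
    The executive structure described, reading all external variables at one point, copying them into dedicated storage, and then calling subroutines in a fixed sequence, can be sketched as follows. The sensor names, phases, and control laws below are placeholders, not the flight code.

        def read_external_inputs():
            # Single point of contact with sensors and other programs.
            return {"attitude": (0.0, 0.1, 0.0), "rate": (0.01, 0.0, 0.0),
                    "phase": "entry"}

        def estimate_state(store):
            store["state_estimate"] = store["attitude"]          # placeholder estimator

        def compute_control(store):
            store["command"] = tuple(-x for x in store["rate"])  # placeholder control law

        SEQUENCE = {"entry": [estimate_state, compute_control],
                    "on_orbit": [estimate_state]}

        def executive_cycle():
            store = dict(read_external_inputs())     # copy into dedicated storage
            for subroutine in SEQUENCE[store["phase"]]:   # fixed calling sequence
                subroutine(store)
            return store.get("command")              # output (e.g. to jet selection)

        print(executive_cycle())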

  15. Community-Based Services for Independent Living: Topic Paper G.

    ERIC Educational Resources Information Center

    National Council on the Handicapped, Washington, DC.

    This paper assesses federal legislation and programs affecting community-based services for independent living for people with disabilities. Independent living entitlement programs are contained in Title VII of the Rehabilitation Act of 1973, and include comprehensive services, centers for independent living, and independent living services for…

  16. A numerical differentiation library exploiting parallel architectures

    NASA Astrophysics Data System (ADS)

    Voglis, C.; Hadjidoukas, P. E.; Lagaris, I. E.; Papageorgiou, D. G.

    2009-08-01

    We present a software library for numerically estimating first and second order partial derivatives of a function by finite differencing. Various truncation schemes are offered, resulting in corresponding formulas that are accurate to order O(h), O(h^2), and O(h^4), h being the differencing step. The derivatives are calculated via forward, backward and central differences. Care has been taken that only feasible points are used in the case where bound constraints are imposed on the variables. The Hessian may be approximated either from function or from gradient values. There are three versions of the software: a sequential version, an OpenMP version for shared memory architectures and an MPI version for distributed systems (clusters). The parallel versions exploit the multiprocessing capability offered by computer clusters, as well as modern multi-core systems, and due to the independent character of the derivative computation, the speedup scales almost linearly with the number of available processors/cores. Program summary. Program title: NDL (Numerical Differentiation Library). Catalogue identifier: AEDG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 73 030. No. of bytes in distributed program, including test data, etc.: 630 876. Distribution format: tar.gz. Programming language: ANSI FORTRAN-77, ANSI C, MPI, OPENMP. Computer: Distributed systems (clusters), shared memory systems. Operating system: Linux, Solaris. Has the code been vectorised or parallelized?: Yes. RAM: The library uses O(N) internal storage, N being the dimension of the problem. Classification: 4.9, 4.14, 6.5. Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, etc. The parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems. Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries. Restrictions: The library uses only double precision arithmetic. Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and given the level of the desired accuracy, the proper formula is automatically employed. Running time: Running time depends on the function's complexity. The test run took 15 ms for the serial distribution, 0.6 s for the OpenMP and 4.2 s for the MPI parallel distribution on 2 processors.
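
    For orientation, here are first-derivative formulas at three accuracy levels (assumed here to be O(h), O(h^2) and O(h^4)), each with a step chosen to balance truncation against round-off error as the summary describes. This is an illustrative sketch, not NDL source code.

        import math

        eps = 2.2e-16                                   # double-precision unit roundoff

        def d_forward(f, x, h=None):                    # O(h)
            h = h or math.sqrt(eps) * max(1.0, abs(x))
            return (f(x + h) - f(x)) / h

        def d_central(f, x, h=None):                    # O(h^2)
            h = h or eps ** (1 / 3) * max(1.0, abs(x))
            return (f(x + h) - f(x - h)) / (2 * h)

        def d_central4(f, x, h=None):                   # O(h^4), five-point stencil
            h = h or eps ** (1 / 5) * max(1.0, abs(x))
            return (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12 * h)

        for d in (d_forward, d_central, d_central4):
            print(d.__name__, abs(d(math.sin, 1.0) - math.cos(1.0)))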

  17. Comparison of two computer programs by predicting turbulent mixing of helium in a ducted supersonic airstream

    NASA Technical Reports Server (NTRS)

    Pan, Y. S.; Drummond, J. P.; Mcclinton, C. R.

    1978-01-01

    Two parabolic flow computer programs, SHIP (a finite-difference program) and COMOC (a finite-element program), are used for predicting three-dimensional turbulent reacting flow fields in supersonic combustors. The theoretical foundations of the two computer programs are described, and the programs are then applied to a three-dimensional turbulent mixing experiment. The cold (nonreacting) flow experiment was performed to study the mixing of helium jets with a supersonic airstream in a rectangular duct. Surveys of the flow field at an upstream station were used as the initial data by both programs; surveys at a downstream station provided a comparison for assessing program accuracy. Both computer programs predicted the experimental results and data trends reasonably well. However, the comparison between the computations from the two programs indicated that SHIP was more accurate in computation and more efficient in both computer storage and computing time than COMOC.

  18. Computer program CDCID: an automated quality control program using CDC update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singer, G.L.; Aguilar, F.

    1984-04-01

    A computer program, CDCID, has been developed in coordination with a quality control program to provide a highly automated method of documenting changes to computer codes at EG and G Idaho, Inc. The method uses the standard CDC UPDATE program in such a manner that updates and their associated documentation are easily made and retrieved in various formats. The method allows each card image of a source program to point to the document which describes it, who created the card, and when it was created. The method described is applicable to the quality control of computer programs in general. The computer program described is executable only on CDC computing systems, but the program could be modified and applied to any computing system with an adequate updating program.

  19. 18 CFR Appendix C to Part 2 - Nationwide Proceeding Computation of Federal Income Tax Allowance Independent Producers, Pipeline...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Nationwide Proceeding Computation of Federal Income Tax Allowance Independent Producers, Pipeline Affiliates and Pipeline Producers... Total computed revenue 9,465,231,966 8,985,807,669 2,336,439,376 16(gross income) 17 18 revenue...

  20. 18 CFR Appendix C to Part 2 - Nationwide Proceeding Computation of Federal Income Tax Allowance Independent Producers, Pipeline...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Nationwide Proceeding Computation of Federal Income Tax Allowance Independent Producers, Pipeline Affiliates and Pipeline Producers... Total computed revenue 9,465,231,966 8,985,807,669 2,336,439,376 16(gross income) 17 18 revenue...

  1. 18 CFR Appendix C to Part 2 - Nationwide Proceeding Computation of Federal Income Tax Allowance Independent Producers, Pipeline...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Nationwide Proceeding Computation of Federal Income Tax Allowance Independent Producers, Pipeline Affiliates and Pipeline Producers... Total computed revenue 9,465,231,966 8,985,807,669 2,336,439,376 16(gross income) 17 18 revenue...

  2. Neural Networks for Computer Vision: A Framework for Specifications of a General Purpose Vision System

    NASA Astrophysics Data System (ADS)

    Skrzypek, Josef; Mesrobian, Edmond; Gungner, David J.

    1989-03-01

    The development of autonomous land vehicles (ALV) capable of operating in an unconstrained environment has proven to be a formidable research effort. The unpredictability of events in such an environment calls for the design of a robust perceptual system, an impossible task requiring the programming of a system based on the expectation of future, unconstrained events. Hence, the need for a "general purpose" machine vision system that is capable of perceiving and understanding images in an unconstrained environment in real-time. The research undertaken at the UCLA Machine Perception Laboratory addresses this need by focusing on two specific issues: 1) the long term goals for machine vision research as a joint effort between the neurosciences and computer science; and 2) a framework for evaluating progress in machine vision. In the past, vision research has been carried out independently within different fields including neurosciences, psychology, computer science, and electrical engineering. Our interdisciplinary approach to vision research is based on the rigorous combination of computational neuroscience, as derived from neurophysiology and neuropsychology, with computer science and electrical engineering. The primary motivation behind our approach is that the human visual system is the only existing example of a "general purpose" vision system and, using a neurally based computing substrate, it can complete all necessary visual tasks in real-time.

  3. Re-Computation of Numerical Results Contained in NACA Report No. 496

    NASA Technical Reports Server (NTRS)

    Perry, Boyd, III

    2015-01-01

    An extensive examination of NACA Report No. 496 (NACA 496), "General Theory of Aerodynamic Instability and the Mechanism of Flutter," by Theodore Theodorsen, is described. The examination included checking equations and solution methods and re-computing interim quantities and all numerical examples in NACA 496. The checks revealed that NACA 496 contains computational shortcuts (time- and effort-saving devices for engineers of the time) and clever artifices (employed in its solution methods), but, unfortunately, also contains numerous tripping points (aspects of NACA 496 that have the potential to cause confusion) and some errors. The re-computations were performed employing the methods and procedures described in NACA 496, but using modern computational tools. With some exceptions, the magnitudes and trends of the original results were in fair-to-very-good agreement with the re-computed results. The exceptions included what are speculated to be computational errors in the original in some instances and transcription errors in the original in others. Independent flutter calculations were performed and, in all cases, including those where the original and re-computed results differed significantly, were in excellent agreement with the re-computed results. Appendix A contains NACA 496; Appendix B contains a Matlab(Registered) program that performs the re-computation of results; Appendix C presents three alternate solution methods, with examples, for the two-degree-of-freedom solution method of NACA 496; Appendix D contains the three-degree-of-freedom solution method (outlined in NACA 496 but never implemented), with examples.

  4. Configuring compute nodes of a parallel computer in an operational group into a plurality of independent non-overlapping collective networks

    DOEpatents

    Archer, Charles J.; Inglett, Todd A.; Ratterman, Joseph D.; Smith, Brian E.

    2010-03-02

    Methods, apparatus, and products are disclosed for configuring compute nodes of a parallel computer in an operational group into a plurality of independent non-overlapping collective networks, the compute nodes in the operational group connected together for data communications through a global combining network, that include: partitioning the compute nodes in the operational group into a plurality of non-overlapping subgroups; designating one compute node from each of the non-overlapping subgroups as a master node; and assigning, to the compute nodes in each of the non-overlapping subgroups, class routing instructions that organize the compute nodes in that non-overlapping subgroup as a collective network such that the master node is a physical root.
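
    The three claimed steps, partitioning the operational group into non-overlapping subgroups, designating a master node for each, and assigning routing that roots each subgroup's collective network at its master, can be sketched schematically. The data structures below are invented stand-ins for the real class-routing instructions.

        def configure_subnetworks(nodes, n_subgroups):
            # Strided slices give disjoint (non-overlapping) subgroups.
            subgroups = [nodes[i::n_subgroups] for i in range(n_subgroups)]
            networks = []
            for members in subgroups:
                master = members[0]                        # designated physical root
                routing = {node: master for node in members}   # class-routing analogue
                networks.append({"master": master, "members": members,
                                 "routing": routing})
            return networks

        for net in configure_subnetworks(list(range(16)), 4):
            print("root", net["master"], "members", net["members"])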

  5. Computer analysis of multicircuit shells of revolution by the field method

    NASA Technical Reports Server (NTRS)

    Cohen, G. A.

    1975-01-01

    The field method, presented previously for the solution of even-order linear boundary value problems defined on one-dimensional open branch domains, is extended to boundary value problems defined on one-dimensional domains containing circuits. This method converts the boundary value problem into two successive numerically stable initial value problems, which may be solved by standard forward integration techniques. In addition, a new method for the treatment of singular boundary conditions is presented. This method, which amounts to a partial interchange of the roles of force and displacement variables, is problem independent with respect to both accuracy and speed of execution. This method was implemented in a computer program to calculate the static response of ring stiffened orthotropic multicircuit shells of revolution to asymmetric loads. Solutions are presented for sample problems which illustrate the accuracy and efficiency of the method.
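
    A 1-D analogue conveys the idea (this is not Cohen's shell program): for the linear boundary value problem y'' = q(x)y + r(x) with y(0) = alpha and y(1) = beta, the substitution y = R(x)v + z(x), with v = y', yields the forward initial value problems R' = 1 - qR^2 and z' = -qRz - Rr, after which a backward sweep for v recovers the solution.

        import numpy as np
        from scipy.integrate import solve_ivp

        q = lambda x: 1.0                       # coefficients in y'' = q y + r
        r = lambda x: 0.0
        alpha, beta = 0.0, np.sinh(1.0)         # exact solution is y = sinh(x)

        # First stable IVP: integrate the field variables R, z forward.
        def field(x, s):
            R, z = s
            return [1.0 - q(x) * R**2, -q(x) * R * z - R * r(x)]

        fld = solve_ivp(field, (0.0, 1.0), [0.0, alpha], dense_output=True, rtol=1e-9)
        R1, z1 = fld.y[:, -1]
        v1 = (beta - z1) / R1                   # enforce the right-hand condition

        # Second stable IVP: sweep v = y' backward, recovering y = R v + z.
        def back(x, v):
            R, z = fld.sol(x)
            return [q(x) * (R * v[0] + z) + r(x)]

        bwd = solve_ivp(back, (1.0, 0.0), [v1], dense_output=True, rtol=1e-9)
        x = np.linspace(0.0, 1.0, 5)
        y = fld.sol(x)[0] * bwd.sol(x)[0] + fld.sol(x)[1]
        print(np.max(np.abs(y - np.sinh(x))))   # small: agrees with exact solution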

  6. 34 CFR 365.11 - How is the allotment of Federal funds for State independent living (IL) services computed?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 2 2011-07-01 2010-07-01 true How is the allotment of Federal funds for State independent living (IL) services computed? 365.11 Section 365.11 Education Regulations of the Offices of the... EDUCATION STATE INDEPENDENT LIVING SERVICES How Does the Secretary Make a Grant to a State? § 365.11 How is...

  7. 34 CFR 365.11 - How is the allotment of Federal funds for State independent living (IL) services computed?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 2 2010-07-01 2010-07-01 false How is the allotment of Federal funds for State independent living (IL) services computed? 365.11 Section 365.11 Education Regulations of the Offices of the... EDUCATION STATE INDEPENDENT LIVING SERVICES How Does the Secretary Make a Grant to a State? § 365.11 How is...

  8. QuickAssist: Reading and Learning Vocabulary Independently with the Help of CALL and NLP Technologies

    ERIC Educational Resources Information Center

    Wood, Peter

    2011-01-01

    Independent learning is a buzz word that is often used in connection with computer technologies applied to the area of foreign language instruction. This chapter takes a critical look at some of the stereotypes that exist with regard to computer-assisted language learning (CALL) as a money saver and an easy way to create an "independent"…

  9. SILHOUETTE - HIDDEN LINE COMPUTER CODE WITH GENERALIZED SILHOUETTE SOLUTION

    NASA Technical Reports Server (NTRS)

    Hedgley, D. R.

    1994-01-01

    Flexibility in choosing how to display computer-generated three-dimensional drawings has become increasingly important in recent years. A major consideration is the enhancement of the realism and aesthetics of the presentation. A polygonal representation of objects, even with hidden lines removed, is not always desirable. A more pleasing pictorial representation often can be achieved by removing some of the remaining visible lines, thus creating silhouettes (or outlines) of selected surfaces of the object. Additionally, it should be noted that this silhouette feature allows warped polygons. This means that any polygon can be decomposed into constituent triangles. Considering these triangles as members of the same family will present a polygon with no interior lines, and thus removes the restriction of flat polygons. SILHOUETTE is a program for calligraphic drawings that can render any subset of polygons as a silhouette with respect to itself. The program is flexible enough to be applicable to every class of object. SILHOUETTE offers all possible combinations of silhouette and nonsilhouette specifications for an arbitrary solid. Thus, it is possible to enhance the clarity of any three-dimensional scene presented in two dimensions. Input to the program can be line segments or polygons. Polygons designated with the same number will be drawn as a silhouette of those polygons. SILHOUETTE is written in FORTRAN 77 and requires a graphics package such as DI-3000. The program has been implemented on a DEC VAX series computer running VMS and used 65K of virtual memory without a graphics package linked in. The source code is intended to be machine independent. This program is available on a 5.25 inch 360K MS-DOS format diskette (standard distribution) and is also available on a 9-track 1600 BPI ASCII CARD IMAGE magnetic tape. SILHOUETTE was developed in 1986 and was last updated in 1992.
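
    One standard way to extract silhouette edges from a triangle set (not necessarily SILHOUETTE's internal algorithm) is to keep each edge whose two adjacent triangles face opposite ways relative to the viewer, plus any edge with only one adjacent triangle:

        import numpy as np

        def facing(tri, verts, view=np.array([0.0, 0.0, 1.0])):
            a, b, c = (verts[i] for i in tri)
            normal = np.cross(b - a, c - a)
            return np.dot(normal, view) > 0

        def silhouette_edges(tris, verts):
            edge_faces = {}
            for t, tri in enumerate(tris):
                for e in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
                    edge_faces.setdefault(tuple(sorted(e)), []).append(t)
            front = [facing(t, verts) for t in tris]
            return [e for e, faces in edge_faces.items()
                    if len(faces) == 1 or front[faces[0]] != front[faces[1]]]

        # Two coplanar triangles forming a square: only the boundary is a silhouette.
        verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
        print(silhouette_edges([(0, 1, 2), (0, 2, 3)], verts))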

  10. MCdevelop - a universal framework for Stochastic Simulations

    NASA Astrophysics Data System (ADS)

    Slawinska, M.; Jadach, S.

    2011-03-01

    We present MCdevelop, a universal computer framework for developing and exploiting the wide class of Stochastic Simulations (SS) software. This powerful universal SS software development tool has been derived from a series of scientific projects for precision calculations in high energy physics (HEP), which feature a wide range of functionality in the SS software needed for advanced precision Quantum Field Theory calculations for the past LEP experiments and for the ongoing LHC experiments at CERN, Geneva. MCdevelop is a "spin-off" product of HEP to be exploited in other areas, while it will still serve to develop new SS software for HEP experiments. Typically SS involve independent generation of large sets of random "events", often requiring considerable CPU power. Since SS jobs usually do not share memory, they are easy to parallelize. Efficient development, testing and parallel running of SS software require a convenient framework to develop software source code, deploy and monitor batch jobs, and merge and analyse results from multiple parallel jobs, even before the production runs are terminated. Throughout the years of development of stochastic simulations for HEP, a sophisticated framework featuring all the above mentioned functionality has been implemented. MCdevelop represents its latest version, written mostly in C++ (GNU compiler gcc). It uses Autotools to build binaries (optionally managed within the KDevelop 3.5.3 Integrated Development Environment (IDE)). It uses the open-source ROOT package for histogramming, graphics and the mechanism of persistency for the C++ objects. MCdevelop helps to run multiple parallel jobs on any computer cluster with an NQS-type batch system. Program summary. Program title: MCdevelop. Catalogue identifier: AEHW_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHW_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 48 136. No. of bytes in distributed program, including test data, etc.: 355 698. Distribution format: tar.gz. Programming language: ANSI C++. Computer: Any computer system or cluster with a C++ compiler and UNIX-like operating system. Operating system: Most UNIX systems, Linux. The application programs were thoroughly tested under Ubuntu 7.04, 8.04 and CERN Scientific Linux 5. Has the code been vectorised or parallelised?: Tools (scripts) for optional parallelisation on a PC farm are included. RAM: 500 bytes. Classification: 11.3. External routines: ROOT package version 5.0 or higher (http://root.cern.ch/drupal/). Nature of problem: Developing any type of stochastic simulation program for high energy physics and other areas. Solution method: Object Oriented programming in C++ with an added persistency mechanism, batch scripts for running on PC farms, and Autotools.
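
    The parallelization pattern the framework automates is simple in miniature: independent, separately seeded jobs that share no memory, with per-job results merged afterwards. The sketch below uses a pi estimate as a stand-in for a real event generator; it is not MCdevelop code.

        import random
        from multiprocessing import Pool

        def mc_job(args):
            seed, n = args
            rng = random.Random(seed)                   # independent stream per job
            hits = sum(rng.random() ** 2 + rng.random() ** 2 < 1.0 for _ in range(n))
            return hits, n

        if __name__ == "__main__":
            with Pool(4) as pool:
                results = pool.map(mc_job, [(seed, 100_000) for seed in range(4)])
            hits = sum(h for h, _ in results)           # merge step
            total = sum(n for _, n in results)
            print("pi ~", 4 * hits / total)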

  11. [Role of the independent microbiology laboratory in supporting infection control programs in small to mid-sized hospitals].

    PubMed

    Yanagisawa, Hideji

    2009-05-01

    With the revision of the Medical Service Law in 2006 by the Japanese Ministry of Health, Labour and Welfare (MHLW), all healthcare institutions are now required to implement a healthcare risk management program, including an infection control program. At a national level, an infection control surveillance program (JANIS) was implemented in July 2007. Regular weekly, monthly, and yearly infection control surveillance reports from independent microbiology laboratories can make significant contributions to infection control programs in small to mid-sized hospitals; furthermore, such programs are consistent with the framework of the MHLW's objective of strengthening risk management in healthcare institutions. Against the backdrop of current efforts to improve risk management, independent laboratories can make a significant contribution. Independent laboratories must play a role beyond merely receiving and processing specimens for microbiological examination. In addition to generating results for patients, hospital epidemiological data that contribute to local infection control programs must be a value-added component of the service. A major obstacle for independent laboratories in making a significant contribution to risk management is the current reimbursement system, which makes it economically impossible for independent laboratories to support infection control programs in healthcare institutions.

  12. An image-processing software package: UU and Fig for optical metrology applications

    NASA Astrophysics Data System (ADS)

    Chen, Lujie

    2013-06-01

    Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], Fourier Transform [2], digital image correlation [3], camera calibration [4], etc., in which image processing is a critical and indispensable component. While it is not too difficult to obtain a wide variety of image-processing programs from the internet, few cater to the relatively special area of optical metrology. This paper introduces an image-processing software package, UU (data processing) and Fig (data rendering), that incorporates many useful functions for processing optical metrological data. The cross-platform programs UU and Fig are developed based on wxWidgets. At the time of writing, they have been tested on Windows, Linux and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, fitting of data, 3D image processing, vector image processing, precision device control (rotary stage, PZT stage, etc.), point cloud to surface reconstruction, volume rendering, and batch processing. The software package is currently used in a number of universities for teaching and research.

  13. Modifications of the U.S. Geological Survey modular, finite-difference, ground-water flow model to read and write geographic information system files

    USGS Publications Warehouse

    Orzol, Leonard L.; McGrath, Timothy S.

    1992-01-01

    This report documents modifications to the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model, commonly called MODFLOW, so that it can read and write files used by a geographic information system (GIS). The modified model program is called MODFLOWARC. Simulation programs such as MODFLOW generally require large amounts of input data and produce large amounts of output data. Viewing data graphically, generating head contours, and creating or editing model data arrays such as hydraulic conductivity are examples of tasks that currently are performed either by the use of independent software packages or by tedious manual editing, manipulating, and transferring of data. GIS programs are commonly used to facilitate preparation of the model input data and analysis of model output data; however, auxiliary programs are frequently required to translate data between programs. Data translations are required when different programs use different data formats. Thus, the user might use GIS techniques to create model input data, run a translation program to convert input data into a format compatible with the ground-water flow model, run the model, run a translation program to convert the model output into the correct format for GIS, and use GIS to display and analyze this output. MODFLOWARC avoids the two translation steps and transfers data directly to and from the ground-water flow model. This report documents the design and use of MODFLOWARC and includes instructions for data input/output of the Basic, Block-centered flow, River, Recharge, Well, Drain, Evapotranspiration, General-head boundary, and Streamflow-routing packages. Modifications to MODFLOW and the Streamflow-Routing package were minimized. Flow charts and computer-program code describe the modifications to the original computer codes for each of these packages. Appendix A contains a discussion on the operation of MODFLOWARC using a sample problem.

  14. Application Portable Parallel Library

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott

    1995-01-01

    The Application Portable Parallel Library (APPL) computer program is a subroutine-based message-passing software library intended to provide a consistent interface to a variety of the multiprocessor computers on the market today. It minimizes the effort needed to move an application program from one computer to another: the user develops an application program once and then easily moves it from the parallel computer on which it was created to another parallel computer. ("Parallel computer" here also includes a heterogeneous collection of networked computers.) APPL is written in the C language, with one FORTRAN 77 subroutine for UNIX-based computers, and is callable from application programs written in C or FORTRAN 77.
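
    The library's role can be sketched as a thin, uniform send/receive interface behind which the machine-specific transport hides. In the toy below the "backend" is an in-process queue, and the API names are invented rather than APPL's.

        import queue

        class Backend:
            # Stand-in for a vendor's message-passing layer; only this
            # class would change when moving to another machine.
            def __init__(self, n):
                self.boxes = [queue.Queue() for _ in range(n)]

        class Channel:
            # The consistent interface the application codes against.
            def __init__(self, backend, rank):
                self.backend, self.rank = backend, rank
            def send(self, dest, msg):
                self.backend.boxes[dest].put((self.rank, msg))
            def recv(self):
                return self.backend.boxes[self.rank].get()

        backend = Backend(2)
        p0, p1 = Channel(backend, 0), Channel(backend, 1)
        p0.send(1, "hello from rank 0")
        print(p1.recv())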

  15. CTserver: A Computational Thermodynamics Server for the Geoscience Community

    NASA Astrophysics Data System (ADS)

    Kress, V. C.; Ghiorso, M. S.

    2006-12-01

    The CTserver platform is an Internet-based computational resource that provides on-demand services in Computational Thermodynamics (CT) to a diverse geoscience user base. This NSF-supported resource can be accessed at ctserver.ofm-research.org. The CTserver infrastructure leverages a high-quality and rigorously tested software library of routines for computing equilibrium phase assemblages and for evaluating internally consistent thermodynamic properties of materials, e.g. mineral solid solutions and a variety of geological fluids, including magmas. Thermodynamic models are currently available for 167 phases. Recent additions include Duan, Møller and Weare's model for supercritical C-O-H-S, extended to include SO2 and S2 species, and an entirely new associated solution model for O-S-Fe-Ni sulfide liquids. This software library is accessed via the CORBA Internet protocol for client-server communication. CORBA provides a standardized, object-oriented, language and platform independent, fast, low-bandwidth interface to phase property modules running on the server cluster. Network transport, language translation and resource allocation are handled by the CORBA interface. Users access server functionality in two principal ways. Clients written as browser-based Java applets may be downloaded which provide specific functionality such as retrieval of thermodynamic properties of phases, computation of phase equilibria for systems of specified composition, or modeling the evolution of these systems along some particular reaction path. This level of user interaction requires minimal programming effort and is ideal for classroom use. A more universal and flexible mode of CTserver access involves making remote procedure calls from user programs directly to the server public interface. The CTserver infrastructure relieves the user of the burden of implementing and testing the often complex thermodynamic models of real liquids and solids. A pilot application of this distributed architecture involves CFD computation of magma convection at Volcan Villarrica with magma properties and phase proportions calculated at each spatial node and at each time step via distributed function calls to MELTS-objects executing on the CTserver. Documentation and programming examples are provided at http://ctserver.ofm-research.org.

  16. A computer program for two-particle generalized coefficients of fractional parentage

    NASA Astrophysics Data System (ADS)

    Deveikis, A.; Juodagalvis, A.

    2008-10-01

    We present a FORTRAN90 program GCFP for the calculation of the generalized coefficients of fractional parentage (generalized CFPs or GCFP). The approach is based on the observation that the multi-shell CFPs can be expressed in terms of single-shell CFPs, while the latter can be readily calculated employing a simple enumeration scheme of antisymmetric A-particle states and an efficient method of construction of the idempotent matrix eigenvectors. The program provides fast calculation of GCFPs for a given particle number and produces results possessing numerical uncertainties below the desired tolerance. A single j-shell is defined by four quantum numbers, (e,l,j,t). A supplemental C++ program parGCFP allows calculations to be done in batches and/or in parallel. Program summary. Program title: GCFP, parGCFP. Catalogue identifier: AEBI_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBI_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 17 199. No. of bytes in distributed program, including test data, etc.: 88 658. Distribution format: tar.gz. Programming language: FORTRAN 77/90 (GCFP), C++ (parGCFP). Computer: Any computer with suitable compilers. The program GCFP requires a FORTRAN 77/90 compiler; the auxiliary program parGCFP requires a GNU-C++ compatible compiler, while its parallel version additionally requires MPI-1 standard libraries. Operating system: Linux (Ubuntu, Scientific) (all programs); also checked on Windows XP (GCFP, serial version of parGCFP). RAM: The memory demand depends on the computation and output mode. If this mode is not 4, the program GCFP demands the following amounts of memory on a computer with a Linux operating system. It requires around 2 MB of RAM for the A=12 system at E⩽2. Computation of the A=50 particle system requires around 60 MB of RAM at E=0 and ~70 MB at E=2 (note, however, that the calculation of this system will take a very long time). If the computation and output mode is set to 4, the memory demands by GCFP are significantly larger. Calculation of GCFPs of the A=12 system at E=1 requires 145 MB. The program parGCFP requires an additional 2.5 and 4.5 MB of memory for the serial and parallel versions, respectively. Classification: 17.18. Nature of problem: The program GCFP generates a list of two-particle coefficients of fractional parentage for several j-shells with isospin. Solution method: The method is based on the observation that multishell coefficients of fractional parentage can be expressed in terms of single-shell CFPs [1]. The latter are calculated using the algorithm [2,3] for a spectral decomposition of an antisymmetrization operator matrix Y. The coefficients of fractional parentage are those eigenvectors of the antisymmetrization operator matrix Y that correspond to unit eigenvalues. A computer code for these coefficients is available [4]. The program GCFP offers computation of two-particle multishell coefficients of fractional parentage. The program parGCFP allows a batch calculation using one input file. Sets of GCFPs are independent and can be calculated in parallel. Restrictions: A<86 when E=0 (due to the memory constraints); small numbers of particles allow significantly higher excitations, though the shell with j⩾11/2 cannot become full (an implementation constraint).
Unusual features: Using the program GCFP it is possible to determine allowed particle configurations without the GCFP computation. The GCFPs can be calculated either for all particle configurations at once or for a specified particle configuration. The values of GCFPs can be printed out with a complete specification in either one file or with the parent and daughter configurations printed in separate files. The latter output mode requires additional time and RAM memory. It is possible to restrict the (J,T) values of the considered particle configurations. (Here J is the total angular momentum and T is the total isospin of the system.) The program parGCFP produces several result files, the number of which equals the number of particle configurations. To work correctly, the program GCFP needs to be compiled to read parameters from the standard input (the default setting). Running time: This depends on the size of the problem. The minimum time is required if the computation and output mode (CompMode) is not 4, but the resulting file is larger. A system with A=12 particles at E=0 (all 9411 GCFPs) took around 1 sec on a Pentium4 2.8 GHz processor with 1 MB L2 cache. The program required about 14 min to calculate all 1.3×10 GCFPs of E=1. The time for all 5.5×10 GCFPs of E=2 was about 53 hours. For this number of particles, the calculation time of both E=0 and E=1 with CompMode = 1 and 4 is nearly the same, when no other processes are running. The case of E=2 could not be calculated with CompMode = 4, because the RAM memory was insufficient. In general, the latter CompMode requires a longer computation time, although the resulting files are smaller in size. The program parGCFP adds virtually no time overhead. Its parallel version speeds up the calculation; however, the results need to be collected from several files, one created for each configuration. References: [1] J. Levinsonas, Works of Lithuanian SSR Academy of Sciences 4 (1957) 17. [2] A. Deveikis, A. Bončkus, R. Kalinauskas, Lithuanian Phys. J. 41 (2001) 3. [3] A. Deveikis, R.K. Kalinauskas, B.R. Barrett, Ann. Phys. 296 (2002) 287. [4] A. Deveikis, Comput. Phys. Comm. 173 (2005) 186. (CPC Catalogue ID. ADWI_v1_0)
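
    The numerical core of the solution method can be shown in miniature: build an idempotent (projection) matrix standing in for the antisymmetrization operator Y and keep the eigenvectors with eigenvalue 1. The matrix below is invented; real Y matrices come from the enumeration scheme described above.

        import numpy as np

        rng = np.random.default_rng(0)
        basis = rng.normal(size=(5, 2))                 # span of the "allowed" states
        Y = basis @ np.linalg.pinv(basis)               # orthogonal projector: Y @ Y == Y

        vals, vecs = np.linalg.eigh((Y + Y.T) / 2)      # symmetrize defensively, diagonalize
        cfp = vecs[:, np.isclose(vals, 1.0)]            # unit-eigenvalue eigenvectors

        print(np.round(vals, 6))                        # eigenvalues are 0s and 1s
        print("CFP-like vectors shape:", cfp.shape)     # (5, 2): one per unit eigenvalue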

  17. Integration of NASA Research into Undergraduate Education in Math, Science, Engineering and Technology at North Carolina A&T State University

    NASA Technical Reports Server (NTRS)

    Monroe, Joseph; Kelkar, Ajit

    2003-01-01

    The NASA PAIR program incorporated NASA-sponsored research into the undergraduate environment at North Carolina Agricultural and Technical State University. This program is designed to significantly improve undergraduate education in the areas of mathematics, science, engineering, and technology (MSET) by directly benefiting from the experiences of NASA field centers, affiliated industrial partners and academic institutions. The three basic goals of the program were enhancing core courses in the MSET curriculum, upgrading core engineering laboratories to complement the upgraded MSET curriculum, and conducting research training for undergraduates in MSET disciplines through a sophomore shadow program and through Research Experience for Undergraduates (REU) programs. Since the inception of the program, nine courses have been modified to include NASA-related topics and research. These courses have impacted over 900 students in the first three years of the program. The Electrical Engineering circuits lab was completely re-equipped with computer-controlled data acquisition equipment. The Physics lab was upgraded to implement better sensory data acquisition to enhance students' understanding of course concepts. In addition, a new instrumentation laboratory was developed in the Department of Mechanical Engineering. Research training for A&T students was conducted through four different programs: the Apprentice program, the Developers program, the Sophomore Shadow program and the Independent Research program. These programs provided opportunities for an average of forty students per semester.

  18. Programming the social computer.

    PubMed

    Robertson, David; Giunchiglia, Fausto

    2013-03-28

    The aim of 'programming the global computer' was identified by Milner and others as one of the grand challenges of computing research. At the time this phrase was coined, it was natural to assume that this objective might be achieved primarily through extending programming and specification languages. The Internet, however, has brought with it a different style of computation that (although harnessing variants of traditional programming languages) operates in a style different to those with which we are familiar. The 'computer' on which we are running these computations is a social computer in the sense that many of the elementary functions of the computations it runs are performed by humans, and successful execution of a program often depends on properties of the human society over which the program operates. These sorts of programs are not programmed in a traditional way and may have to be understood in a way that is different from the traditional view of programming. This shift in perspective raises new challenges for the science of the Web and for computing in general.

  19. OIL—Output input language for data connectivity between geoscientific software applications

    NASA Astrophysics Data System (ADS)

    Amin Khan, Khalid; Akhter, Gulraiz; Ahmad, Zulfiqar

    2010-05-01

    Geoscientific computing has become so complex that no single software application can perform all the processing steps required to obtain the desired results. For a given set of analyses, several specialized software applications are therefore required, and they must be interconnected for electronic flow of data. In this network of applications, the outputs of one application become the inputs of others. Each application usually involves more than one data type and may have its own data formats, making it incompatible with other applications in terms of data connectivity. Consequently, data format conversion utilities are developed in-house to provide connectivity between applications. In practice there is no end to this problem: each time a new application is added to the system, a new set of data conversion utilities must be developed. This paper presents a flexible data format engine, programmable through a platform-independent, interpreted language named Output Input Language (OIL). Its unique architecture allows input and output formats to be defined independently of each other by two separate programs. Thus the read and write code for each format is written only once, and a data connectivity link between two formats is established by combining their read and write programs. This results in fewer programs with no redundancy and maximum reuse, enabling rapid application development and easy maintenance of data connectivity links.
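
    The decoupling described above can be illustrated with a short sketch: register one reader and one writer per format, and build any conversion link by composing them. Below is a minimal Python stand-in for the idea, assuming hypothetical format names and a neutral in-memory representation; it is not OIL syntax.

      # Each format registers its reader and writer exactly once; a
      # conversion link between two formats is just reader + writer
      # composed, so n formats need n readers and n writers rather
      # than n*(n-1) dedicated converters.
      READERS, WRITERS = {}, {}

      def reader(fmt):
          def register(fn):
              READERS[fmt] = fn
              return fn
          return register

      def writer(fmt):
          def register(fn):
              WRITERS[fmt] = fn
              return fn
          return register

      @reader("csv_xy")            # hypothetical format name
      def read_csv_xy(text):
          # Parse "x,y" lines into the neutral in-memory representation.
          return [tuple(map(float, line.split(",")))
                  for line in text.splitlines() if line]

      @writer("fixed_xy")          # hypothetical format name
      def write_fixed_xy(records):
          # Emit fixed-width columns from the neutral representation.
          return "\n".join(f"{x:12.4f}{y:12.4f}" for x, y in records)

      def convert(text, src, dst):
          # Any src -> dst link is one reader composed with one writer.
          return WRITERS[dst](READERS[src](text))

      print(convert("1.0,2.0\n3.5,4.25", "csv_xy", "fixed_xy"))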

  20. Capitalizing on Community: the Small College Environment and the Development of Researchers

    NASA Astrophysics Data System (ADS)

    Stoneking, M. R.

    2014-03-01

    Liberal arts colleges constitute an important source of and training ground for future scientists. At Lawrence University, we take advantage of our small college environment to prepare physics students for research careers by complementing content acquisition with skill development and project experience distributed throughout the curriculum and with co-curricular elements that are tied to our close-knit, supportive physics community. Small classes and frequent contact between physics majors and faculty members offer opportunities for regular and detailed feedback on the development of research-relevant skills such as laboratory record-keeping, data analysis, electronic circuit design, computational programming, experimental design and modification, and scientific communication. Part of our approach is to balance collaborative group work on small projects (such as Arduino-based electronics projects and optical design challenges) with independent work (on, for example, advanced laboratory experimental extensions and senior capstone projects). Communal spaces, specialized experimental and computational facilities, and active on-campus research programs attract eager students to the program, establish a community-based atmosphere, provide unique opportunities for the development of research aptitude, and offer opportunities for genuine contribution to a research program. Recently, we have also been encouraging innovative tendencies in physics majors through intentional efforts to develop personal characteristics, encouraging students to become more tolerant of ambiguity, more willing to take risks and initiative, and more articulate. Indicators of the success of our approach include the roughly ten physics majors who graduate each year and our program's high ranking among institutions whose graduates go on to receive the Ph.D. in physics. Work supported in part by the National Science Foundation.

  1. How To Create an Independent Research Program.

    ERIC Educational Resources Information Center

    Krieger, Melanie Jacobs

    This guide explains how to establish a research program within a school and how to get students involved in independent research projects and national research competitions. Chapter 1, "Selling the Program," examines benefits to the community, school, teachers, and students. Chapter 2, "Assessing Your Situation," discusses how independent research…

  2. On the theory of 3-phase squirrel-cage induction motors including space harmonics and mutual slotting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papp, G.C.

    1991-03-01

    In this paper, general equations for the asynchronous squirrel-cage motor which contain the influence of space harmonics and mutual slotting are derived by using, among other techniques, the power-invariant symmetrical component transformation and a time-dependent transformation with which, under certain circumstances, the rotor-position angle can be removed from the coefficient matrix. The developed models, implemented in a machine-independent computer program, form powerful tools with which the influence of space harmonics in relation to the geometric data of specific motors can be analyzed for steady-state and transient performance.

  3. IMAGES: An interactive image processing system

    NASA Technical Reports Server (NTRS)

    Jensen, J. R.

    1981-01-01

    The IMAGES interactive image processing system was created specifically for undergraduate remote sensing education in geography. The system is interactive, relatively inexpensive to operate, almost hardware independent, and responsive to numerous users at one time in a time-sharing mode. Most important, it provides a medium whereby theoretical remote sensing principles discussed in lecture may be reinforced in laboratory as students perform computer-assisted image processing. In addition to its use in academic and short course environments, the system has also been used extensively to conduct basic image processing research. The flow of information through the system is discussed including an overview of the programs.

  4. Scheduler for multiprocessor system switch with selective pairing

    DOEpatents

    Gara, Alan; Gschwind, Michael Karl; Salapura, Valentina

    2015-01-06

    System, method and computer program product for scheduling threads in a multiprocessing system with selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). The method configures the selective pairing facility to use checking to provide one highly reliable thread for high-reliability operation, and allocates threads that indicate a need for hardware checking to the corresponding paired processor cores. The method likewise configures the selective pairing facility to provide multiple independent cores, and allocates threads that indicate inherent resilience to the corresponding unpaired processor cores.

  5. SIRU development. Volume 3: Software description and program documentation

    NASA Technical Reports Server (NTRS)

    Oehrle, J.

    1973-01-01

    The development and initial evaluation of a strapdown inertial reference unit (SIRU) system are discussed. The SIRU configuration is a modular inertial subsystem with hardware and software features that achieve fault-tolerant operational capabilities. The SIRU redundant hardware design is formulated about a six-gyro and six-accelerometer instrument module package. The six-axis array provides redundant independent sensing, and the symmetry enables the formulation of an optimal software redundant data processing structure with self-contained fault detection and isolation (FDI) capabilities. The basic SIRU software coding system used in the DDP-516 computer is documented.

  6. A method to estimate weight and dimensions of aircraft gas turbine engines. Volume 1: Method of analysis

    NASA Technical Reports Server (NTRS)

    Pera, R. J.; Onat, E.; Klees, G. W.; Tjonneland, E.

    1977-01-01

    Weight and envelope dimensions of aircraft gas turbine engines are estimated within plus or minus 5% to 10% using a computer method based on correlations of component weight and design features of 29 data base engines. Rotating components are estimated by a preliminary design procedure where blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc., are the primary independent variables used. The development and justification of the method selected, the various methods of analysis, the use of the program, and a description of the input/output data are discussed.

  7. Computer modeling of a hot filament diamond deposition reactor

    NASA Technical Reports Server (NTRS)

    Kuczmarski, Maria A.; Washlock, Paul A.; Angus, John C.

    1991-01-01

    A commercial fluid mechanics program, FLUENT, has been applied to the modeling of a hot-filament diamond deposition reactor. Streamlines and contours of constant temperature and species concentrations are obtained for practical reactor geometries and conditions. The modeling is presently restricted to two-dimensional simulations and to a chemical mechanism of ten independent homogeneous and surface reactions. Comparisons are made between the predicted power consumption, substrate temperature, and concentrations of atomic hydrogen and methyl radical and the values reported in the literature. The results to date indicate that the modeling can aid in the rational design and analysis of practical reactor configurations.

  8. Unstructured Euler flow solutions using hexahedral cell refinement

    NASA Technical Reports Server (NTRS)

    Melton, John E.; Cappuccio, Gelsomina; Thomas, Scott D.

    1991-01-01

    An attempt is made to extend grid refinement into three dimensions by using unstructured hexahedral grids. The flow solver is developed using TIGER (Topologically Independent Grid, Euler Refinement) as the starting point. The program uses an unstructured hexahedral mesh and a modified version of the Jameson four-stage, finite-volume Runge-Kutta algorithm for integration of the Euler equations. The unstructured mesh allows for local refinement appropriate to each freestream condition, thereby concentrating mesh cells in the regions of greatest interest. This increases computational efficiency because the refinement is not required to extend throughout the entire flow field.

  9. Gender Differences in the Use of Computers, Programming, and Peer Interactions in Computer Science Classrooms

    ERIC Educational Resources Information Center

    Stoilescu, Dorian; Egodawatte, Gunawardena

    2010-01-01

    Research shows that female and male students in undergraduate computer science programs view computer culture differently. Female students are interested more in the use of computers than in doing programming, whereas male students see computer science mainly as a programming activity. The overall purpose of our research was not to find new…

  10. Identification and evaluation of fluvial-dominated deltaic (Class I oil) reservoirs in Oklahoma. Quarterly technical progress report, July 1--September 30, 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mankin, C.J.; Banken, M.K.

    The Oklahoma Geological Survey (OGS), the Geo Information Systems department, and the School of Petroleum and Geological Engineering at the University of Oklahoma are engaged in a program to identify and address Oklahoma's oil recovery opportunities in fluvial-dominated deltaic (FDD) reservoirs. This program includes the systematic and comprehensive collection and evaluation of information on all of Oklahoma's FDD reservoirs and the recovery technologies that have been (or could be) applied to those reservoirs with commercial success. This data collection and evaluation effort will be the foundation for an aggressive, multifaceted technology transfer program that is designed to support all of Oklahoma's oil industry, with particular emphasis on smaller companies and independent operators in their attempts to maximize the economic producibility of FDD reservoirs. Specifically, this project will identify all FDD oil reservoirs in the state; group those reservoirs into plays that have similar depositional origins; collect, organize, and analyze all available data; conduct characterization and simulation studies on selected reservoirs in each play; and implement a technology transfer program targeted to the operators of FDD reservoirs. Activities were focused primarily on technology transfer elements of the project. This included regional play analysis and mapping, geologic field studies, and reservoir modeling for secondary waterflood simulations as used in publication folios and workshops. The computer laboratory was fully operational for operator use. Computer systems design and database development activities were ongoing.

  11. Senior Computational Scientist | Center for Cancer Research

    Cancer.gov

    The Basic Science Program (BSP) pursues independent, multidisciplinary research in basic and applied molecular biology, immunology, retrovirology, cancer biology, and human genetics. Research efforts and support are an integral part of the Center for Cancer Research (CCR) at the Frederick National Laboratory for Cancer Research (FNLCR). The Cancer & Inflammation Program (CIP), Basic Science Program, HLA Immunogenetics Section, under the leadership of Dr. Mary Carrington, studies the influence of human leukocyte antigens (HLA) and specific KIR/HLA genotypes on the risk of and outcomes to infection, cancer, autoimmune disease, and maternal-fetal disease. Recent studies have focused on the impact of HLA gene expression in disease, the molecular mechanisms regulating expression levels, and the functional basis for the effect of differential expression on disease outcome. The lab's further focus is on the genetic basis for resistance/susceptibility to disease conferred by immunogenetic variation. KEY ROLES/RESPONSIBILITIES: The Senior Computational Scientist will provide research support to the CIP-BSP-HLA Immunogenetics Section, performing biostatistical design, analysis, and reporting of research projects conducted in the lab. This individual will be involved in the implementation of statistical models and data preparation. The successful candidate should have five or more years of competent, innovative biostatistics/bioinformatics research experience beyond doctoral training; considerable experience with statistical software such as SAS, R, and S-Plus; sound knowledge and demonstrated experience of theoretical and applied statistics; the ability to write program code to analyze data using statistical analysis software; and the ability to contribute to the interpretation and publication of research results.

  12. A Parallel Processing Algorithm for Remote Sensing Classification

    NASA Technical Reports Server (NTRS)

    Gualtieri, J. Anthony

    2005-01-01

    A current thread in parallel computation is the use of cluster computers created by networking a few to thousands of commodity general-purpose workstation-level commuters using the Linux operating system. For example on the Medusa cluster at NASA/GSFC, this provides for super computing performance, 130 G(sub flops) (Linpack Benchmark) at moderate cost, $370K. However, to be useful for scientific computing in the area of Earth science, issues of ease of programming, access to existing scientific libraries, and portability of existing code need to be considered. In this paper, I address these issues in the context of tools for rendering earth science remote sensing data into useful products. In particular, I focus on a problem that can be decomposed into a set of independent tasks, which on a serial computer would be performed sequentially, but with a cluster computer can be performed in parallel, giving an obvious speedup. To make the ideas concrete, I consider the problem of classifying hyperspectral imagery where some ground truth is available to train the classifier. In particular I will use the Support Vector Machine (SVM) approach as applied to hyperspectral imagery. The approach will be to introduce notions about parallel computation and then to restrict the development to the SVM problem. Pseudocode (an outline of the computation) will be described and then details specific to the implementation will be given. Then timing results will be reported to show what speedups are possible using parallel computation. The paper will close with a discussion of the results.
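
    The speedup comes entirely from the independence of the per-tile tasks, so the parallel version is just a map over tiles. Here is a minimal Python sketch of that decomposition, using a placeholder classifier rather than the paper's trained SVM or its cluster runtime:

      # Each tile is classified independently, so the map over tiles can
      # run serially or in parallel with no change to the per-tile code.
      from multiprocessing import Pool

      def classify_tile(tile):
          # Stand-in for per-pixel classification of one image tile.
          return [1 if sum(pixel) > 1.0 else 0 for pixel in tile]

      if __name__ == "__main__":
          tiles = [[(0.2, 0.9), (0.1, 0.3)],
                   [(0.8, 0.7), (0.0, 0.1)]]

          serial = [classify_tile(t) for t in tiles]     # sequential baseline
          with Pool() as pool:                           # parallel version
              parallel = pool.map(classify_tile, tiles)  # same results, N workers

          assert serial == parallel
          print(parallel)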

  13. Awareness of pharmaceutical cost-assistance programs among inner-city seniors.

    PubMed

    Federman, Alex D; Safran, Dana Gelb; Keyhani, Salomeh; Cole, Helen; Halm, Ethan A; Siu, Albert L

    2009-04-01

    Lack of awareness may be a significant barrier to participation by low- and middle-income seniors in pharmaceutical cost-assistance programs. The goal of this study was to determine whether older adults' awareness of 2 major state and federal pharmaceutical cost-assistance programs was associated with the seniors' ability to access and process information about assistance programs. Data were gathered from a cross-sectional study of independently living, English- or Spanish-speaking adults aged ≥60 years. Participants were interviewed in 30 community-based settings (19 apartment complexes and 11 senior centers) in New York, New York. The analysis focused on adults aged ≥65 years who lacked Medicaid coverage. Multivariable logistic regression was used to model program awareness as a function of information access (family/social support, attendance at senior or community centers and places of worship, viewing of live health insurance presentations, instrumental activities of daily living, site of medical care, computer use, and having a proxy decision maker for health insurance matters) and information-processing ability (education level, English proficiency, health literacy, and cognitive function). The main outcome measure was awareness of New York's state pharmaceutical assistance program (Elderly Pharmaceutical Insurance Coverage [EPIC

  14. Sharing electronic structure and crystallographic data with ETSF_IO

    NASA Astrophysics Data System (ADS)

    Caliste, D.; Pouillon, Y.; Verstraete, M. J.; Olevano, V.; Gonze, X.

    2008-11-01

    We present a library of routines whose main goal is to read and write exchangeable files (NetCDF file format) storing electronic structure and crystallographic information. It is based on the specification agreed inside the European Theoretical Spectroscopy Facility (ETSF). Accordingly, this library is nicknamed ETSF_IO. The purpose of this article is to give both an overview of the ETSF_IO library and a closer look at its usage. ETSF_IO is designed to be robust and easy to use, close to Fortran read and write routines. To facilitate its adoption, a complete documentation of the input and output arguments of the routines is available in the package, as well as six tutorials explaining in detail various possible uses of the library routines. Catalogue identifier: AEBG_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEBG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Gnu Lesser General Public License No. of lines in distributed program, including test data, etc.: 63 156 No. of bytes in distributed program, including test data, etc.: 363 390 Distribution format: tar.gz Programming language: Fortran 95 Computer: All systems with a Fortran95 compiler Operating system: All systems with a Fortran95 compiler Classification: 7.3, 8 External routines: NetCDF, http://www.unidata.ucar.edu/software/netcdf Nature of problem: Store and exchange electronic structure data and crystallographic data independently of the computational platform, language and generating software Solution method: Implement a library based both on NetCDF file format and an open specification (http://etsf.eu/index.php?page=standardization)
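
    Because the underlying container is NetCDF, a file written by one ETSF_IO-based Fortran code can be inspected from any language with a NetCDF binding, which is the platform independence the summary refers to. A hedged Python illustration using the netCDF4 package follows; the file name and the variable name checked below are assumptions for the example, not guaranteed parts of the ETSF specification.

      # Open an ETSF-style NetCDF file and list what it contains.
      from netCDF4 import Dataset

      with Dataset("crystal.etsf.nc", "r") as nc:    # hypothetical file name
          print(list(nc.dimensions))                  # dimensions declared in the file
          print(list(nc.variables))                   # all stored variables
          if "primitive_vectors" in nc.variables:     # assumed variable name
              print(nc.variables["primitive_vectors"][:])  # read as an array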

  15. Multi-school collaboration to develop and test nutrition computer modules for pediatric residents.

    PubMed

    Roche, Patricia L; Ciccarelli, Mary R; Gupta, Sandeep K; Hayes, Barbara M; Molleston, Jean P

    2007-09-01

    The provision of essential nutrition-related content in US medical education has been deficient, despite efforts of the federal government and multiple professional organizations. Novel and efficient approaches are needed. A multi-department project was developed to create and pilot a computer-based compact disc instructional program covering the nutrition topics of oral rehydration therapy, calcium, and vitamins. Funded by an internal medical school grant, the content of the modules was written by Department of Pediatrics faculty. The modules were built by School of Informatics faculty and students, and were tested on a convenience sampling of 38 pediatric residents in a randomized controlled trial performed by a registered dietitian/School of Health and Rehabilitation Sciences Master's degree candidate. The modules were reviewed for content by the pediatric faculty principal investigator and the registered dietitian/School of Health and Rehabilitation Sciences graduate student. Residents completed a pretest of nutrition knowledge and attitude toward nutrition and Web-based instruction. Half the group was given three programs (oral rehydration therapy, calcium, and vitamins) on compact disc for study over 6 weeks. Both study and control groups completed a posttest. Pre- and postintervention objective test results in study vs control groups and attitudinal survey results before and after intervention in the study group were compared. The experimental group demonstrated significantly better posttrial objective test performance compared to the control group (P=0.0005). The study group tended toward improvement, whereas the control group performance declined substantially between pre- and posttests. Study group resident attitudes toward computer-based instruction improved. Use of these computer modules prompted almost half of the residents in the study group to independently pursue relevant nutrition-related information. This inexpensive, collaborative, multi-department effort to design a computer-based nutrition curriculum positively impacted both resident knowledge and attitudes.

  16. Design of object-oriented distributed simulation classes

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D. (Principal Investigator)

    1995-01-01

    Distributed simulation of aircraft engines as part of a computer-aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for 'Numerical Propulsion Simulation System'. NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is a need for communication among the parallel executing processors, which in turn implies a need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon the MIT 'Actor' model of a concurrent object and uses 'connectors' to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out, with the result that the communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented. Its application to realistic configurations has not been carried out.
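
    A rough Python sketch of the connector idea described above: a component talks to its neighbor through a connector, and whether that neighbor is local or on another machine is decided at execution time from a placement table, with no change to the component code. The remote path is stubbed out, and none of this is actual NPSS code; all names are illustrative.

      class LocalConnector:
          def __init__(self, target):
              self.target = target
          def send(self, data):
              return self.target.receive(data)     # direct in-process call

      class RemoteConnector:
          def __init__(self, host, port):
              self.addr = (host, port)
          def send(self, data):
              # A real system would serialize `data` and ship it to
              # self.addr; stubbed here to keep the sketch self-contained.
              return f"sent {data!r} to {self.addr}"

      class Component:
          def receive(self, data):
              return data * 2                      # stand-in for one time step

      def make_connector(name, placement, registry):
          # Chosen from the run-time placement table, not hard-coded.
          loc = placement[name]
          if loc == "local":
              return LocalConnector(registry[name])
          return RemoteConnector(*loc)

      placement = {"compressor": "local", "turbine": ("node7", 9000)}
      registry = {"compressor": Component()}

      c1 = make_connector("compressor", placement, registry)
      c2 = make_connector("turbine", placement, registry)
      print(c1.send(3), "|", c2.send(3))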

  17. Design of Object-Oriented Distributed Simulation Classes

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1995-01-01

    Distributed simulation of aircraft engines as part of a computer-aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for "Numerical Propulsion Simulation System". NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is a need for communication among the parallel executing processors, which in turn implies a need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon the MIT "Actor" model of a concurrent object and uses "connectors" to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out, with the result that the communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented. Its application to realistic configurations has not been carried out.

  18. 41 CFR 105-64.110 - When may GSA establish computer matching programs?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... computer matching programs? 105-64.110 Section 105-64.110 Public Contracts and Property Management Federal... GSA establish computer matching programs? (a) System managers will establish computer matching... direction of the GSA Data Integrity Board that will be established when and if computer matching programs...

  19. 41 CFR 105-64.110 - When may GSA establish computer matching programs?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... computer matching programs? 105-64.110 Section 105-64.110 Public Contracts and Property Management Federal... GSA establish computer matching programs? (a) System managers will establish computer matching... direction of the GSA Data Integrity Board that will be established when and if computer matching programs...

  20. 41 CFR 105-64.110 - When may GSA establish computer matching programs?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... computer matching programs? 105-64.110 Section 105-64.110 Public Contracts and Property Management Federal... GSA establish computer matching programs? (a) System managers will establish computer matching... direction of the GSA Data Integrity Board that will be established when and if computer matching programs...

  1. 41 CFR 105-64.110 - When may GSA establish computer matching programs?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... computer matching programs? 105-64.110 Section 105-64.110 Public Contracts and Property Management Federal... GSA establish computer matching programs? (a) System managers will establish computer matching... direction of the GSA Data Integrity Board that will be established when and if computer matching programs...

  2. 41 CFR 105-64.110 - When may GSA establish computer matching programs?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... computer matching programs? 105-64.110 Section 105-64.110 Public Contracts and Property Management Federal... GSA establish computer matching programs? (a) System managers will establish computer matching... direction of the GSA Data Integrity Board that will be established when and if computer matching programs...

  3. Adolescents' physical activity: competition between perceived neighborhood sport facilities and home media resources.

    PubMed

    Wong, Bonny Yee-Man; Cerin, Ester; Ho, Sai-Yin; Mak, Kwok-Kei; Lo, Wing-Sze; Lam, Tai-Hing

    2010-04-01

    To examine the independent, competing, and interactive effects of perceived availability of specific types of media in the home and neighborhood sport facilities on adolescents' leisure-time physical activity (PA). Survey data from 34 369 students in 42 Hong Kong secondary schools were collected (2006-07). Respondents reported moderate-to-vigorous leisure-time PA, presence of sport facilities in the neighborhood and of media equipment in the home. Being sufficiently physically active was defined as engaging in at least 30 minutes of non-school leisure-time PA on a daily basis. Logistic regression and post-estimation linear combinations of regression coefficients were used to examine the independent and competing effects of sport facilities and media equipment on leisure-time PA. Perceived availability of sport facilities was positively (OR(boys) = 1.17; OR(girls) = 1.26), and that of computer/Internet negatively (OR(boys) = 0.48; OR(girls) = 0.41), associated with being sufficiently active. A significant positive association between video game console and being sufficiently active was found in girls (OR(girls) = 1.19) but not in boys. Compared with adolescents without sport facilities and media equipment, those who reported sport facilities only were more likely to be physically active (OR(boys) = 1.26; OR(girls) = 1.34), while those who additionally reported computer/Internet were less likely to be physically active (OR(boys) = 0.60; OR(girls) = 0.54). Perceived availability of sport facilities in the neighborhood may positively impact on adolescents' level of physical activity. However, having computer/Internet may cancel out the effects of active opportunities in the neighborhood. This suggests that physical activity programs for adolescents need to consider limiting the access to computer-mediated communication as an important intervention component.

  4. Embedded Web Technology: Internet Technology Applied to Real-Time System Control

    NASA Technical Reports Server (NTRS)

    Daniele, Carl J.

    1998-01-01

    The NASA Lewis Research Center is developing software tools to bridge the gap between the traditionally non-real-time Internet technology and the real-time, embedded-controls environment for space applications. Internet technology has been expanding at a phenomenal rate. The simple World Wide Web browsers (such as earlier versions of Netscape, Mosaic, and Internet Explorer) that resided on personal computers just a few years ago only enabled users to log into and view a remote computer site. With current browsers, users not only view but also interact with remote sites. In addition, the technology now supports numerous computer platforms (PC's, MAC's, and Unix platforms), thereby providing platform independence. In contrast, the development of software to interact with a microprocessor (embedded controller) that is used to monitor and control a space experiment has generally been a unique development effort. For each experiment, a specific graphical user interface (GUI) has been developed. This procedure works well for a single-user environment. However, the interface for the International Space Station (ISS) Fluids and Combustion Facility will have to enable scientists throughout the world and astronauts onboard the ISS, using different computer platforms, to interact with their experiments in the Fluids and Combustion Facility. Developing a specific GUI for all these users would be cost-prohibitive. An innovative solution to this requirement, developed at Lewis, is to use Internet technology, where the general problem of platform independence has already been partially solved, and to leverage this expanding technology as new products are developed. This approach led to the development of the Embedded Web Technology (EWT) program at Lewis, which has the potential to significantly reduce software development costs for both flight and ground software.

  5. Choice or Chance. Planning for Independent College Marketing and Retention. Report on the Admissions and Retention Phase of Northwest Area Foundation's Independent College Program 1973-1975.

    ERIC Educational Resources Information Center

    Hayden, Mary; And Others

    As a result of a study in 1972, in which independent college administrators were asked to assess their growth needs and problems, the Northwest Area Foundation established the Independent College Program to assist colleges in dealing with their needs. The first phase, the Admissions and Retention Program, was designed to assist colleges in coping…

  6. Houston Pre-Freshman Enrichment Program (Houston PREP). Final report, June 9, 1997--July 25, 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-10-01

    The 1997 Houston Pre-Freshman Enrichment Program (PREP) was conducted at the campus of the University of Houston-Downtown from June 9 to July 25, 1997. Program participants were recruited from the Greater Houston Area. All participants were identified as high-achieving students with an interest in learning about the engineering and science professions. The goal of the program was to better prepare our pre-college youth prior to entering college as mathematics, science, and engineering majors. The program participants were middle school and high school students from the Aldine, Alief, Channel View, Clear Creek, Cypress-Fairbanks, Fort Bend, Galena Park, Houston, Humble, Katy, Klein, North Forest, Pasadena, Private, and Spring Branch Independent School Districts. Of the 194 students starting the program, 165 were from economically and socially disadvantaged groups under-represented in the engineering and science professions, and 118 of the 194 were women. Our First Year group for 1997 was composed of 96% minority and women students. Second and Third Year students combined were 96% minority or women. With financial support from the Center for Computational Sciences and Advanced Distributed Simulation, the Fourth Year Program was added to PREP this year. Twelve students completed the program (83% minority or women).

  7. Perceived barriers to completing an e-learning program on evidence-based medicine.

    PubMed

    Gagnon, Marie-Pierre; Légaré, France; Labrecque, Michel; Frémont, Pierre; Cauchon, Michel; Desmartis, Marie

    2007-01-01

    The Continuing Professional Development Center of the Faculty of Medicine at Laval University offers an internet-based program on evidence-based medicine (EBM). After one year, only three physicians out of the 40 who willingly paid to register had completed the entire program. This descriptive study aimed to identify physicians' beliefs regarding their completion of this online program. Using theoretical concepts from the Theory of Planned Behaviour, a semi-structured telephone interview guide was developed to assess respondents' attitudes, perceived subjective norms, perceived obstacles and facilitating conditions with respect to completing this internet-based program. Three independent reviewers performed content analysis of the interview transcripts to obtain an appropriate level of reliability. Findings were shared and organised according to theoretical categories of beliefs. A total of 35 physicians (88% response rate) were interviewed. Despite perceived advantages to completing the internet-based program, barriers remained, especially those related to physicians' perceptions of time constraints. Lack of personal discipline and unfamiliarity with computers were also perceived as important barriers. This study offers a theoretical basis to understand physicians' beliefs towards completing an internet-based continuing medical education (CME) program on EBM. Based upon respondents' insights, several modifications were carried out to enhance the uptake of the program by physicians and, therefore, its implementation.

  8. Computer program user's manual for FIREFINDER digital topographic data verification library dubbing system

    NASA Astrophysics Data System (ADS)

    Ceres, M.; Heselton, L. R., III

    1981-11-01

    This manual describes the computer programs for the FIREFINDER Digital Topographic Data Verification-Library-Dubbing System (FFDTDVLDS), and will assist in the maintenance of these programs. The manual contains detailed flow diagrams and associated descriptions for each computer program routine and subroutine. Complete computer program listings are also included. This information should be used when changes are made in the computer programs. The operating system has been designed to minimize operator intervention.

  9. From Petascale to Exascale: Eight Focus Areas of R&D Challenges for HPC Simulation Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Springmeyer, R; Still, C; Schulz, M

    2011-03-17

    Programming models bridge the gap between the underlying hardware architecture and the supporting layers of software available to applications. Programming models are different from both programming languages and application programming interfaces (APIs). Specifically, a programming model is an abstraction of the underlying computer system that allows for the expression of both algorithms and data structures. In comparison, languages and APIs provide implementations of these abstractions and allow the algorithms and data structures to be put into practice - a programming model exists independently of the choice of both the programming language and the supporting APIs. Programming models are typically focused on achieving increased developer productivity, performance, and portability to other system designs. The rapidly changing nature of processor architectures and the complexity of designing an exascale platform provide significant challenges for these goals. Several other factors are likely to impact the design of future programming models. In particular, the representation and management of increasing levels of parallelism, concurrency and memory hierarchies, combined with the ability to maintain a progressive level of interoperability with today's applications are of significant concern. Overall the design of a programming model is inherently tied not only to the underlying hardware architecture, but also to the requirements of applications and libraries including data analysis, visualization, and uncertainty quantification. Furthermore, the successful implementation of a programming model is dependent on exposed features of the runtime software layers and features of the operating system. Successful use of a programming model also requires effective presentation to the software developer within the context of traditional and new software development tools. Consideration must also be given to the impact of programming models on both languages and the associated compiler infrastructure. Exascale programming models must reflect several, often competing, design goals. These design goals include desirable features such as abstraction and separation of concerns. However, some aspects are unique to large-scale computing. For example, interoperability and composability with existing implementations will prove critical. In particular, performance is the essential underlying goal for large-scale systems. A key evaluation metric for exascale models will be the extent to which they support these goals rather than merely enable them.

  10. AV Programs for Computer Know-How.

    ERIC Educational Resources Information Center

    Mandell, Phyllis Levy

    1985-01-01

    Lists 44 audiovisual programs (most released between 1983 and 1984) grouped in seven categories: computers in society, introduction to computers, computer operations, languages and programing, computer graphics, robotics, computer careers. Excerpts from "School Library Journal" reviews, price, and intended grade level are included. Names…

  11. Guidelines for development of NASA (National Aeronautics and Space Administration) computer security training programs

    NASA Technical Reports Server (NTRS)

    Tompkins, F. G.

    1983-01-01

    The report presents guidance for the NASA Computer Security Program Manager and the NASA Center Computer Security Officials as they develop training requirements and implement computer security training programs. NASA audiences are categorized based on the computer security knowledge required to accomplish identified job functions. Training requirements, in terms of training subject areas, are presented for both computer security program management personnel and computer resource providers and users. Sources of computer security training are identified.

  12. Computer programs: Operational and mathematical, a compilation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Several computer programs which are available through the NASA Technology Utilization Program are outlined. Presented are: (1) Computer operational programs which can be applied to resolve procedural problems swiftly and accurately. (2) Mathematical applications for the resolution of problems encountered in numerous industries. Although the functions which these programs perform are not new and similar programs are available in many large computer center libraries, this collection may be of use to centers with limited systems libraries and for instructional purposes for new computer operators.

  13. DARTAB: a program to combine airborne radionuclide environmental exposure data with dosimetric and health effects data to generate tabulations of predicted health impacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Begovich, C.L.; Eckerman, K.F.; Schlatter, E.C.

    1981-08-01

    The DARTAB computer code combines radionuclide environmental exposure data with dosimetric and health effects data to generate tabulations of the predicted impact of radioactive airborne effluents. DARTAB is independent of the environmental transport code used to generate the environmental exposure data and the codes used to produce the dosimetric and health effects data. Therefore human dose and risk calculations need not be added to every environmental transport code. Options are included in DARTAB to permit the user to request tabulations by various topics (e.g., cancer site, exposure pathway, etc.) to facilitate characterization of the human health impacts of the effluents. The DARTAB code was written at ORNL for the US Environmental Protection Agency, Office of Radiation Programs.
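
    The decoupling is the key design point: the tabulation step consumes exposure data from any transport code plus separate dose-factor and risk-factor tables, so the dose and risk logic is written once. A minimal Python sketch of that structure follows; all numbers and the nuclide/pathway names are purely illustrative, not real coefficients and not DARTAB's actual file formats.

      exposure = {                       # output of some transport code
          ("I-131", "inhalation"): 2.0,  # exposure units per year (illustrative)
          ("I-131", "ingestion"):  5.0,
      }
      dose_factor = {"I-131": 0.01}      # dose per unit exposure (illustrative)
      risk_factor = {"I-131": 1e-4}      # risk per unit dose (illustrative)

      def tabulate(exposure, dose_factor, risk_factor):
          # Combine exposure with the independently supplied factor tables.
          rows = []
          for (nuclide, pathway), amount in sorted(exposure.items()):
              dose = amount * dose_factor[nuclide]
              rows.append((nuclide, pathway, dose, dose * risk_factor[nuclide]))
          return rows

      for nuclide, pathway, dose, risk in tabulate(exposure, dose_factor,
                                                   risk_factor):
          print(f"{nuclide:6s} {pathway:11s} dose={dose:.4f} risk={risk:.2e}")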

  14. Cooperating reduction machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kluge, W.E.

    1983-11-01

    This paper presents a concept and a system architecture for the concurrent execution of program expressions of a concrete reduction language based on lambda-expressions. If formulated appropriately, these expressions are well suited for concurrent execution, following a demand-driven model of computation. In particular, recursive program expressions with nonlinear expansion may, at run time, be recursively partitioned into a hierarchy of independent subexpressions which can be reduced by a corresponding hierarchy of virtual reduction machines. This hierarchy unfolds and collapses dynamically, with virtual machines recursively assuming the role of masters that create and eventually terminate, or synchronize with, slaves. The paper also proposes a nonhierarchically organized system of reduction machines, each featuring a stack architecture, that effectively supports the allocation of virtual machines to the real machines of the system in compliance with their hierarchical order of creation and termination. 25 references.
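
    A toy Python model of the scheme may help: a nested expression is split into independent operand subtrees, and a "master" hands those branches to "slave" evaluators, collecting the results as the hierarchy collapses. This is only an illustration of the partitioning idea under simplified assumptions, not the paper's reduction-machine architecture (which recurses the spawning at every level).

      from concurrent.futures import ThreadPoolExecutor

      def reduce_expr(expr):
          if isinstance(expr, int):          # atomic: nothing left to reduce
              return expr
          op, left, right = expr             # e.g. ('+', e1, e2)
          a, b = reduce_expr(left), reduce_expr(right)
          return a + b if op == "+" else a * b

      def master(expr, pool):
          # The two operand subtrees share nothing, so they can be
          # reduced concurrently; the real system applies this step
          # recursively, unfolding a whole hierarchy of machines.
          op, left, right = expr
          fa = pool.submit(reduce_expr, left)
          fb = pool.submit(reduce_expr, right)
          a, b = fa.result(), fb.result()
          return a + b if op == "+" else a * b

      expr = ("+", ("*", 2, ("+", 3, 4)), ("*", 5, 6))
      with ThreadPoolExecutor(max_workers=2) as pool:
          print(master(expr, pool))          # 2*(3+4) + 5*6 = 44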

  15. Automating tasks in protein structure determination with the clipper python module.

    PubMed

    McNicholas, Stuart; Croll, Tristan; Burnley, Tom; Palmer, Colin M; Hoh, Soon Wen; Jenkins, Huw T; Dodson, Eleanor; Cowtan, Kevin; Agirre, Jon

    2018-01-01

    Scripting programming languages provide the fastest means of prototyping complex functionality. Those with a syntax and grammar resembling human language also greatly enhance the maintainability of the produced source code. Furthermore, the combination of a powerful, machine-independent scripting language with binary libraries tailored for each computer architecture allows programs to break free from the tight boundaries of efficiency traditionally associated with scripts. In the present work, we describe how an efficient C++ crystallographic library such as Clipper can be wrapped, adapted and generalized for use in both crystallographic and electron cryo-microscopy applications, scripted with the Python language. We shall also place an emphasis on best practices in automation, illustrating how this can be achieved with this new Python module. © 2017 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.

  16. Future directions in flight simulation: A user perspective

    NASA Technical Reports Server (NTRS)

    Jackson, Bruce

    1993-01-01

    Langley Research Center was an early leader in simulation technology, including a special emphasis on space vehicle simulations such as the rendezvous and docking simulator for the Gemini program and the lunar landing simulator used before Apollo. In more recent times, Langley operated the first synergistic six-degree-of-freedom motion platform (the Visual Motion Simulator, or VMS) and developed the first dual-dome air combat simulator, the Differential Maneuvering Simulator (DMS). Each Langley simulator was developed more or less independently of the others, with different programming support. At the present time, the various simulation cockpits, while supported by the same host computer system, run dissimilar software. The majority of recent investments in Langley's simulation facilities have been hardware procurements: host processors, visual systems, and most recently, an improved motion system. Investments in software improvements, however, have not been of the same order.

  17. Evolutionary programming-based univector field navigation method for fast mobile robots.

    PubMed

    Kim, Y J; Kim, J H; Kwon, D S

    2001-01-01

    Most navigation techniques with obstacle avoidance do not consider the robot's orientation at the target position; they deal with the robot position only and are independent of its orientation and velocity. To solve these problems, this paper proposes a novel univector field method for fast mobile robot navigation which introduces a normalized two-dimensional vector field. The method provides fast-moving robots with the desired posture at the target position as well as obstacle avoidance. To obtain the sub-optimal vector field, a function approximator is used and trained by evolutionary programming. Two kinds of vector fields are trained, one for final posture acquisition and the other for obstacle avoidance. Computer simulations and real experiments are carried out for a fast-moving mobile robot to demonstrate the effectiveness of the proposed scheme.
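
    To make "normalized two-dimensional vector field" concrete, here is a hand-made toy field in Python: every point gets a unit direction that steers toward the goal and, near the goal, bends toward the desired arrival heading. This is only an illustration of the univector idea; the paper's field is produced by an evolution-trained function approximator, not this formula.

      import math

      def univector(p, goal, arrival_heading, k=0.5):
          dx, dy = goal[0] - p[0], goal[1] - p[1]
          dist = math.hypot(dx, dy)
          to_goal = math.atan2(dy, dx)
          # Blend toward the desired arrival heading as the goal nears
          # (naive linear angle blend; adequate for this toy example).
          w = math.exp(-k * dist)                 # 0 far away, 1 at the goal
          ang = (1 - w) * to_goal + w * arrival_heading
          return math.cos(ang), math.sin(ang)     # always unit length

      for p in [(-4.0, 0.0), (-1.0, 0.5), (-0.2, 0.1)]:
          ux, uy = univector(p, goal=(0.0, 0.0), arrival_heading=math.pi / 2)
          print(f"{p}: heading=({ux:+.2f}, {uy:+.2f})")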

  18. Computer Literacy Project. A General Orientation in Basic Computer Concepts and Applications.

    ERIC Educational Resources Information Center

    Murray, David R.

    This paper proposes a two-part, basic computer literacy program for university faculty, staff, and students with no prior exposure to computers. The program described would introduce basic computer concepts and computing center service programs and resources; provide fundamental preparation for other computer courses; and orient faculty towards…

  19. An automated and reproducible workflow for running and analyzing neural simulations using Lancet and IPython Notebook

    PubMed Central

    Stevens, Jean-Luc R.; Elver, Marco; Bednar, James A.

    2013-01-01

    Lancet is a new, simulator-independent Python utility for succinctly specifying, launching, and collating results from large batches of interrelated computationally demanding program runs. This paper demonstrates how to combine Lancet with IPython Notebook to provide a flexible, lightweight, and agile workflow for fully reproducible scientific research. This informal and pragmatic approach uses IPython Notebook to capture the steps in a scientific computation as it is gradually automated and made ready for publication, without mandating the use of any separate application that can constrain scientific exploration and innovation. The resulting notebook concisely records each step involved in even very complex computational processes that led to a particular figure or numerical result, allowing the complete chain of events to be replicated automatically. Lancet was originally designed to help solve problems in computational neuroscience, such as analyzing the sensitivity of a complex simulation to various parameters, or collecting the results from multiple runs with different random starting points. However, because it is never possible to know in advance what tools might be required in future tasks, Lancet has been designed to be completely general, supporting any type of program as long as it can be launched as a process and can return output in the form of files. For instance, Lancet is also heavily used by one of the authors in a separate research group for launching batches of microprocessor simulations. This general design will allow Lancet to continue supporting a given research project even as the underlying approaches and tools change. PMID:24416014
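
    The workflow pattern is worth making concrete: declare a parameter space once, launch one process per combination, and collate the output files afterwards. The sketch below is a plain-stdlib Python stand-in for that pattern; it deliberately does not reproduce Lancet's actual API, and the "simulation" is a trivial child process standing in for any program that can be launched and writes files.

      import itertools, pathlib, subprocess, sys

      params = {"rate": [0.1, 0.5], "seed": [1, 2, 3]}        # 2 x 3 = 6 runs
      outdir = pathlib.Path("runs"); outdir.mkdir(exist_ok=True)

      # Launch step: one process per parameter combination.
      for rate, seed in itertools.product(params["rate"], params["seed"]):
          out = outdir / f"rate{rate}_seed{seed}.txt"
          with out.open("w") as fh:
              subprocess.run([sys.executable, "-c",
                              f"print('result for rate={rate} seed={seed}')"],
                             stdout=fh, check=True)

      # Collation step: gather every run's output back into one table.
      for f in sorted(outdir.glob("*.txt")):
          print(f.name, "->", f.read_text().strip())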

  20. Online learning tools in an M.Ed. in Earth Sciences program

    NASA Astrophysics Data System (ADS)

    Richardson, E.

    2011-12-01

    Penn State's Master of Education in Earth Sciences program is a fully online 30-credit degree program serving mid-career secondary science teachers. Teachers in the program have a diverse background in science and math, are usually many years removed from their most recent degree, and are often deficient in the same geoscience skills as are beginning undergraduates. For example, they habitually assign incorrect causal relationships to concepts that are taught at the same time (such as sea-floor spreading and magnetic field reversals), and they have trouble with both object and spatial visualization. Program faculty also observe anecdotally that many teachers enter the program lacking the ability to describe their mental model of a given Earth science process, making it difficult to identify teachers' knowledge gaps. We have implemented many technical strategies to enhance program content delivery while trying to minimize the inherent barriers to completing quantitative assignments online and at a distance. These barriers include competence with and access to sophisticated data analysis and plotting programs commonly used by scientists. Here, I demonstrate two technical tools I use frequently to strengthen online content delivery and assessment. The first, Jing, is commercially-available, free, and platform-independent. Jing allows the user to make screencasts with narration and embed them into a web page as a flash movie or as an external link. The second is a set of simple sketching tools I have created using the programming language Processing, which is a free, open source, platform-independent language built on Java. The integration of easy-to-use drawing tools into problem sets and other assessments has enabled faculty to appraise a learner's grasp of the material without the steep technical learning curve and expense inherent in most computer graphics packages. A serendipitous benefit of teaching with these tools is that they are easy to learn and freely available and so the teachers in the program learn to use them, too. Qualitative assessment of feedback from the teachers in the program shows that they find the explanations, screencasts, animations, and discussions arising from these tools not only enhance their own learning but also inspire them to try them in their classrooms.

  1. A memory module for experimental data handling

    NASA Astrophysics Data System (ADS)

    De Blois, J.

    1985-02-01

    A compact CAMAC memory module for experimental data handling was developed to eliminate the need for direct memory access in computer-controlled measurements. When autonomous controllers are used, it also makes measurements more independent of the program and enlarges the space available for programs in the memory of the micro-computer. The memory module has three modes of operation: an increment mode, a list mode, and a fifo mode. This is achieved by connecting the main parts, namely the memory (MEM), the fifo buffer (FIFO), the address buffer (BUF), two counters (AUX and ADDR), and a readout register (ROR), via an internal 24-bit databus. The time needed for databus operations is 1 μs, for measuring cycles as well as for CAMAC cycles. The FIFO provides temporary data storage during CAMAC cycles and separates the memory part from the application part. The memory size is variable from 1K to 64K words (24 bits wide), depending on the type of memory chips used. The application part, which forms one third of the module, is specially designed for each application and is added to the memory via an internal connector. The memory unit will be used in Mössbauer experiments and in thermal neutron scattering experiments.

  2. Application of NASTRAN for stress analysis of left ventricle of the heart

    NASA Technical Reports Server (NTRS)

    Pao, Y. C.; Ritman, E. L.; Wang, H. C.

    1975-01-01

    Knowing the stress and strain distributions in the left ventricular wall of the heart is a prerequisite for determining the muscle elasticity and contractility in the process of assessing the functional status of the heart. NASTRAN was applied to the calculation of these stresses and strains and to help verify the results obtained by the computer program FEAMPS, which was specifically designed for the plane-strain finite-element analysis of left ventricular cross sections. Adopted for the analysis are the true shape and dimensions of the cross sections reconstructed from multiplanar X-ray views of a left ventricle which was surgically isolated from a dog's heart but metabolically supported to sustain its beating. A preprocessor was prepared to accommodate both FEAMPS and NASTRAN, and it has also facilitated the application of both the triangular-element and isoparametric quadrilateral-element versions of NASTRAN. The stresses in several crucial regions of the left ventricular wall calculated by these two independently developed computer programs are found to be in good agreement. Such confirmation of the results is essential in the development of a method for assessing heart performance.

  3. The Invar tensor package: Differential invariants of Riemann

    NASA Astrophysics Data System (ADS)

    Martín-García, J. M.; Yllanes, D.; Portugal, R.

    2008-10-01

    The long-standing problem of the relations among the scalar invariants of the Riemann tensor is computationally solved for all 6·10 objects with up to 12 derivatives of the metric. This covers cases ranging from products of up to 6 undifferentiated Riemann tensors to cases with up to 10 covariant derivatives of a single Riemann. We extend our computer algebra system Invar to produce within seconds a canonical form for any of those objects in terms of a basis. The process is as follows: (1) an invariant is converted in real time into a canonical form with respect to the permutation symmetries of the Riemann tensor; (2) Invar reads a database of more than 6·10 relations and applies those coming from the cyclic symmetry of the Riemann tensor; (3) it then applies the relations coming from the Bianchi identity, (4) the relations coming from commutations of covariant derivatives, (5) the dimensionally-dependent identities for dimension 4, and finally (6) it simplifies invariants that can be expressed as products of dual invariants. Invar runs on top of the tensor computer algebra systems xTensor (for Mathematica) and Canon (for Maple). Program summary: Program title: Invar Tensor Package v2.0. Catalogue identifier: ADZK_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZK_v2_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 3 243 249. No. of bytes in distributed program, including test data, etc.: 939. Distribution format: tar.gz. Programming language: Mathematica and Maple. Computer: Any computer running Mathematica versions 5.0 to 6.0 or Maple versions 9 and 11. Operating system: Linux, Unix, Windows XP, MacOS. RAM: 100 Mb. Word size: 64 or 32 bits. Supplementary material: The new database of relations is much larger than that for the previous version and therefore has not been included in the distribution. To obtain the Mathematica and Maple database files click on this link. Classification: 1.5, 5. Does the new version supersede the previous version?: Yes. The previous version (1.0) only handled algebraic invariants. The current version (2.0) has been extended to cover differential invariants as well. Nature of problem: Manipulation and simplification of scalar polynomial expressions formed from the Riemann tensor and its covariant derivatives. Solution method: Algorithms of computational group theory to simplify expressions with tensors that obey permutation symmetries. Tables of syzygies of the scalar invariants of the Riemann tensor. Reasons for new version: With this new version, the user can manipulate differential invariants of the Riemann tensor. Differential invariants are required in many physical problems in classical and quantum gravity. Summary of revisions: The database of syzygies has been expanded by a factor of 30. New commands were added in order to deal with the enlarged database and to manipulate the covariant derivative. Restrictions: The present version only handles scalars, and not expressions with free indices. Additional comments: The distribution file for this program is over 53 Mbytes and therefore is not delivered directly when download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. Running time: One second to fully reduce any monomial of the Riemann tensor up to degree 7 or order 10 in terms of independent invariants.
The Mathematica notebook included in the distribution takes approximately 5 minutes to run.
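    Invar itself is a Mathematica/Maple package, so the following is only a rough illustration of step (1) above: canonicalizing the index tuple of a single undifferentiated Riemann tensor under its monoterm permutation symmetries. This Python sketch is a hypothetical stand-in, not Invar's algorithm, which uses computational group theory and handles arbitrary monomials with derivatives.

```python
from itertools import product

def canonicalize_riemann(idx):
    """Return (canonical index tuple, sign) for R_{abcd} using only the
    monoterm symmetries R_{abcd} = -R_{bacd} = -R_{abdc} = R_{cdab}.
    The multiterm cyclic (Bianchi) identity relates *different* monomials
    and is handled by Invar's relation database, not by this routine."""
    a, b, c, d = idx
    candidates = []
    for swap_ab, swap_cd, exchange in product((False, True), repeat=3):
        p, q, r, s, sign = a, b, c, d, 1
        if swap_ab:
            p, q, sign = q, p, -sign
        if swap_cd:
            r, s, sign = s, r, -sign
        if exchange:
            p, q, r, s = r, s, p, q
        candidates.append(((p, q, r, s), sign))
    return min(candidates)   # lexicographically smallest representative

print(canonicalize_riemann((2, 1, 4, 3)))   # ((1, 2, 3, 4), 1): the two sign flips cancel
```

    The multiterm symmetries (cyclic identity, commutation of derivatives, dimensional identities) relate distinct canonical monomials, which is why steps (2)-(5) consult the precomputed database of syzygies.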

  4. Implementation of a Learning Program To Train Adolescent Mothers To Live Independently.

    ERIC Educational Resources Information Center

    Brown, Kathie

    Because of a lack of training, most adolescent mothers are not prepared to live independently. Accordingly, a learning program was designed to improve training for adolescent mothers to better prepare them for living independently. The learning program, implemented in 10 weeks, consisted of eight training sessions geared to the areas of basic life…

  5. Postsecondary Education Employment and Independent Living Outcomes of Persons with Autism and Intellectual Disability

    ERIC Educational Resources Information Center

    Ross, Jeffrey; Marcell, Jamia; Williams, Paula; Carlson, Dawn

    2013-01-01

    The aim of this study is to report employment and independent living outcomes of 125 graduates from the Taft College Transition to Independent Living (TIL) program. The TIL program has served students with intellectual and developmental disabilities, including autism spectrum disorder, since 1995. The TIL program follows graduates from the time of…

  6. Radiation force on absorbing targets and power measurements of a high intensity focused ultrasound (HIFU) source

    NASA Astrophysics Data System (ADS)

    Qian, Zuwen; Zhu, Zhemin; Ye, Shigong; Jiang, Wenhua; Zhu, Houqing; Yu, Jinshen

    2010-10-01

    Based on the analytic expressions for the radiated field of a circular concave piston given by Hasegawa et al., an integral for calculating the radiation force on a plane absorbing target in a spherically focused field is derived. A general relation between acoustic power P and normal radiation force F_n is obtained under the condition kr ≫ 1. Numerical computation is carried out using the symbolic computation program for practical focused sources and absorbing circular targets. The results show that, for a given source, there is a range of target positions where the radiation force is independent of the target’s position, under the assumption that the contribution of the acoustic field behind the target to the radiation force can be neglected. Experiments were carried out and confirm that there is a range of target positions where the measured radiation force is essentially independent of the target’s position, even at high acoustic power (up to 700 W). It is believed that when the radiation force method is used to measure the acoustic power radiated from a focused source, the size of the target must be selected in such a way that no observable sound can be found in the region behind the target.
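    For orientation: a plane wave fully absorbed by a target exerts F = P/c, and for a focused beam the axial force is reduced by the beam's convergence. The sketch below converts a hypothetical force reading to power using the commonly assumed cos^2(gamma/2) aperture correction for a spherical-cap source; this is a textbook approximation, not the paper's exact integral relation.

```python
import math

# Toy conversion of a radiation-force balance reading to acoustic power for
# a totally absorbing target. Plane-wave limit: P = c * F. For a focused
# beam an aperture correction of the form P = c * F / cos^2(gamma/2) is
# commonly assumed (gamma = half-aperture angle of the spherical cap); the
# paper derives its relation for kr >> 1 from the full field integral.
c = 1482.0                    # speed of sound in water, m/s (approx., 20 degC)
F = 0.048                     # hypothetical measured radiation force, N
gamma = math.radians(30.0)    # assumed half-aperture focusing angle

P_plane = c * F
P_focused = c * F / math.cos(gamma / 2.0) ** 2
print(f"plane-wave estimate:   {P_plane:.1f} W")
print(f"focused-beam estimate: {P_focused:.1f} W")
```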

  7. HBonanza: A Computer Algorithm for Molecular-Dynamics-Trajectory Hydrogen-Bond Analysis

    PubMed Central

    Durrant, Jacob D.; McCammon, J. Andrew

    2011-01-01

    In the current work, we present a hydrogen-bond analysis of 2,673 ligand-receptor complexes which suggests that the total number of hydrogen bonds formed between a ligand and its protein receptor is a poor predictor of ligand potency; furthermore, the underlying correlation between hydrogen-bond formation and potency is not even statistically significant. While we are not the first to suggest that hydrogen bonds on average do not generally contribute to ligand binding affinities, this additional evidence is nevertheless interesting. The primary role of hydrogen bonds may instead be to ensure specificity, to correctly position the ligand within the active site, and to hold the protein active site in a ligand-friendly conformation. We also present a new computer program called HBonanza (hydrogen-bond analyzer) that aids the analysis and visualization of hydrogen-bond networks. HBonanza, which can be used to analyze single structures or the many structures of a molecular dynamics trajectory, is open source and implemented in Python, making it easily editable, customizable, and platform-independent. Unlike many other freely available hydrogen-bond analysis tools, HBonanza provides not only a text-based table describing the hydrogen-bond network, but also a Tcl script to facilitate visualization in VMD, a popular molecular visualization program. Visualization in other programs is also possible. A copy of HBonanza can be obtained free of charge from http://www.nbcr.net/hbonanza. PMID:21880522
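    HBonanza's own cutoffs are user-configurable; as a purely illustrative sketch, the function below applies one common geometric hydrogen-bond criterion (donor-acceptor distance plus deviation of the D-H...A angle from linearity, with typical but assumed cutoff values) of the kind such tools evaluate per trajectory frame.

```python
import numpy as np

def is_hbond(donor, hydrogen, acceptor, dist_cutoff=3.5, angle_cutoff=30.0):
    """Common geometric hydrogen-bond test (not HBonanza's exact defaults):
    donor-acceptor distance below dist_cutoff (Angstroms) and deviation of
    the donor-hydrogen-acceptor angle from linearity below angle_cutoff
    (degrees). Inputs are 3-vectors of atomic coordinates."""
    donor, hydrogen, acceptor = map(np.asarray, (donor, hydrogen, acceptor))
    if np.linalg.norm(acceptor - donor) > dist_cutoff:
        return False
    v1 = donor - hydrogen
    v2 = acceptor - hydrogen
    cos_dha = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cos_dha, -1.0, 1.0)))
    # 180 degrees = perfectly linear D-H...A geometry
    return (180.0 - angle) <= angle_cutoff

# Example: an idealised linear O-H...O arrangement
print(is_hbond([0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [2.8, 0.0, 0.0]))  # True
```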

  8. Information Security: Federal Guidance Needed to Address Control Issues With Implementing Cloud Computing

    DTIC Science & Technology

    2010-05-01

    The report reviews three federal cloud computing programs: the Department of Defense's Rapid Access Computing Environment (RACE) program, the National Aeronautics and Space Administration's (NASA) Nebula program, and the Department of Transportation's CARS program, including lessons learned from each. (Recoverable figure titles: Figure 2, Cloud Computing Deployment Models; Figure 3, NIST Essential Characteristics; Figure 4, NASA Nebula Container.)

  9. Accessing and distributing EMBL data using CORBA (common object request broker architecture).

    PubMed

    Wang, L; Rodriguez-Tomé, P; Redaschi, N; McNeil, P; Robinson, A; Lijnzaad, P

    2000-01-01

    The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences and related information traditionally made available in flat-file format. Queries through tools such as SRS (Sequence Retrieval System) also return data in flat-file format. Flat files have a number of shortcomings, however, and the resources therefore currently lack a flexible environment to meet individual researchers' needs. The Object Management Group's common object request broker architecture (CORBA) is an industry standard that provides platform-independent programming interfaces and models for portable distributed object-oriented computing applications. Its independence from programming languages, computing platforms and network protocols makes it attractive for developing new applications for querying and distributing biological data. A CORBA infrastructure developed by EMBL-EBI provides an efficient means of accessing and distributing EMBL data. The EMBL object model is defined such that it provides a basis for specifying interfaces in interface definition language (IDL) and thus for developing the CORBA servers. The mapping from the object model to the relational schema in the underlying Oracle database uses the facilities provided by Persistence™, an object/relational tool. The techniques of developing loaders and 'live object caching' with persistent objects achieve a smart live object cache where objects are created on demand. The objects are managed by an evictor pattern mechanism. The CORBA interfaces to the EMBL database address some of the problems of traditional flat-file formats and provide an efficient means for accessing and distributing EMBL data. CORBA also provides a flexible environment for users to develop their applications by building clients to our CORBA servers, which can be integrated into existing systems.

  10. Accessing and distributing EMBL data using CORBA (common object request broker architecture)

    PubMed Central

    Wang, Lichun; Rodriguez-Tomé, Patricia; Redaschi, Nicole; McNeil, Phil; Robinson, Alan; Lijnzaad, Philip

    2000-01-01

    Background: The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences and related information traditionally made available in flat-file format. Queries through tools such as SRS (Sequence Retrieval System) also return data in flat-file format. Flat files have a number of shortcomings, however, and the resources therefore currently lack a flexible environment to meet individual researchers' needs. The Object Management Group's common object request broker architecture (CORBA) is an industry standard that provides platform-independent programming interfaces and models for portable distributed object-oriented computing applications. Its independence from programming languages, computing platforms and network protocols makes it attractive for developing new applications for querying and distributing biological data. Results: A CORBA infrastructure developed by EMBL-EBI provides an efficient means of accessing and distributing EMBL data. The EMBL object model is defined such that it provides a basis for specifying interfaces in interface definition language (IDL) and thus for developing the CORBA servers. The mapping from the object model to the relational schema in the underlying Oracle database uses the facilities provided by Persistence™, an object/relational tool. The techniques of developing loaders and 'live object caching' with persistent objects achieve a smart live object cache where objects are created on demand. The objects are managed by an evictor pattern mechanism. Conclusions: The CORBA interfaces to the EMBL database address some of the problems of traditional flat-file formats and provide an efficient means for accessing and distributing EMBL data. CORBA also provides a flexible environment for users to develop their applications by building clients to our CORBA servers, which can be integrated into existing systems. PMID:11178259
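    The "live object cache where objects are created on demand" managed by an evictor is essentially an LRU cache over a backing store. A minimal Python sketch of the pattern follows (illustrative only; the EBI servers were CORBA services in which the loader queried Oracle through Persistence):

```python
from collections import OrderedDict

class EvictorCache:
    """Minimal sketch of a 'live object cache' with an evictor: objects are
    created on demand from a backing store, and the least-recently-used
    entries are evicted once the cache exceeds its capacity."""
    def __init__(self, loader, capacity=1000):
        self._loader = loader          # callable: key -> object (e.g. a DB query)
        self._capacity = capacity
        self._live = OrderedDict()     # key -> live object, in LRU order

    def get(self, key):
        if key in self._live:
            self._live.move_to_end(key)          # mark as recently used
        else:
            self._live[key] = self._loader(key)  # create on demand
            if len(self._live) > self._capacity:
                self._live.popitem(last=False)   # evict least recently used
        return self._live[key]

# Usage with a stand-in loader (a real server would query the database):
cache = EvictorCache(loader=lambda acc: {"accession": acc}, capacity=2)
cache.get("X56734"); cache.get("U49845"); cache.get("X56734")
cache.get("M10051")  # evicts U49845, the least recently used entry
```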

  11. A model for managing sources of groundwater pollution

    USGS Publications Warehouse

    Gorelick, Steven M.

    1982-01-01

    The waste disposal capacity of a groundwater system can be maximized while maintaining water quality at specified locations by using a groundwater pollutant source management model that is based upon linear programming and numerical simulation. The decision variables of the management model are solute waste disposal rates at various facilities distributed over space. A concentration response matrix is used in the management model to describe transient solute transport and is developed using the U.S. Geological Survey solute transport simulation model. The management model was applied to a complex hypothetical groundwater system. Large-scale management models were formulated as dual linear programming problems to reduce numerical difficulties and computation time. The linear programming problems were solved using a numerically stable, available code. Optimal solutions to problems with successively longer management time horizons indicated that disposal schedules at some sites are relatively independent of the number of disposal periods. Optimal waste disposal schedules exhibited pulsing rather than constant disposal rates. Sensitivity analysis using parametric linear programming showed that a sharp reduction in total waste disposal potential occurs if disposal rates at any site are increased beyond their optimal values.
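    The structure of the management model is a standard linear program: maximize total disposal subject to concentration limits expressed through the response matrix. A toy instance with an invented 2-site, 2-well response matrix follows (the paper builds the real matrix from USGS solute-transport simulations):

```python
import numpy as np
from scipy.optimize import linprog

# Toy version of the management model: maximize total waste disposal
# subject to water-quality limits at observation wells. The response
# matrix R (invented here) gives the concentration produced at each
# observation point per unit disposal rate at each site.
R = np.array([[0.8, 0.3],        # responses at observation well 1
              [0.2, 0.9]])       # responses at observation well 2
c_max = np.array([50.0, 50.0])   # concentration limits (mg/L, assumed)
q_max = np.array([40.0, 40.0])   # per-site disposal-rate capacities

# linprog minimizes, so negate the objective to maximize q1 + q2
res = linprog(c=[-1.0, -1.0], A_ub=R, b_ub=c_max,
              bounds=list(zip([0.0, 0.0], q_max)))
print("optimal disposal rates:", res.x, " total:", -res.fun)
```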

  12. SCARE: A post-processor program to MSC/NASTRAN for the reliability analysis of structural ceramic components

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, J. P.

    1985-01-01

    A computer program was developed for calculating the statistical fast fracture reliability and failure probability of ceramic components. The program includes the two-parameter Weibull material fracture strength distribution model, using the principle of independent action for polyaxial stress states and Batdorf's shear-sensitive as well as shear-insensitive crack theories, all for volume distributed flaws in macroscopically isotropic solids. Both penny-shaped cracks and Griffith cracks are included in the Batdorf shear-sensitive crack response calculations, using Griffith's maximum tensile stress or critical coplanar strain energy release rate criteria to predict mixed mode fracture. Weibull material parameters can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and fracture data. The reliability prediction analysis uses MSC/NASTRAN stress, temperature and volume output, obtained from the use of three-dimensional, quadratic, isoparametric, or axisymmetric finite elements. The statistical fast fracture theories employed, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.
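    The volume-flaw reliability calculation with the principle of independent action (PIA) reduces, element by element, to a weighted sum of tensile principal stresses raised to the Weibull modulus. A schematic Python version with invented element data follows (SCARE reads the real stresses, temperatures and volumes from MSC/NASTRAN output, and units are hand-waved here):

```python
import numpy as np

# Sketch of the two-parameter Weibull / principle-of-independent-action
# (PIA) volume-flaw reliability calculation. Element data are invented.
m = 10.0            # Weibull modulus (assumed)
sigma_0 = 300.0     # Weibull scale parameter (assumed, volume units folded in)

# Per-element principal stresses (MPa) and volumes (m^3), hypothetical:
sigma_p = np.array([[120.0,  40.0, -30.0],
                    [200.0,  90.0,  10.0]])
vol = np.array([2.0e-6, 1.5e-6])

# PIA: only tensile principal stresses contribute, each acting independently
tensile = np.clip(sigma_p, 0.0, None)
risk = np.sum(vol[:, None] * (tensile / sigma_0) ** m)   # risk of rupture
P_f = 1.0 - np.exp(-risk)                                # failure probability
print(f"failure probability: {P_f:.3e}")
```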

  13. Algorithms and programming tools for image processing on the MPP, part 2

    NASA Technical Reports Server (NTRS)

    Reeves, Anthony P.

    1986-01-01

    A number of algorithms were developed for image warping and pyramid image filtering. Techniques were investigated for the parallel processing of a large number of independent, irregularly shaped regions on the MPP. In addition, some utilities for dealing with very long vectors and for sorting were developed. Documentation pages are given for the algorithms that are available for distribution. The performance of the MPP for a number of basic data manipulations was determined. From these results it is possible to predict the efficiency of the MPP for a number of algorithms and applications. The Parallel Pascal development system, which is a portable programming environment for the MPP, was improved, and better documentation, including a tutorial, was written. This environment allows programs for the MPP to be developed on any conventional computer system; it consists of a set of system programs and a library of general-purpose Parallel Pascal functions. The algorithms were tested on the MPP, and a presentation on the development system was made to the MPP users group. The UNIX version of the Parallel Pascal system was distributed to a number of new sites.

  14. The interactive digital video interface

    NASA Technical Reports Server (NTRS)

    Doyle, Michael D.

    1989-01-01

    A frequent complaint in the computer-oriented trade journals is that current hardware technology is progressing so quickly that software developers cannot keep up. An example of this phenomenon can be seen in the field of microcomputer graphics. To exploit the advantages of new mechanisms of information storage and retrieval, new approaches must be taken towards incorporating existing programs as well as developing entirely new applications. A particular area of need is the correlation of discrete image elements to textual information. The interactive digital video (IDV) interface embodies a new concept in software design which addresses these needs. The IDV interface is a patented, device- and language-independent process for identifying image features on a digital video display, which allows a number of different processes to be keyed to that identification. Its capabilities include the correlation of discrete image elements to relevant text information and the correlation of these image features to other images as well as to program control mechanisms. Sophisticated interrelationships can be set up between images, text, and program control mechanisms.

  15. Interactive graphical system for small-angle scattering analysis of polydisperse systems

    NASA Astrophysics Data System (ADS)

    Konarev, P. V.; Volkov, V. V.; Svergun, D. I.

    2016-09-01

    A program suite for one-dimensional small-angle scattering analysis of polydisperse systems and multiple data sets is presented. The main program, POLYSAS, has a menu-driven graphical user interface calling computational modules from the ATSAS package to perform data treatment and analysis. The graphical menu interface allows one to process multiple (time-, concentration- or temperature-dependent) data sets and interactively change the parameters for the data modelling using sliders. The graphical representation of the data is done via the Winteracter-based program SASPLOT. The package is designed for the analysis of polydisperse systems and mixtures, and permits one to obtain size distributions and evaluate the volume fractions of the components using linear and non-linear fitting algorithms as well as model-independent singular value decomposition. The use of the POLYSAS package is illustrated by recent examples of its application to the study of concentration-dependent oligomeric states of proteins and the time kinetics of polymer micelles for anticancer drug delivery.
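    Model-independent singular value decomposition estimates how many independent components a series of scattering curves contains before any model fitting. A synthetic two-component illustration follows (the noise-floor threshold used here is an ad hoc heuristic, not POLYSAS's criterion):

```python
import numpy as np

# Estimate the number of independent components in a series of scattering
# curves via SVD. Columns of D are intensities I(q) at successive
# time/concentration points; synthetic two-component data are used here.
rng = np.random.default_rng(0)
q = np.linspace(0.01, 0.5, 200)
comp1 = np.exp(-(q * 30) ** 2 / 3)          # stand-in curve for component 1
comp2 = 1.0 / (1.0 + (q * 80) ** 2)         # stand-in curve for component 2
fracs = np.linspace(0, 1, 15)               # evolving volume fractions
D = np.outer(comp1, 1 - fracs) + np.outer(comp2, fracs)
D += rng.normal(scale=1e-3, size=D.shape)   # measurement noise

s = np.linalg.svd(D, compute_uv=False)
significant = np.sum(s > 10 * s[-1])        # crude noise-floor threshold
print("singular values:", np.round(s[:5], 4), "-> ~%d components" % significant)
```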

  16. A High Performance VLSI Computer Architecture For Computer Graphics

    NASA Astrophysics Data System (ADS)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy the demands of modern computer graphics, e.g., high resolution, realistic animation, and real-time display. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e., object domain and space domain, to fully utilize the data-independence characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With the current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  17. Automating quantum experiment control

    NASA Astrophysics Data System (ADS)

    Stevens, Kelly E.; Amini, Jason M.; Doret, S. Charles; Mohler, Greg; Volin, Curtis; Harter, Alexa W.

    2017-03-01

    The field of quantum information processing is rapidly advancing. As the control of quantum systems approaches the level needed for useful computation, the physical hardware underlying the quantum systems is becoming increasingly complex. It is already becoming impractical to manually code control for the larger hardware implementations. In this chapter, we will employ an approach to the problem of system control that parallels compiler design for a classical computer. We will start with a candidate quantum computing technology, the surface electrode ion trap, and build a system instruction language which can be generated from a simple machine-independent programming language via compilation. We incorporate compile time generation of ion routing that separates the algorithm description from the physical geometry of the hardware. Extending this approach to automatic routing at run time allows for automated initialization of qubit number and placement and additionally allows for automated recovery after catastrophic events such as qubit loss. To show that these systems can handle real hardware, we present a simple demonstration system that routes two ions around a multi-zone ion trap and handles ion loss and ion placement. While we will mainly use examples from transport-based ion trap quantum computing, many of the issues and solutions are applicable to other architectures.
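    Separating the algorithm description from the physical geometry means ion routes can be computed from a connectivity graph of trap zones. A toy Python sketch with an invented zone layout follows (a real system must additionally schedule the electrode voltage waveforms for each transport step):

```python
from collections import deque

# Toy illustration of compile-time ion routing: model the trap as a graph
# of zones and find a shortest transport path with breadth-first search.
# The zone names and layout below are hypothetical.
trap = {
    "load": ["junction"],
    "junction": ["load", "zone_a", "zone_b"],
    "zone_a": ["junction", "readout"],
    "zone_b": ["junction"],
    "readout": ["zone_a"],
}

def route(start, goal):
    """Breadth-first search for a shortest zone-to-zone transport path."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in trap[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None   # goal unreachable from start

print(route("load", "readout"))  # ['load', 'junction', 'zone_a', 'readout']
```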

  18. OMPC: an Open-Source MATLAB®-to-Python Compiler

    PubMed Central

    Jurica, Peter; van Leeuwen, Cees

    2008-01-01

    Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open source platform that, in many respects, surpasses commonly used, expensive commercial closed source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com. PMID:19225577
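    To see the kind of semantic gap such a compiler must bridge (1-based inclusive indexing, the end keyword, MATLAB®'s copy semantics), compare a MATLAB® fragment with a hand-written numpy equivalent. This is illustrative only and is not OMPC's generated code; OMPC instead emulates MATLAB® semantics on top of Python's numerical libraries.

```python
import numpy as np

# MATLAB fragment being mirrored by hand:
#   A = magic(4);
#   v = A(2:end, 1);     % rows 2..4 of column 1 (1-based, inclusive)
#   s = sum(v .^ 2);

A = np.array([[16,  2,  3, 13],
              [ 5, 11, 10,  8],
              [ 9,  7,  6, 12],
              [ 4, 14, 15,  1]])   # magic(4)
v = A[1:, 0]                       # 0-based, half-open slice
s = np.sum(v ** 2)
print(s)                           # 25 + 81 + 16 = 122
```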

  19. Accelerating Monte Carlo simulations with an NVIDIA® graphics processor

    NASA Astrophysics Data System (ADS)

    Martinsen, Paul; Blaschke, Johannes; Künnemeyer, Rainer; Jordan, Robert

    2009-10-01

    Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA® 8800 GT graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectory of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer.

    Program summary. Program title: Phoogle-C/Phoogle-G. Catalogue identifier: AEEB_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEB_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 51 264. No. of bytes in distributed program, including test data, etc.: 2 238 805. Distribution format: tar.gz. Programming language: C++. Computer: Designed for Intel PCs; Phoogle-G requires an NVIDIA graphics card with support for CUDA 1.1. Operating system: Windows XP. Has the code been vectorised or parallelized?: Phoogle-G is written for SIMD architectures. RAM: 1 GB. Classification: 21.1. External routines: Charles Karney's random number library; Microsoft Foundation Class library; NVIDIA CUDA library [1]. Nature of problem: The Monte Carlo technique is an effective algorithm for exploring the propagation of light in turbid media. However, accurate results require tracing the path of many photons within the media. The independence of photons naturally lends the Monte Carlo technique to implementation on parallel architectures. Generally, parallel computing can be expensive, but recent advances in consumer grade graphics cards have opened the possibility of high-performance desktop parallel computing. Solution method: In this pair of programmes we have implemented the Monte Carlo algorithm described by Prahl et al. [2] for photon transport in infinite scattering media to compare the performance of two readily accessible architectures: a standard desktop PC and a consumer grade graphics card from NVIDIA. Restrictions: The graphics card implementation uses single precision floating point numbers for all calculations. Only photon transport from an isotropic point source is supported. The graphics-card version has no user interface; the simulation parameters must be set in the source code. The desktop version has a simple user interface; however, some properties can only be accessed through an ActiveX client (such as Matlab). Additional comments: The random number library used has an LGPL (http://www.gnu.org/copyleft/lesser.html) licence. Running time: Runtime can range from minutes to months depending on the number of photons simulated and the optical properties of the medium. References: [1] http://www.nvidia.com/object/cuda_home.html. [2] S. Prahl, M. Keijzer, S.L. Jacques, A. Welch, SPIE Institute Series 5 (1989) 102.
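    The underlying photon random walk (after Prahl et al.) is simple enough to sketch on the CPU: sample a free path from the attenuation coefficient, move, deposit a fraction of the photon weight, and scatter. The Python sketch below uses invented optical properties and purely isotropic scattering; the distributed codes are far more complete and run on the GPU.

```python
import numpy as np

# Minimal CPU sketch of a Monte Carlo photon random walk in an infinite
# homogeneous turbid medium, with implicit absorption via photon weight.
# Optical properties are invented; this is not the distributed Phoogle code.
rng = np.random.default_rng(1)
mu_a, mu_s = 0.1, 10.0        # absorption/scattering coefficients, 1/mm
mu_t = mu_a + mu_s

def propagate(n_photons=500, max_steps=1500):
    total_absorbed = 0.0
    for _ in range(n_photons):
        pos, weight = np.zeros(3), 1.0
        for _ in range(max_steps):
            step = -np.log(rng.random()) / mu_t        # sample free path length
            cos_t = 2.0 * rng.random() - 1.0           # isotropic scattering
            phi = 2.0 * np.pi * rng.random()
            sin_t = np.sqrt(1.0 - cos_t ** 2)
            pos += step * np.array([sin_t * np.cos(phi),
                                    sin_t * np.sin(phi), cos_t])
            absorbed = weight * mu_a / mu_t            # deposit part of the weight
            total_absorbed += absorbed
            weight -= absorbed
            if weight < 1e-4:                          # crude termination
                break
    return total_absorbed / n_photons

# In an infinite absorbing medium essentially all launched power is absorbed:
print(f"fraction of launched power absorbed: {propagate():.4f}")
```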

  20. Evaluation of Mobile Authoring and Tutoring in Medical Issues

    ERIC Educational Resources Information Center

    Alepis, Efthymios; Virvou, Maria

    2010-01-01

    Mobile computing facilities may provide many assets to the educational process. Mobile technology provides software access from anywhere and at any time, as well as computer equipment independence. The need for time and place independence is even greater for medical instructors and medical students. Medical instructors are usually doctors that…
