NASA Technical Reports Server (NTRS)
Lamar, J. E.; Herbert, H. E.
1982-01-01
The latest production version, MARK IV, of the NASA-Langley vortex lattice computer program is summarized. All viable subcritical aerodynamic features of previous versions were retained. This version extends the previously documented program capabilities to four planforms and 400 panels, and enables the user to obtain vortex-flow aerodynamics on cambered planforms, flowfield properties off the configuration in attached flow, and planform longitudinal load distributions.
NASA Technical Reports Server (NTRS)
Hockney, George; Lee, Seungwon
2008-01-01
A computer program known as PyPele, originally written as a Python-language extension module of a C++ language program, has been rewritten in pure Python. The original version of PyPele dispatches and coordinates parallel-processing tasks on cluster computers and provides a conceptual framework for spacecraft-mission-design and -analysis software tools to run in an embarrassingly parallel mode. The original version of PyPele uses SSH (Secure Shell: a set of standards and an associated network protocol for establishing a secure channel between a local and a remote computer) to coordinate parallel processing. Instead of SSH, the present Python version of PyPele uses the Message Passing Interface (MPI) [an unofficial de facto standard, language-independent application programming interface for message passing on a parallel computer] while keeping the same user interface. The use of MPI instead of SSH and the preservation of the original PyPele user interface make it possible for parallel application programs written previously for the original version of PyPele to run on MPI-based cluster computers. As a result, engineers using the previously written application programs can take advantage of embarrassing parallelism without the need to rewrite those programs.
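The dispatch pattern the abstract describes (independent tasks farmed out over MPI, results collected centrally) can be sketched in a few lines of Python with mpi4py. This is an illustrative sketch only, not PyPele's actual interface; the task list and the squaring payload are placeholders for real mission-analysis runs.

    from mpi4py import MPI  # assumes an MPI runtime, launched e.g. via mpiexec

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Placeholder task list; each task is independent (embarrassingly parallel).
    tasks = list(range(100))

    my_tasks = tasks[rank::size]            # static round-robin assignment
    my_results = [t * t for t in my_tasks]  # placeholder payload

    all_results = comm.gather(my_results, root=0)  # collect on rank 0
    if rank == 0:
        flat = [r for chunk in all_results for r in chunk]
        print(f"collected {len(flat)} results from {size} ranks")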
John R. Mills
1989-01-01
The timber resource inventory model (TRIM) has been adapted to run on personal computers. The personal computer version of TRIM (PC-TRIM) is more widely used than its mainframe parent. Errors that existed in previous versions of TRIM have been corrected. Information is presented to help users with program input and output management in the DOS environment, to...
TSARINA: A Computer Model for Assessing Conventional and Chemical Attacks on Airbases
1990-09-01
IV, and has been updated to FORTRAN 77; it has been adapted to various computer systems, as was the widely used AIDA model and the previous versions of ... conventional and chemical attacks on sortie generation. In the first version of TSARINA [1, 2], several key additions were made to the AIDA model so that (1) ... various on-base resources, in addition to the estimates of hits and facility damage that are generated by the original AIDA model. The second version ...
Development and application of GASP 2.0
NASA Technical Reports Server (NTRS)
Mcgrory, W. D.; Huebner, L. D.; Slack, D. C.; Walters, R. W.
1992-01-01
GASP 2.0 represents a major new release of the computational fluid dynamics code in wide use by the aerospace community. The authors have spent the last two years analyzing the strengths and weaknesses of the previous version of the finite-rate chemistry, Navier-Stokes solution algorithm. What has resulted is a completely redesigned computer code that offers two to four times the performance of previous versions while requiring as little as one quarter of the memory. In addition to the improvements in efficiency over the original code, Version 2.0 contains many new features. A brief discussion of the improvements made to GASP is presented, along with an application of GASP 2.0 that demonstrates some of the new features.
NASA Technical Reports Server (NTRS)
Jacob, Joseph; Katz, Daniel; Prince, Thomas; Berriman, Graham; Good, John; Laity, Anastasia
2006-01-01
The final version (3.0) of the Montage software has been released. To recapitulate from previous NASA Tech Briefs articles about Montage: This software generates custom, science-grade mosaics of astronomical images on demand from input files that comply with the Flexible Image Transport System (FITS) standard and contain image data registered on projections that comply with the World Coordinate System (WCS) standards. This software can be executed on single-processor computers, multi-processor computers, and such networks of geographically dispersed computers as the National Science Foundation's TeraGrid or NASA's Information Power Grid. The primary advantage of running Montage in a grid environment is that computations can be done on a remote supercomputer for efficiency. Multiple computers at different sites can be used for different parts of a computation, a significant advantage in cases of computations for large mosaics that demand more processor time than is available at any one site. Version 3.0 incorporates several improvements over prior versions. The most significant improvement is that this version is accessible to scientists located anywhere, through operational Web services that provide access to data from several large astronomical surveys and construct mosaics on either local workstations or remote computational grids as needed.
NASA Astrophysics Data System (ADS)
Bytev, Vladimir V.; Kniehl, Bernd A.
2016-09-01
We present a further extension of the HYPERDIRE project, which is devoted to the creation of a set of Mathematica-based program packages for manipulations with Horn-type hypergeometric functions on the basis of differential equations. Specifically, we present the implementation of the differential reduction for the Lauricella function FC of three variables. Catalogue identifier: AEPP_v4_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEPP_v4_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 243461 No. of bytes in distributed program, including test data, etc.: 61610782 Distribution format: tar.gz Programming language: Mathematica. Computer: All computers running Mathematica. Operating system: Operating systems running Mathematica. Classification: 4.4. Does the new version supersede the previous version?: No, it significantly extends the previous version. Nature of problem: Reduction of the hypergeometric function FC of three variables to a set of basis functions. Solution method: Differential reduction. Reasons for new version: The extension package allows the user to handle the Lauricella function FC of three variables. Summary of revisions: The previous version remains unchanged. Running time: Depends on the complexity of the problem.
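For reference, the Lauricella function of three variables handled by the package is the standard triple hypergeometric series, with (q)_k the Pochhammer symbol; this definition is quoted for the reader's convenience, not taken from the paper itself:

    F_C(a,b;c_1,c_2,c_3;x,y,z) = \sum_{m,n,p \ge 0} \frac{(a)_{m+n+p}\,(b)_{m+n+p}}{(c_1)_m (c_2)_n (c_3)_p} \frac{x^m y^n z^p}{m!\, n!\, p!}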
Low Cost Ways to Keep Software Current.
ERIC Educational Resources Information Center
Schultheis, Robert A.
1992-01-01
Discusses strategies for providing students with current computer software technology, including acquiring previous versions of software, obtaining demonstration software, using student versions, getting examination software, buying from mail-order firms, buying few copies, exploring site licenses, acquiring shareware or freeware, and applying for…
CFL3D Version 6.4-General Usage and Aeroelastic Analysis
NASA Technical Reports Server (NTRS)
Bartels, Robert E.; Rumsey, Christopher L.; Biedron, Robert T.
2006-01-01
This document contains the course notes on the computational fluid dynamics code CFL3D version 6.4. It is intended to provide users, from basic to advanced, the information necessary to successfully use the code for a broad range of cases. Much of the course covers capability that has been a part of previous versions of the code, with material compiled from the CFL3D v5.0 manual and from the CFL3D v6 web site prior to the current release. This part of the material is presented for users of the code who are not familiar with computational fluid dynamics. There is new capability in CFL3D version 6.4 presented here that has not previously been published. There are also outdated features no longer used or recommended in recent releases of the code. The information offered here supersedes earlier manuals and updates outdated usage; where current usage supersedes older versions, that is noted. These course notes also provide hints for usage, code installation, and examples not found elsewhere.
A massively parallel computational approach to coupled thermoelastic/porous gas flow problems
NASA Technical Reports Server (NTRS)
Shia, David; Mcmanus, Hugh L.
1995-01-01
A new computational scheme for coupled thermoelastic/porous gas flow problems is presented. Heat transfer, gas flow, and dynamic thermoelastic governing equations are expressed in fully explicit form, and solved on a massively parallel computer. The transpiration cooling problem is used as an example problem. The numerical solutions have been verified by comparison to available analytical solutions. Transient temperature, pressure, and stress distributions have been obtained. Small spatial oscillations in pressure and stress have been observed, which would be impractical to predict with previously available schemes. Comparisons between serial and massively parallel versions of the scheme have also been made. The results indicate that for small scale problems the serial and parallel versions use practically the same amount of CPU time. However, as the problem size increases the parallel version becomes more efficient than the serial version.
A new version of Visual tool for estimating the fractal dimension of images
NASA Astrophysics Data System (ADS)
Grossu, I. V.; Felea, D.; Besliu, C.; Jipa, Al.; Bordeianu, C. C.; Stan, E.; Esanu, T.
2010-04-01
This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images (Grossu et al., 2009 [1]). The earlier version was limited to bi-dimensional sets of points, stored in bitmap files. The application was extended for working also with comma separated values files and three-dimensional images. New version program summary. Program title: Fractal Analysis v02 Catalogue identifier: AEEG_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 9999 No. of bytes in distributed program, including test data, etc.: 4 366 783 Distribution format: tar.gz Programming language: MS Visual Basic 6.0 Computer: PC Operating system: MS Windows 98 or later RAM: 30 MB Classification: 14 Catalogue identifier of previous version: AEEG_v1_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1999 Does the new version supersede the previous version?: Yes Nature of problem: Estimating the fractal dimension of 2D and 3D images. Solution method: Optimized implementation of the box-counting algorithm. Reasons for new version: The previous version was limited to bitmap image files. The new application was extended in order to work with objects stored in comma separated values (csv) files. The main advantages are: easier integration with other applications (csv is a widely used, simple text file format); fewer resources consumed and improved performance (only the information of interest, the "black points", is stored); higher resolution (the point coordinates are loaded into Visual Basic double variables [2]); and the possibility of storing three-dimensional objects (e.g. the 3D Sierpinski gasket). In this version the optimized box-counting algorithm [1] was extended to the three-dimensional case. Summary of revisions: The application interface was changed from SDI (single document interface) to MDI (multi-document interface). One form was added in order to provide a graphical user interface for the new functionalities (fractal analysis of 2D and 3D images stored in csv files). Additional comments: User-friendly graphical interface; easy deployment mechanism. Running time: In a first approximation, the algorithm is linear. References: [1] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Comput. Phys. Comm. 180 (2009) 1999-2001. [2] F. Balena, Programming Microsoft Visual Basic 6.0, Microsoft Press, US, 1999.
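The box-counting method at the heart of the application is easy to sketch for point sets such as those read from csv files. The following Python sketch (illustrative only, not the program's Visual Basic source) estimates the dimension as the slope of log N(eps) versus log(1/eps); all names are placeholders.

    import numpy as np

    def box_counting_dimension(points, eps_list):
        # points: (N, d) array in the unit cube (d = 2 or 3, as in the program);
        # eps_list: decreasing box edge lengths.
        pts = np.asarray(points, dtype=float)
        counts = []
        for eps in eps_list:
            boxes = {tuple(idx) for idx in np.floor(pts / eps).astype(int)}
            counts.append(len(boxes))  # number of occupied boxes N(eps)
        # Dimension = slope of log N(eps) against log(1/eps).
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(eps_list)), np.log(counts), 1)
        return slope

    # Sanity check: points along a diagonal line in 2D should give ~1.
    pts = np.column_stack([np.linspace(0.0, 1.0, 20000)] * 2)
    print(box_counting_dimension(pts, [2.0**-k for k in range(2, 9)]))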
Multithreaded transactions in scientific computing. The Growth06_v2 program
NASA Astrophysics Data System (ADS)
Daniluk, Andrzej
2009-07-01
Writing a concurrent program can be more difficult than writing a sequential program. The programmer needs to think about synchronization, race conditions and shared variables. Transactions help reduce the inconvenience of using threads. A transaction is an abstraction which allows programmers to group a sequence of actions on the program into a logical, higher-level computation unit. This paper presents a new version of the GROWTHGr and GROWTH06 programs. New version program summary. Program title: GROWTH06_v2 Catalogue identifier: ADVL_v2_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v2_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 65 255 No. of bytes in distributed program, including test data, etc.: 865 985 Distribution format: tar.gz Programming language: Object Pascal Computer: Pentium-based PC Operating system: Windows 9x, XP, NT, Vista RAM: more than 1 MB Classification: 4.3, 7.2, 6.2, 8, 14 Catalogue identifier of previous version: ADVL_v2_0 Journal reference of previous version: Comput. Phys. Comm. 175 (2006) 678 Does the new version supersede the previous version?: Yes Nature of problem: The programs compute the RHEED intensities during the growth of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The computations are based on the use of kinematical diffraction theory. Solution method: Epitaxial growth of thin films is modelled by a set of non-linear differential equations [1]. The Runge-Kutta method with adaptive stepsize control was used for solving the initial value problem for the non-linear differential equations [2]. Reasons for new version: Following users' suggestions, the functionality of the program has been improved. Moreover, new use cases have been added which make the handling of the program easier and more efficient than before [3]. Summary of revisions: The design pattern (see Fig. 2 of Ref. [3]) has been modified according to the scheme shown in Fig. 1. The graphical user interface (GUI) of the program has been reconstructed. Fig. 2 presents a hybrid diagram of the GUI that shows how onscreen objects connect to use cases. The program has been compiled with English/USA regional and language options. Note: the figures mentioned above are contained in the program distribution file. Unusual features: The program is distributed in the form of a source project GROWTH06_v2.dpr with associated files, and should be compiled using Borland Delphi compilers version 6 or later (including Borland Developer Studio 2006 and CodeGear compilers for Delphi). Additional comments: Two figures are included in the program distribution file. These are captioned Static classes model for Transaction design pattern and A model of a window that shows how onscreen objects connect to use cases. Running time: The typical running time is machine and user-parameter dependent. References: [1] A. Daniluk, Comput. Phys. Comm. 170 (2005) 265. [2] W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, Numerical Recipes in Pascal: The Art of Scientific Computing, first ed., Cambridge University Press, 1989. [3] M. Brzuszek, A. Daniluk, Comput. Phys. Comm. 175 (2006) 678.
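The solution method named above (adaptive-stepsize Runge-Kutta integration of non-linear growth equations) can be illustrated in Python with SciPy's RK45 integrator. The rate equations below are a toy solid-on-solid stand-in for the model of Ref. [1] of the summary, not the program's actual equations.

    import numpy as np
    from scipy.integrate import solve_ivp

    def coverage_rates(t, theta, flux=1.0):
        # Toy layer-growth model: layer n grows on the exposed part of
        # layer n-1, i.e. d(theta_n)/dt = flux * (theta_{n-1} - theta_n),
        # with theta_0 = 1 (the substrate).
        below = np.concatenate(([1.0], theta[:-1]))
        return flux * (below - theta)

    sol = solve_ivp(coverage_rates, (0.0, 3.0), np.zeros(5),
                    method="RK45", rtol=1e-8, atol=1e-10)  # adaptive stepsize
    print(sol.t.size, "accepted steps; coverages:", np.round(sol.y[:, -1], 3))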
NASA Astrophysics Data System (ADS)
Bauschlicher, Charles W., Jr.; Ricca, A.; Boersma, C.; Allamandola, L. J.
2018-02-01
Version 3.00 of the library of computed spectra in the NASA Ames PAH IR Spectroscopic Database (PAHdb) is described. Version 3.00 introduces the use of multiple scale factors, instead of the single scaling factor used previously, to align the theoretical harmonic frequencies with the experimental fundamentals. The use of multiple scale factors permits the use of a variety of basis sets; this allows new PAH species to be included in the database, such as those containing oxygen, and yields an improved treatment of strained species and those containing nitrogen. In addition, the computed spectra of 2439 new PAH species have been added. The impact of these changes on the analysis of an astronomical spectrum through database-fitting is considered and compared with a fit using Version 2.00 of the library of computed spectra. Finally, astronomical constraints are defined for the PAH spectral libraries in PAHdb.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez, R. Navarro; Schunck, N.; Lasseri, R.
2017-03-09
HFBTHO is a physics computer code that is used to model the structure of the nucleus. It is an implementation of nuclear energy Density Functional Theory (DFT), where the energy of the nucleus is obtained by integration over space of some phenomenological energy density, which is itself a functional of the neutron and proton densities. In HFBTHO, the energy density derives either from the zero-range Skyrme or the finite-range Gogny effective two-body interaction between nucleons. Nuclear superfluidity is treated in the Hartree-Fock-Bogoliubov (HFB) approximation, and axial symmetry of the nuclear shape is assumed. This version is the 3rd release of the program; the two previous versions were published in Computer Physics Communications [1,2]. The previous version was released at LLNL under the GPL 3 Open Source License and was given release code LLNL-CODE-573953.
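Schematically, the DFT ansatz described above writes the nuclear binding energy as a spatial integral of a local energy density (generic notation, not HFBTHO's internal conventions):

    E = \int d^3r \; \mathcal{E}\big(\rho_n(\mathbf{r}), \rho_p(\mathbf{r})\big)

where \rho_n and \rho_p are the neutron and proton densities, and the functional \mathcal{E} is of Skyrme (zero-range) or Gogny (finite-range) type.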
User Manual for the NASA Glenn Ice Accretion Code LEWICE: Version 2.0
NASA Technical Reports Server (NTRS)
Wright, William B.
1999-01-01
A research project is underway at NASA Glenn to produce a computer code which can accurately predict ice growth under a wide range of meteorological conditions for any aircraft surface. This report presents a description of the code inputs and outputs from version 2.0 of this code, which is called LEWICE. This version differs from previous releases in its robustness and its ability to reproduce results accurately for different spacing and time step criteria across computing platforms. It also differs in the extensive effort undertaken to compare the results against the database of ice shapes that have been generated in the NASA Glenn Icing Research Tunnel (IRT) [1]. This report describes only the features of the code related to the use of the program; it does not describe the inner workings of the code or the physical models used. This information is available in the form of several unpublished documents which will be collectively referred to as a Programmer's Manual for LEWICE [2] in this report. These reports are intended as an update/replacement for all previous user manuals of LEWICE. In addition to describing the changes and improvements made for this version, information from previous manuals may be duplicated so that the user will not need to consult previous manuals to use this code.
Updated System-Availability and Resource-Allocation Program
NASA Technical Reports Server (NTRS)
Viterna, Larry
2004-01-01
A second version of the Availability, Cost and Resource Allocation (ACARA) computer program has become available. The first version was reported in an earlier tech brief. To recapitulate: ACARA analyzes the availability, mean time between failures of components, life-cycle costs, and scheduling of resources of a complex system of equipment. ACARA uses a statistical Monte Carlo method to simulate the failure and repair of components while complying with user-specified constraints on spare parts and resources. ACARA evaluates the performance of the system on the basis of a mathematical model developed from a block-diagram representation. The previous version ran under the MS-DOS operating system and could not be run under the most recent versions of the Windows operating system. The current version incorporates the algorithms of the previous version but is compatible with Windows and utilizes menus and a file-management approach typical of Windows-based software.
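As an illustration of the Monte Carlo approach the abstract describes, the Python sketch below estimates the availability of a single repairable unit with exponential failure and repair times. It is a minimal stand-in that assumes nothing about ACARA's actual model, which also covers block diagrams, spares, and resource constraints.

    import random

    def simulate_availability(mtbf, mttr, horizon, n_runs=10000, seed=1):
        # Monte Carlo estimate of availability for one repairable unit,
        # assuming exponential time-to-failure and time-to-repair.
        rng = random.Random(seed)
        up_total = 0.0
        for _ in range(n_runs):
            t, up = 0.0, 0.0
            while t < horizon:
                ttf = rng.expovariate(1.0 / mtbf)       # time to next failure
                up += min(ttf, horizon - t)             # uptime before failure
                t += ttf + rng.expovariate(1.0 / mttr)  # plus repair downtime
            up_total += up
        return up_total / (n_runs * horizon)

    # Analytic steady-state availability is MTBF/(MTBF+MTTR) = 0.9 here.
    print(simulate_availability(mtbf=90.0, mttr=10.0, horizon=1000.0))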
Application of a Scalable, Parallel, Unstructured-Grid-Based Navier-Stokes Solver
NASA Technical Reports Server (NTRS)
Parikh, Paresh
2001-01-01
A parallel version of an unstructured-grid based Navier-Stokes solver, USM3Dns, previously developed for efficient operation on a variety of parallel computers, has been enhanced to incorporate upgrades made to the serial version. The resultant parallel code has been extensively tested on a variety of problems of aerospace interest and on two sets of parallel computers to understand and document its characteristics. An innovative grid renumbering construct and use of non-blocking communication are shown to produce superlinear computing performance. Preliminary results from parallelization of a recently introduced "porous surface" boundary condition are also presented.
QDENSITY—A Mathematica quantum computer simulation
NASA Astrophysics Data System (ADS)
Juliá-Díaz, Bruno; Burdis, Joseph M.; Tabakin, Frank
2009-03-01
This Mathematica 6.0 package is a simulation of a Quantum Computer. The program provides a modular, instructive approach for generating the basic elements that make up a quantum circuit. The main emphasis is on using the density matrix, although an approach using state vectors is also implemented in the package. The package commands are defined in Qdensity.m which contains the tools needed in quantum circuits, e.g., multiqubit kets, projectors, gates, etc. New version program summary. Program title: QDENSITY 2.0 Catalogue identifier: ADXH_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXH_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 26 055 No. of bytes in distributed program, including test data, etc.: 227 540 Distribution format: tar.gz Programming language: Mathematica 6.0 Operating system: Any which supports Mathematica; tested under Microsoft Windows XP, Macintosh OS X, and Linux FC4 Catalogue identifier of previous version: ADXH_v1_0 Journal reference of previous version: Comput. Phys. Comm. 174 (2006) 914 Classification: 4.15 Does the new version supersede the previous version?: It offers an alternative, more up-to-date implementation. Nature of problem: Analysis and design of quantum circuits, quantum algorithms and quantum clusters. Solution method: A Mathematica package is provided which contains commands to create and analyze quantum circuits. Several Mathematica notebooks containing relevant examples: Teleportation, Shor's Algorithm and Grover's search are explained in detail. A tutorial, Tutorial.nb, is also enclosed. Reasons for new version: The package has been updated to make it fully compatible with Mathematica 6.0. Summary of revisions: The package has been updated to make it fully compatible with Mathematica 6.0. Running time: Most examples included in the package, e.g., the tutorial, Shor's examples, Teleportation examples and Grover's search, run in less than a minute on a Pentium 4 processor (2.6 GHz). The running time for a quantum computation depends crucially on the number of qubits employed.
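The density-matrix approach the package emphasizes boils down to evolving rho rather than a state vector: a gate U acts as rho -> U rho U†. A minimal numpy illustration of that rule follows (plain Python for illustration, not QDENSITY's Mathematica commands):

    import numpy as np

    ket0 = np.array([1.0, 0.0], dtype=complex)
    rho = np.outer(ket0, ket0.conj())       # density matrix |0><0|

    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard
    rho = H @ rho @ H.conj().T              # gate action: rho -> U rho U^dagger

    print(np.real(np.diag(rho)))            # measurement probabilities: [0.5 0.5]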
NASA Technical Reports Server (NTRS)
Yeh, Hue-Hsia; Brown, Cheryl; Jeng, Frank
2012-01-01
Advanced Life Support Sizing Analysis Tool (ALSSAT) at the time of this reporting has been updated to version 6.0. A previous version was described in Tool for Sizing Analysis of the Advanced Life Support System (MSC-23506), NASA Tech Briefs, Vol. 29, No. 12 (December 2005), page 43. To recapitulate: ALSSAT is a computer program for sizing and analyzing designs of environmental-control and life-support systems for spacecraft and surface habitats to be involved in exploration of Mars and the Moon. Of particular interest for analysis by ALSSAT are conceptual designs of advanced life-support (ALS) subsystems that utilize physicochemical and biological processes to recycle air and water and process human wastes to reduce the need for resource resupply. ALSSAT is a means of investigating combinations of such subsystem technologies featuring various alternative conceptual designs, thereby assisting in determining which combination is most cost-effective. ALSSAT version 6.0 has been improved over previous versions in several respects, including the following additions: an interface for reading sizing data from an ALS database, computational models of redundant regenerative CO2 and Moisture Removal Amine Swing Beds (CAMRAS) for CO2 removal, an upgrade of the Temperature & Humidity Control subsystem's Common Cabin Air Assembly to a detailed sizing model, and an upgrade of the Food-management subsystem.
QuTiP 2: A Python framework for the dynamics of open quantum systems
NASA Astrophysics Data System (ADS)
Johansson, J. R.; Nation, P. D.; Nori, Franco
2013-04-01
We present version 2 of QuTiP, the Quantum Toolbox in Python. Compared to the preceding version [J.R. Johansson, P.D. Nation, F. Nori, Comput. Phys. Commun. 183 (2012) 1760], we have introduced numerous new features, enhanced performance, and made changes in the Application Programming Interface (API) for improved functionality and consistency within the package, as well as increased compatibility with existing conventions used in other scientific software packages for Python. The most significant new features include efficient solvers for arbitrary time-dependent Hamiltonians and collapse operators, support for the Floquet formalism, and new solvers for Bloch-Redfield and Floquet-Markov master equations. Here we introduce these new features, demonstrate their use, and give a summary of the important backward-incompatible API changes introduced in this version. Catalog identifier: AEMB_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMB_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 33625 No. of bytes in distributed program, including test data, etc.: 410064 Distribution format: tar.gz Programming language: Python. Computer: i386, x86-64. Operating system: Linux, Mac OSX. RAM: 2+ Gigabytes Classification: 7. External routines: NumPy, SciPy, Matplotlib, Cython Catalog identifier of previous version: AEMB_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 1760 Does the new version supersede the previous version?: Yes Nature of problem: Dynamics of open quantum systems. Solution method: Numerical solutions to Lindblad, Floquet-Markov, and Bloch-Redfield master equations, as well as the Monte Carlo wave function method. Reasons for new version: Compared to the preceding version we have introduced numerous new features, enhanced performance, and made changes in the Application Programming Interface (API) for improved functionality and consistency within the package, as well as increased compatibility with existing conventions used in other scientific software packages for Python. The most significant new features include efficient solvers for arbitrary time-dependent Hamiltonians and collapse operators, support for the Floquet formalism, and new solvers for Bloch-Redfield and Floquet-Markov master equations. Restrictions: Problems must meet the criteria for using the master equation in Lindblad, Floquet-Markov, or Bloch-Redfield form. Running time: A few seconds up to several tens of hours, depending on size of the underlying Hilbert space.
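A minimal usage example in the spirit of the package: a damped two-level system evolved under a Lindblad master equation via QuTiP's mesolve. This follows the standard QuTiP interface, though exact call signatures may differ between QuTiP releases.

    import numpy as np
    from qutip import basis, destroy, mesolve, sigmaz

    # Damped qubit: H = (w0/2) sigma_z with collapse operator sqrt(g) * a.
    w0, g = 1.0, 0.1
    a = destroy(2)                          # qubit lowering operator
    H = 0.5 * w0 * sigmaz()
    tlist = np.linspace(0.0, 50.0, 200)

    result = mesolve(H, basis(2, 1), tlist,
                     c_ops=[np.sqrt(g) * a],   # Lindblad dissipator
                     e_ops=[a.dag() * a])      # track excited-state population
    print(result.expect[0][-1])                # decays toward zero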
The Invar tensor package: Differential invariants of Riemann
NASA Astrophysics Data System (ADS)
Martín-García, J. M.; Yllanes, D.; Portugal, R.
2008-10-01
The long standing problem of the relations among the scalar invariants of the Riemann tensor is computationally solved for all 6·10 objects with up to 12 derivatives of the metric. This covers cases ranging from products of up to 6 undifferentiated Riemann tensors to cases with up to 10 covariant derivatives of a single Riemann. We extend our computer algebra system Invar to produce within seconds a canonical form for any of those objects in terms of a basis. The process is as follows: (1) an invariant is converted in real time into a canonical form with respect to the permutation symmetries of the Riemann tensor; (2) Invar reads a database of more than 6·10 relations and applies those coming from the cyclic symmetry of the Riemann tensor; (3) then applies the relations coming from the Bianchi identity, (4) the relations coming from commutations of covariant derivatives, (5) the dimensionally-dependent identities for dimension 4, and finally (6) simplifies invariants that can be expressed as product of dual invariants. Invar runs on top of the tensor computer algebra systems xTensor (for Mathematica) and Canon (for Maple). Program summary. Program title: Invar Tensor Package v2.0 Catalogue identifier: ADZK_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZK_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3 243 249 No. of bytes in distributed program, including test data, etc.: 939 Distribution format: tar.gz Programming language: Mathematica and Maple Computer: Any computer running Mathematica versions 5.0 to 6.0 or Maple versions 9 and 11 Operating system: Linux, Unix, Windows XP, MacOS RAM: 100 Mb Word size: 64 or 32 bits Supplementary material: The new database of relations is much larger than that for the previous version and therefore has not been included in the distribution. To obtain the Mathematica and Maple database files click on this link. Classification: 1.5, 5 Does the new version supersede the previous version?: Yes. The previous version (1.0) only handled algebraic invariants. The current version (2.0) has been extended to cover differential invariants as well. Nature of problem: Manipulation and simplification of scalar polynomial expressions formed from the Riemann tensor and its covariant derivatives. Solution method: Algorithms of computational group theory to simplify expressions with tensors that obey permutation symmetries. Tables of syzygies of the scalar invariants of the Riemann tensor. Reasons for new version: With this new version, the user can manipulate differential invariants of the Riemann tensor. Differential invariants are required in many physical problems in classical and quantum gravity. Summary of revisions: The database of syzygies has been expanded by a factor of 30. New commands were added in order to deal with the enlarged database and to manipulate the covariant derivative. Restrictions: The present version only handles scalars, and not expressions with free indices. Additional comments: The distribution file for this program is over 53 Mbytes and therefore is not delivered directly when download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. Running time: One second to fully reduce any monomial of the Riemann tensor up to degree 7 or order 10 in terms of independent invariants.
The Mathematica notebook included in the distribution takes approximately 5 minutes to run.
HELAC-PHEGAS: A generator for all parton level processes
NASA Astrophysics Data System (ADS)
Cafarella, Alessandro; Papadopoulos, Costas G.; Worek, Malgorzata
2009-10-01
The updated version of the HELAC-PHEGAS event generator is presented. The matrix elements are calculated through Dyson-Schwinger recursive equations using the color-connection representation. Phase-space generation is based on a multichannel approach, including optimization. HELAC-PHEGAS generates parton-level events with all necessary information, in the most recent Les Houches Accord format, for the study of any process within the Standard Model in hadron and lepton colliders. New version program summary. Program title: HELAC-PHEGAS Catalogue identifier: ADMS_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADMS_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 35 986 No. of bytes in distributed program, including test data, etc.: 380 214 Distribution format: tar.gz Programming language: Fortran Computer: All Operating system: Linux Classification: 11.1, 11.2 External routines: Optionally Les Houches Accord (LHA) PDF Interface library (http://projects.hepforge.org/lhapdf/) Catalogue identifier of previous version: ADMS_v1_0 Journal reference of previous version: Comput. Phys. Comm. 132 (2000) 306 Does the new version supersede the previous version?: Yes, partly Nature of problem: One of the most striking features of final states in current and future colliders is the large number of events with several jets. Being able to predict their features is essential. To achieve this, the calculations need to describe as accurately as possible the full matrix elements for the underlying hard processes. Even at leading order, perturbation theory based on Feynman graphs runs into computational problems, since the number of graphs contributing to the amplitude grows as n!. Solution method: Recursive algorithms based on Dyson-Schwinger equations have been developed recently in order to overcome the computational obstacles. The calculation of the amplitude, using Dyson-Schwinger recursive equations, results in a computational cost growing asymptotically as 3^n, where n is the number of particles involved in the process. Off-shell subamplitudes are introduced, for which a recursion relation has been obtained, allowing one to express an n-particle amplitude in terms of subamplitudes with 1, 2, …, up to (n-1) particles. The color-connection representation is used in order to treat amplitudes involving colored particles. In the present version HELAC-PHEGAS can be used to efficiently obtain helicity amplitudes, total cross sections, and parton-level event samples in LHA format for arbitrary multiparticle processes in the Standard Model in leptonic, pp̄ and pp collisions. Reasons for new version: Substantial improvements, major functionality upgrade. Summary of revisions: Color-connection representation, efficient integration over PDFs via the PARNI algorithm, interface to LHAPDF, parton-level events generated in the most recent LHA format, k_T reweighting for parton-shower matching, numerical predictions for amplitudes of arbitrary processes for phase-space points provided by the user, a new user interface, and the possibility to run over computer clusters. Running time: Depending on the process studied; usually from seconds to hours. References: A. Kanaki, C.G. Papadopoulos, Comput. Phys. Comm. 132 (2000) 306. C.G. Papadopoulos, Comput. Phys. Comm. 137 (2001) 247. URL: http://www.cern.ch/helac-phegas.
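To make the quoted cost scalings concrete (factorial growth of Feynman-graph counting versus the 3^n growth of the Dyson-Schwinger recursion), a few sample values follow; the numbers are plain arithmetic, not measurements of the program:

    import math

    # n! (graph counting) quickly dwarfs 3**n (recursive cost).
    for n in (4, 8, 12, 16):
        print(n, math.factorial(n), 3**n, round(math.factorial(n) / 3**n, 1))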
Simulating electron energy loss spectroscopy with the MNPBEM toolbox
NASA Astrophysics Data System (ADS)
Hohenester, Ulrich
2014-03-01
Within the MNPBEM toolbox, we show how to simulate electron energy loss spectroscopy (EELS) of plasmonic nanoparticles using a boundary element method approach. The methodology underlying our approach closely follows the concepts developed by García de Abajo and coworkers (García de Abajo, 2010). We introduce two classes, eelsret and eelsstat, that allow, in combination with our recently developed MNPBEM toolbox, for a simple, robust, and efficient computation of EEL spectra and maps. The classes are accompanied by a number of demo programs for EELS simulation of metallic nanospheres, nanodisks, and nanotriangles, and for electron trajectories passing by or penetrating through the metallic nanoparticles. We also discuss how to compute electric fields induced by the electron beam and cathodoluminescence. Catalogue identifier: AEKJ_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKJ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 38886 No. of bytes in distributed program, including test data, etc.: 1222650 Distribution format: tar.gz Programming language: Matlab 7.11.0 (R2010b). Computer: Any which supports Matlab 7.11.0 (R2010b). Operating system: Any which supports Matlab 7.11.0 (R2010b). RAM: ≥1 GB Classification: 18. Catalogue identifier of previous version: AEKJ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 370 External routines: MESH2D available at www.mathworks.com Does the new version supersede the previous version?: Yes Nature of problem: Simulation of electron energy loss spectroscopy (EELS) for plasmonic nanoparticles. Solution method: Boundary element method using electromagnetic potentials. Reasons for new version: The new version of the toolbox includes two additional classes for the simulation of electron energy loss spectroscopy (EELS) of plasmonic nanoparticles, and corrects a few minor bugs and inconsistencies. Summary of revisions: New classes "eelsstat" and "eelsret" for the simulation of electron energy loss spectroscopy (EELS) of plasmonic nanoparticles have been added. A few minor errors in the implementation of dipole excitation have been corrected. Running time: Depending on surface discretization, between seconds and hours.
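For orientation, the central quantity in such EELS simulations is the loss probability obtained from the field induced along the electron trajectory; in the framework of García de Abajo (2010) it takes the standard form below, quoted up to convention choices rather than from the toolbox documentation:

    \Gamma_{\mathrm{EELS}}(\omega) = \frac{e}{\pi\hbar\omega} \int dt \; \mathrm{Re}\left\{ e^{-i\omega t}\, \mathbf{v}\cdot\mathbf{E}_{\mathrm{ind}}\big(\mathbf{r}_e(t),\omega\big) \right\}

where \mathbf{r}_e(t) and \mathbf{v} are the electron trajectory and velocity, and \mathbf{E}_{\mathrm{ind}} is the induced electric field computed by the boundary element method.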
NASA Astrophysics Data System (ADS)
Angeli, C.; Cimiraglia, R.
2013-02-01
A symbolic program performing the Formal Reduction of Density Operators (FRODO), formerly developed in the MuPAD computer algebra system with the purpose of evaluating the matrix elements of the electronic Hamiltonian between internally contracted functions in a complete active space (CAS) scheme, has been rewritten in Mathematica. New version program summary. Program title: FRODO Catalogue identifier: ADVY_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVY_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3878 No. of bytes in distributed program, including test data, etc.: 170729 Distribution format: tar.gz Programming language: Mathematica Computer: Any computer on which the Mathematica computer algebra system can be installed Operating system: Linux Classification: 5 Catalogue identifier of previous version: ADVY_v1_0 Journal reference of previous version: Comput. Phys. Comm. 171 (2005) 63 Does the new version supersede the previous version?: No Nature of problem: In order to improve on the CAS-SCF wavefunction one can resort to multireference perturbation theory or configuration interaction based on internally contracted functions (ICFs), which are obtained by application of the excitation operators to the reference CAS-SCF wavefunction. The previous formulation of such matrix elements, in the MuPAD computer algebra system, has been rewritten using Mathematica. Solution method: The method adopted consists in successively eliminating all occurrences of inactive orbital indices (core and virtual) from the products of excitation operators which appear in the definition of the ICFs and in the electronic Hamiltonian expressed in the second-quantization formalism. Reasons for new version: Some years ago we published in this journal a couple of papers [1,2], hereafter referred to as papers I and II, respectively dedicated to the automated evaluation of the matrix elements of the molecular electronic Hamiltonian between internally contracted functions [3] (ICFs). In paper II the program FRODO (after Formal Reduction Of Density Operators) was presented with the purpose of providing working formulas for each occurrence of the ICFs. The original FRODO program was written in the MuPAD computer algebra system [4] and was actively used in our group for the generation of the matrix elements to be employed in the third-order n-electron valence state perturbation theory (NEVPT) [5-8] as well as in the internally contracted configuration interaction (IC-CI) [9]. We present a new version of the program FRODO written in the Mathematica system [10]. The reason for the rewriting of the program lies in the fact that, on the one hand, MuPAD no longer seems to be available as a stand-alone system and, on the other hand, Mathematica, due to its ubiquitousness, appears to be increasingly the computer algebra system most widely used nowadays. Restrictions: The program is limited to no more than doubly excited ICFs. Running time: The examples described in the Readme file take a few seconds to run. References: [1] C. Angeli, R. Cimiraglia, Comp. Phys. Comm. 166 (2005) 53. [2] C. Angeli, R. Cimiraglia, Comp. Phys. Comm. 171 (2005) 63. [3] H.-J. Werner, P. J. Knowles, Adv. Chem. Phys. 89 (1988) 5803. [4] B. Fuchssteiner, W.
Oevel: http://www.mupad.de, MuPAD research group, University of Paderborn. MuPAD version 2.5.3 for Linux. [5] C. Angeli, R. Cimiraglia, S. Evangelisti, T. Leininger, J.-P. Malrieu, J. Chem. Phys. 114 (2001) 10252. [6] C. Angeli, R. Cimiraglia, J.-P. Malrieu, J. Chem. Phys. 117 (2002) 9138. [7] C. Angeli, B. Bories, A. Cavallini, R. Cimiraglia, J. Chem. Phys. 124 (2006) 054108. [8] C. Angeli, M. Pastore, R. Cimiraglia, Theor. Chem. Acc. 117 (2007) 743. [9] C. Angeli, R. Cimiraglia, Mol. Phys., in press, DOI: 10.1080/00268976.2012.689872. [10] http://www.wolfram.com/Mathematica. Mathematica version 8 for Linux.
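For context, the electronic Hamiltonian in the second-quantization formalism referred to above has the generic quantum-chemistry form (standard notation, not FRODO's internal conventions):

    \hat{H} = \sum_{pq} h_{pq}\, a_p^{\dagger} a_q + \tfrac{1}{2} \sum_{pqrs} \langle pq | rs \rangle\, a_p^{\dagger} a_q^{\dagger} a_s a_r

FRODO's task is then to eliminate the inactive (core and virtual) orbital indices from products of such operator strings with the excitation operators defining the ICFs.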
CADNA_C: A version of CADNA for use with C or C++ programs
NASA Astrophysics Data System (ADS)
Lamotte, Jean-Luc; Chesneaux, Jean-Marie; Jézéquel, Fabienne
2010-11-01
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. The CADNA_C version enables this estimation in C or C++ programs, while the previous version had been developed for Fortran programs. The CADNA_C version has the same features as the previous one: with CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. New version program summary. Program title: CADNA_C Catalogue identifier: AEGQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 60 075 No. of bytes in distributed program, including test data, etc.: 710 781 Distribution format: tar.gz Programming language: C++ Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM Operating system: LINUX, UNIX Classification: 6.5 Catalogue identifier of previous version: AEAT_v1_0 Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 933 Does the new version supersede the previous version?: No Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Reasons for new version: The previous version (AEAT_v1_0) enables the estimation of round-off error propagation in Fortran programs [2]. The new version has been developed to enable this estimation in C or C++ programs. Summary of revisions: The CADNA_C source code consists of one assembly language file (cadna_rounding.s) and twenty-three C++ language files (including three header files). cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the C++ compiler used. This assembly file contains routines which are frequently called in the CADNA_C C++ files to change the rounding mode. The C++ language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA_C specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables, and the definition of mathematical functions which can be used with stochastic arguments.
As a remark, on 64-bit processors, the mathematical library associated with the GNU C++ compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, which the random rounding mode is based on. Therefore, if CADNA_C is used on a 64-bit processor with the GNU C++ compiler, mathematical functions are computed with rounding to the nearest; otherwise they are computed with the random rounding mode. It must be pointed out that the knowledge of the accuracy of the argument of a mathematical function is never lost. Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf and a reference guide named ref_cadna.pdf. The user guide shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. The reference guide briefly describes each function of the library. The source code (which consists of C++ and assembly files) is located in the src directory. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
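The idea behind Discrete Stochastic Arithmetic can be mimicked in a few lines: run the same computation several times with a tiny random perturbation standing in for CADNA's random rounding mode, and estimate how many significant digits the samples share. The Python sketch below is illustrative only; it is not how the CADNA library is implemented or used.

    import math, random

    def noisy(x, rng, ulp=2.0**-52):
        # Crude stand-in for random rounding: perturb each result by up to one ulp.
        return x * (1.0 + rng.uniform(-1.0, 1.0) * ulp)

    def unstable(rng):
        # Catastrophic cancellation: (1e16 + 1.0) - 1e16.
        return noisy(noisy(1e16 + 1.0, rng) - 1e16, rng)

    rng = random.Random(0)
    samples = [unstable(rng) for _ in range(10)]
    mean = sum(samples) / len(samples)
    spread = max(samples) - min(samples)
    # Shared significant digits ~ -log10(relative spread); near zero here,
    # flagging the computed value as numerically unreliable.
    digits = -math.log10(spread / abs(mean)) if spread and mean else 0.0
    print(round(mean, 3), "estimated significant digits:", max(0.0, round(digits, 1)))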
Hyper-Fractal Analysis: A visual tool for estimating the fractal dimension of 4D objects
NASA Astrophysics Data System (ADS)
Grossu, I. V.; Grossu, I.; Felea, D.; Besliu, C.; Jipa, Al.; Esanu, T.; Bordeianu, C. C.; Stan, E.
2013-04-01
This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images and 3D objects (Grossu et al. (2010) [1]). The program was extended for working with four-dimensional objects stored in comma separated values files. This might be of interest in biomedicine, for analyzing the evolution in time of three-dimensional images. New version program summary. Program title: Hyper-Fractal Analysis (Fractal Analysis v03) Catalogue identifier: AEEG_v3_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 745761 No. of bytes in distributed program, including test data, etc.: 12544491 Distribution format: tar.gz Programming language: MS Visual Basic 6.0 Computer: PC Operating system: MS Windows 98 or later RAM: 100 MB Classification: 14 Catalogue identifier of previous version: AEEG_v2_0 Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 831-832 Does the new version supersede the previous version? Yes Nature of problem: Estimating the fractal dimension of 4D images. Solution method: Optimized implementation of the 4D box-counting algorithm. Reasons for new version: Inspired by existing applications of 3D fractals in biomedicine [3], we extended the optimized version of the box-counting algorithm [1,2] to the four-dimensional case. This might be of interest in analyzing the evolution in time of 3D images. The box-counting algorithm was extended in order to support 4D objects stored in comma separated values files. A new form was added for generating 2D, 3D, and 4D test data. The application was tested on 4D objects with known dimension, e.g. the Sierpinski hypertetrahedron gasket, Df=ln(5)/ln(2) (Fig. 1). The algorithm could be extended, with minimum effort, to a higher number of dimensions. Easy integration with other applications by using the very simple comma separated values file format for storing multi-dimensional images. Implementation of a χ2 test as a criterion for deciding whether an object is fractal or not. User-friendly graphical interface. (Fig. 1: Hyper-Fractal Analysis test on the Sierpinski hypertetrahedron 4D gasket, Df=ln(5)/ln(2)≅2.32.) Running time: In a first approximation, the algorithm is linear [2]. References: [1] I.V. Grossu, D. Felea, C. Besliu, Al. Jipa, C.C. Bordeianu, E. Stan, T. Esanu, Comput. Phys. Comm. 181 (2010) 831-832. [2] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Comput. Phys. Comm. 180 (2009) 1999-2001. [3] J. Ruiz de Miras, J. Navas, P. Villoslada, F.J. Esteban, Computer Methods and Programs in Biomedicine 104(3) (2011) 452-460.
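The χ2 criterion mentioned above amounts to checking how well log N(ε) is linear in log(1/ε); the estimated dimension is the slope of that fit. A small Python sketch of the decision rule follows (names illustrative, not taken from the Visual Basic source):

    import numpy as np

    def fractal_fit(eps, counts):
        # Fit log N(eps) = D*log(1/eps) + c; D is the box-counting dimension.
        x = np.log(1.0 / np.asarray(eps))
        y = np.log(np.asarray(counts, dtype=float))
        D, c = np.polyfit(x, y, 1)
        chi2 = float(np.sum((y - (D * x + c)) ** 2))  # residual sum of squares
        return D, chi2  # small chi2 -> consistent with power-law (fractal) scaling

    eps = [2.0**-k for k in range(1, 8)]
    counts = [2**k for k in range(1, 8)]   # an ideal D = 1 object
    print(fractal_fit(eps, counts))        # -> (1.0, ~0.0)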
New version: GRASP2K relativistic atomic structure package
NASA Astrophysics Data System (ADS)
Jönsson, P.; Gaigalas, G.; Bieroń, J.; Fischer, C. Froese; Grant, I. P.
2013-09-01
A revised version of GRASP2K [P. Jönsson, X. He, C. Froese Fischer, I.P. Grant, Comput. Phys. Commun. 177 (2007) 597] is presented. It supports earlier non-block and block versions of codes as well as a new block version in which the njgraf library module [A. Bar-Shalom, M. Klapisch, Comput. Phys. Commun. 50 (1988) 375] has been replaced by the librang angular package developed by Gaigalas, based on the theory of [G. Gaigalas, Z.B. Rudzikas, C. Froese Fischer, J. Phys. B: At. Mol. Phys. 30 (1997) 3747; G. Gaigalas, S. Fritzsche, I.P. Grant, Comput. Phys. Commun. 139 (2001) 263]. Tests have shown that errors encountered by njgraf do not occur with the new angular package. The three versions are denoted v1, v2, and v3, respectively. In addition, in v3, the coefficients of fractional parentage have been extended to j=9/2, making calculations feasible for the lanthanides and actinides. Changes in v2 include minor improvements. For example, the new version of rci2 may be used to compute quantum electrodynamic (QED) corrections only from selected orbitals. In v3, a new program, jj2lsj, reports the percentage composition of the wave function in LSJ, and the program rlevels has been modified to report the configuration state function (CSF) with the largest coefficient of an LSJ expansion. The bioscl2 and bioscl3 application programs have been modified to produce a file of transition data with one record for each transition in the same format as in ATSP2K [C. Froese Fischer, G. Tachiev, G. Gaigalas, M.R. Godefroid, Comput. Phys. Commun. 176 (2007) 559], which identifies each atomic state by the total energy and a label for the CSF with the largest expansion coefficient in LSJ intermediate coupling. All versions of the codes have been adapted for 64-bit computer architecture. Program Summary. Program title: GRASP2K, version 1_1 Catalogue identifier: ADZL_v1_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZL_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 730252 No. of bytes in distributed program, including test data, etc.: 14808872 Distribution format: tar.gz Programming language: Fortran. Computer: Intel Xeon, 2.66 GHz. Operating system: Suse, Ubuntu, and Debian Linux 64-bit. RAM: 500 MB or more Classification: 2.1. Catalogue identifier of previous version: ADZL_v1_0 Journal reference of previous version: Comput. Phys. Comm. 177 (2007) 597 Does the new version supersede the previous version?: Yes Nature of problem: Prediction of atomic properties — atomic energy levels, oscillator strengths, radiative decay rates, hyperfine structure parameters, Landé gJ-factors, and specific mass shift parameters — using a multiconfiguration Dirac-Hartree-Fock approach. Solution method: The computational method is the same as in the previous GRASP2K version [1], except that for v3 codes the njgraf library module [2] for recoupling has been replaced by librang [3,4]. Reasons for new version: New angular libraries with improved performance are available. Also, methodology for transforming from jj- to LSJ-coupling has been developed. Summary of revisions: New angular libraries in which the coefficients of fractional parentage have been extended to j=9/2, making calculations feasible for the lanthanides and actinides.
Inclusion of a new program jj2lsj, which reports the percentage composition of the wave function in LSJ. Transition programs have been modified to produce a file of transition data with one record for each transition in the same format as ATSP2K [C. Froese Fischer, G. Tachiev, G. Gaigalas and M.R. Godefroid, Comput. Phys. Commun. 176 (2007) 559], which identifies each atomic state by the total energy and a label for the CSF with the largest expansion coefficient in LSJ intermediate coupling. Updated to 64-bit architecture. A comprehensive user manual in pdf format for the program package has been added. Restrictions: The packing algorithm restricts the maximum number of orbitals to ≤214. The tables of reduced coefficients of fractional parentage used in this version are limited to subshells with j≤9/2 [5]; occupied subshells with j>9/2 are, therefore, restricted to a maximum of two electrons. Some other parameters, such as the maximum number of subshells of a CSF outside a common set of closed shells, are determined by a parameter.def file that can be modified prior to compile time. Unusual features: The bioscl3 program reports transition data in the same format as in ATSP2K [6], and the data processing program tables of the latter package can be used. The tables program takes a name.lsj file, usually a concatenated file of all the .lsj transition files for a given atom or ion, and finds the energy structure of the levels and the multiplet transition arrays. The tables posted at the website http://atoms.vuse.vanderbilt.edu are examples of tables produced by the tables program. With the extension of the coefficients of fractional parentage to j=9/2, calculations for the lanthanides and actinides become possible. Running time: CPU time required to execute test cases: 70.5 s.
NASA Astrophysics Data System (ADS)
Brzuszek, Marcin; Daniluk, Andrzej
2006-11-01
Writing a concurrent program can be more difficult than writing a sequential program. The programmer needs to think about synchronisation, race conditions and shared variables. Transactions help reduce the inconvenience of using threads. A transaction is an abstraction which allows programmers to group a sequence of actions on the program into a logical, higher-level computation unit. This paper presents multithreaded versions of the GROWTH program, which allow the calculation of the layer coverages during the growth of thin epitaxial films and the corresponding RHEED intensities according to the kinematical approximation. The presented programs also contain graphical user interfaces, which enable displaying program data at run-time. New version program summary. Titles of programs: GROWTHGr, GROWTH06 Catalogue identifier: ADVL_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v2_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Catalogue identifier of previous version: ADVL Does the new version supersede the original program?: No Computer for which the new version is designed and others on which it has been tested: Pentium-based PC Operating systems or monitors under which the new version has been tested: Windows 9x, XP, NT Programming language used: Object Pascal Memory required to execute with typical data: More than 1 MB Number of bits in a word: 64 bits Number of processors used: 1 No. of lines in distributed program, including test data, etc.: 20 931 Number of bytes in distributed program, including test data, etc.: 1 311 268 Distribution format: tar.gz Nature of physical problem: The programs compute the RHEED intensities during the growth of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The computations are based on the use of kinematical diffraction theory [1] [P.I. Cohen, G.S. Petrich, P.R. Pukite, G.J. Whaley, A.S. Arrott, Surf. Sci. 216 (1989) 222].
Sharma, C; Gallagher, R R
1994-01-01
Improvements (Version 4) have been implemented in a Computer-Based Respiratory Measurement System (CBRMS) previously identified as Version 3. The programming language has been changed from Pascal to C. A Gateway 2000 desktop computer with a 486 DX2/50 MHz CPU and a plug-in data I/O board (KEITHLEY METRABYTE/ASYST/DAC's DAS-HRES 16-bit analog and digital I/O board) replaces the HP 9836 system used in Version 3. The breath-by-breath system consists of a mass spectrometer for measuring fractional concentrations of oxygen and carbon dioxide and accommodates a turbine or pneumotachometer for measuring inspiratory and expiratory flows. The temperature of the inspiratory and expiratory gases can be monitored if temperature corrections are necessary for the flow measurement device. These signals are presented to the PC via the data acquisition module. To compare the two versions, ten significant respiratory parameters were investigated and compared for physiological resting states and steady states obtained during an exercise forcing. Both graphical and statistical (analysis of variance, regression, and correlation) tests were carried out on the data. The results from the two versions compared well for all ten parameters. Also, no evidence of a statistically significant difference was found between the resting and steady-state results of the present CBRMS (Version 4) and the previous CBRMS (Version 3). This evidence suggests that Version 3 (Pascal) has been successfully converted to Version 4 (C). Implementation of the CBRMS in C on a PC has several advantages.
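As a rough sketch of the kind of statistical comparison described (synthetic, hypothetical values, not the study's measurements), the analysis-of-variance, regression, and correlation tests can be run with SciPy:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-ins for one respiratory parameter (e.g. tidal volume,
# in litres) measured breath-by-breath with Version 3 and Version 4.
v3 = rng.normal(0.50, 0.05, size=60)
v4 = v3 + rng.normal(0.0, 0.01, size=60)   # paired re-measurement

f_stat, p_anova = stats.f_oneway(v3, v4)                       # analysis of variance
slope, intercept, r_val, p_reg, se = stats.linregress(v3, v4)  # regression
r_corr, p_corr = stats.pearsonr(v3, v4)                        # correlation

print(f"ANOVA p = {p_anova:.3f} (no significant difference if p > 0.05)")
print(f"slope = {slope:.2f}, r = {r_corr:.3f}")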
Program package for multicanonical simulations of U(1) lattice gauge theory-Second version
NASA Astrophysics Data System (ADS)
Bazavov, Alexei; Berg, Bernd A.
2013-03-01
A new version, STMC_U1MUCA_v1_1, of our program package is available. It eliminates compatibility problems of our Fortran 77 code, originally developed for the g77 compiler, with Fortran 90 and 95 compilers. New version program summary. Program title: STMC_U1MUCA_v1_1 Catalogue identifier: AEET_v1_1 Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html Programming language: Fortran 77 compatible with Fortran 90 and 95 Computers: Any capable of compiling and executing Fortran code Operating systems: Any capable of compiling and executing Fortran code RAM: 10 MB and up, depending on the lattice size used No. of lines in distributed program, including test data, etc.: 15059 No. of bytes in distributed program, including test data, etc.: 215733 Keywords: Markov chain Monte Carlo, multicanonical, Wang-Landau recursion, Fortran, lattice gauge theory, U(1) gauge group, phase transitions of continuous systems Classification: 11.5 Catalogue identifier of previous version: AEET_v1_0 Journal reference of previous version: Computer Physics Communications 180 (2009) 2339-2347 Does the new version supersede the previous version?: Yes Nature of problem: Efficient Markov chain Monte Carlo simulation of U(1) lattice gauge theory (or other continuous systems) close to its phase transition. Measurements and analysis of the action per plaquette, the specific heat, Polyakov loops and their structure factors. Solution method: Multicanonical simulations with an initial Wang-Landau recursion to determine suitable weight factors. Reweighting to physical values using logarithmic coding and calculating jackknife error bars. Reasons for the new version: The previous version was developed for the g77 Fortran 77 compiler. Compiler errors were encountered with Fortran 90 and Fortran 95 compilers (specified below). Summary of revisions: epsilon=one/10**10 is replaced by epsilon/10.0D10 in the parameter statements of the subroutines u1_bmha.f, u1_mucabmha.f, u1wl_backup.f, u1wlread_backup.f of the folder Libs/U1_par. For the tested compilers, script files are added in the folder ExampleRuns, and readme.txt files are now provided in all subfolders of ExampleRuns. The gnuplot driver files produced by the routine hist_gnu.f of Libs/Fortran are adapted to the syntax required by gnuplot version 4.0 and higher. Restrictions: Due to the use of explicit real*8 initialization, conversion to real*4 will require extra changes besides replacing the implicit.sta file by its real*4 version. Unusual features: The programs have to be compiled using the script files like those contained in the folder ExampleRuns, as explained in the original paper. Running time: The prepared test runs took up to 74 minutes to execute on a 2 GHz PC.
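The jackknife error bars named in the solution method follow the standard delete-one prescription; a minimal Python sketch (illustrative only, not the package's Fortran routines):

import numpy as np

def jackknife(data, estimator=np.mean):
    """Delete-one jackknife error bar for an estimator over MC data."""
    data = np.asarray(data)
    n = len(data)
    # Estimate with the i-th measurement removed, for each i.
    loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
    # The (n-1)/n factor inflates the spread of the leave-one-out
    # estimates to the variance of the full-sample estimator.
    var = (n - 1) / n * np.sum((loo - loo.mean()) ** 2)
    return estimator(data), np.sqrt(var)

# Example: (fake) Monte Carlo series for the action per plaquette.
rng = np.random.default_rng(1)
mean, err = jackknife(rng.normal(0.65, 0.01, size=500))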
High-Performance Java Codes for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2001-01-01
The computational science community is reluctant to write large-scale, computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.
Dr TIM: Ray-tracer TIM, with additional specialist scientific capabilities
NASA Astrophysics Data System (ADS)
Oxburgh, Stephen; Tyc, Tomáš; Courtial, Johannes
2014-03-01
We describe several extensions to TIM, a raytracing program for ray-optics research. These include relativistic raytracing; simulation of the external appearance of Eaton lenses, Luneburg lenses and generalised focusing gradient-index (GGRIN) lenses, which are types of perfect imaging devices; raytracing through interfaces between spaces with different optical metrics; and refraction with generalised confocal lenslet arrays, which are particularly versatile METATOYs. Catalogue identifier: AEKY_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKY_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License No. of lines in distributed program, including test data, etc.: 106905 No. of bytes in distributed program, including test data, etc.: 6327715 Distribution format: tar.gz Programming language: Java. Computer: Any computer capable of running the Java Virtual Machine (JVM) 1.6. Operating system: Any; developed under Mac OS X Versions 10.6 and 10.8.3. RAM: Typically 130 MB (interactive version running under Mac OS X Version 10.8.3) Classification: 14, 18. Catalogue identifier of previous version: AEKY_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 711 External routines: JAMA [1] (source code included) Does the new version supersede the previous version?: Yes Nature of problem: Visualisation of scenes that include scene objects that create wave-optically forbidden light-ray fields. Solution method: Ray tracing. Reasons for new version: Significant extension of the capabilities (see Summary of revisions), as demanded by our research. Summary of revisions: Added capabilities include the simulation of different types of camera moving at relativistic speeds relative to the scene; visualisation of the external appearance of generalised focusing gradient-index (GGRIN) lenses, including Maxwell fisheye, Eaton and Luneburg lenses; calculation of refraction at the interface between spaces with different optical metrics; and handling of generalised confocal lenslet arrays (gCLAs), a new type of METATOY. Unusual features: Specifically designed to visualise wave-optically forbidden light-ray fields; can visualise ray trajectories and geometric optic transformations; can simulate photos taken with different types of camera moving at relativistic speeds, interfaces between spaces with different optical metrics, the view through METATOYs and generalised focusing gradient-index lenses; can create anaglyphs (for viewing with coloured "3D glasses"), HDMI-1.4a standard 3D images, and random-dot autostereograms of the scene; integrable into web pages. Running time: Problem-dependent; typically seconds for a simple scene. References: [1] JAMA: A Java Matrix Package, http://math.nist.gov/javanumerics/jama/
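The relativistic-camera feature rests on the standard aberration of light; a minimal Python sketch of that underlying formula (an illustration of the physics only, not TIM's Java API):

import math

def aberrated_angle(theta, beta):
    """Angle between a light ray and the direction of motion, mapped
    from the scene frame (theta, radians) to the frame of a camera
    moving with speed beta = v/c; the textbook aberration formula."""
    cos_t = math.cos(theta)
    return math.acos((cos_t + beta) / (1.0 + beta * cos_t))

# A ray arriving at 90 degrees in the scene frame appears pushed
# forward (the "headlight effect") at half the speed of light:
print(math.degrees(aberrated_angle(math.pi / 2, 0.5)))  # ~60 degrees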
The 1991 version of the plume impingement computer program. Volume 2: User's input guide
NASA Technical Reports Server (NTRS)
Bender, Robert L.; Somers, Richard E.; Prendergast, Maurice J.; Clayton, Joseph P.; Smith, Sheldon D.
1991-01-01
The Plume Impingement Program (PLIMP) is a computer code used to predict impact pressures, forces, moments, heating rates, and contamination on surfaces exposed to direct impingement flowfields. Typically, it has been used to analyze the effects of rocket exhaust plumes on nearby structures from ground level to the vacuum of space. The program normally uses flowfields generated by the MOC, RAMP2, SPF/2, or SFPGEN computer programs. It is capable of analyzing gaseous and gas/particle flows. A number of simple subshapes are available to model the surfaces of any structure. The original PLIMP program has been modified many times over the last 20 years. The theoretical bases for the referenced major changes, and additional undocumented changes and enhancements since 1988, are summarized in Volume 1 of this report. This volume is the User's Input Guide and should be substituted for all previous guides when running the latest version of the program. This version can operate on VAX and UNIX machines with NCAR graphics capability.
A general spectral method for the numerical simulation of one-dimensional interacting fermions
NASA Astrophysics Data System (ADS)
Clason, Christian; von Winckel, Gregory
2012-08-01
This software implements a general framework for the direct numerical simulation of systems of interacting fermions in one spatial dimension. The approach is based on a specially adapted nodal spectral Galerkin method, where the basis functions are constructed to obey the antisymmetry relations of fermionic wave functions. An efficient Matlab program for the assembly of the stiffness and potential matrices is presented, which exploits the combinatorial structure of the sparsity pattern arising from this discretization to achieve optimal run-time complexity. This program allows the accurate discretization of systems with multiple fermions subject to arbitrary potentials, e.g., for verifying the accuracy of multi-particle approximations such as Hartree-Fock in the few-particle limit. It can be used for eigenvalue computations or numerical solutions of the time-dependent Schrödinger equation. The new version includes a Python implementation of the presented approach. New version program summary. Program title: assembleFermiMatrix Catalogue identifier: AEKO_v1_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKO_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 332 No. of bytes in distributed program, including test data, etc.: 5418 Distribution format: tar.gz Programming language: MATLAB/GNU Octave, Python Computer: Any architecture supported by MATLAB, GNU Octave or Python Operating system: Any supported by MATLAB, GNU Octave or Python RAM: Depends on the data Classification: 4.3, 2.2. External routines: Python 2.7+, NumPy 1.3+, SciPy 0.10+ Catalogue identifier of previous version: AEKO_v1_0 Journal reference of previous version: Comput. Phys. Commun. 183 (2012) 405 Does the new version supersede the previous version?: Yes Nature of problem: The direct numerical solution of the multi-particle one-dimensional Schrödinger equation in a quantum well is challenging due to the exponential growth in the number of degrees of freedom with increasing particles. Solution method: A nodal spectral Galerkin scheme is used where the basis functions are constructed to obey the antisymmetry relations of the fermionic wave function. The assembly of these matrices is performed efficiently by exploiting the combinatorial structure of the sparsity patterns. Reasons for new version: A Python implementation is now included. Summary of revisions: Added a Python implementation; small documentation fixes in Matlab implementation. No change in features of the package. Restrictions: Only one-dimensional computational domains with homogeneous Dirichlet or periodic boundary conditions are supported. Running time: Seconds to minutes.
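The combinatorial indexing at the heart of the method can be illustrated with a toy stand-in (not the distributed Matlab/Python code): labelling basis states by strictly increasing index combinations builds the antisymmetry in, and a local one-body potential then assembles into a sparse diagonal matrix:

import itertools
import numpy as np
from scipy.sparse import diags

def potential_matrix(v_nodes, n_particles):
    """Many-body potential matrix for n identical fermions in a nodal
    basis: each basis state is a strictly increasing combination of
    node indices (a Slater determinant), so a potential that is local
    (diagonal) in the one-body basis stays diagonal, each entry being
    the sum of the node values over the occupied nodes."""
    basis = list(itertools.combinations(range(len(v_nodes)), n_particles))
    diagonal = np.array([sum(v_nodes[i] for i in occ) for occ in basis])
    return diags(diagonal), basis

# Two fermions on five nodes of a harmonic potential:
x = np.linspace(-2.0, 2.0, 5)
V, basis = potential_matrix(0.5 * x**2, 2)   # V is a 10 x 10 sparse matrix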
Galactic cosmic radiation exposure of pregnant aircrew members II
DOT National Transportation Integrated Search
2000-10-01
This report is an updated version of a previously published Technical Note in the journal Aviation, Space, and Environmental Medicine. The main change is that improved computer programs were used to estimate galactic cosmic radiation. The calculation...
User's Guide for TOUGH2-MP - A Massively Parallel Version of the TOUGH2 Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Earth Sciences Division; Zhang, Keni
TOUGH2-MP is a massively parallel (MP) version of the TOUGH2 code, designed for computationally efficient parallel simulation of isothermal and nonisothermal flows of multicomponent, multiphase fluids in one, two, and three-dimensional porous and fractured media. In recent years, computational requirements have become increasingly intensive in large or highly nonlinear problems for applications in areas such as radioactive waste disposal, CO2 geological sequestration, environmental assessment and remediation, reservoir engineering, and groundwater hydrology. The primary objective of developing the parallel-simulation capability is to significantly improve the computational performance of the TOUGH2 family of codes. The particular goal for the parallel simulator is to achieve orders-of-magnitude improvement in computational time for models with ever-increasing complexity. TOUGH2-MP is designed to perform parallel simulation on multi-CPU computational platforms. An earlier version of TOUGH2-MP (V1.0) was based on TOUGH2 Version 1.4 with EOS3, EOS9, and T2R3D modules, software previously qualified for applications in the Yucca Mountain project, and was designed for execution on CRAY T3E and IBM SP supercomputers. The current version of TOUGH2-MP (V2.0) includes all fluid property modules of the standard version TOUGH2 V2.0. It provides computationally efficient capabilities using supercomputers, Linux clusters, or multi-core PCs, and also offers many user-friendly features. The parallel simulator inherits all process capabilities from V2.0 together with additional capabilities for handling fractured media from V1.4. This report provides a quick-start guide on how to set up and run the TOUGH2-MP program for users with a basic knowledge of running the (standard) version of the TOUGH2 code. The report also gives a brief technical description of the code, including a discussion of parallel methodology, code structure, and the mathematical and numerical methods used. To familiarize users with the parallel code, illustrative sample problems are presented.
Production Experiences with the Cray-Enabled TORQUE Resource Manager
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ezell, Matthew A; Maxwell, Don E; Beer, David
High performance computing resources utilize batch systems to manage the user workload. Cray systems are uniquely different from typical clusters due to Cray's Application Level Placement Scheduler (ALPS). ALPS manages binary transfer, job launch and monitoring, and error handling. Batch systems require special support to integrate with ALPS using an XML protocol called BASIL. Previous versions of Adaptive Computing's TORQUE and Moab batch suite integrated with ALPS from within Moab, using Perl scripts to interface with BASIL. This would occasionally lead to problems when all the components became unsynchronized. Version 4.1 of the TORQUE Resource Manager introduced new features that allow it to integrate directly with ALPS using BASIL. This paper describes production experiences at Oak Ridge National Laboratory using the new TORQUE software versions, as well as ongoing and future work to improve TORQUE.
NASA Astrophysics Data System (ADS)
Chang, Chao-Hsi; Wang, Jian-Xiong; Wu, Xing-Gang
2010-06-01
An upgraded (second) version of the package GENXICC (A Generator for Hadronic Production of the Double Heavy Baryons Ξcc, Ξbc and Ξbb by C.H. Chang, J.X. Wang and X.G. Wu [its first version in: Comput. Phys. Comm. 177 (2007) 467]) is presented. Users, with this version implemented in PYTHIA and a GNU C compiler, may conveniently simulate full events of these processes in various experimental environments. In comparison with the previous version, in order to implement it in PYTHIA properly, a subprogram for the fragmentation of the produced double heavy diquark into the relevant baryon is supplied, and the interface of the generator to PYTHIA is changed accordingly. In the subprogram, with explanation, certain necessary assumptions (approximations) are made in order to conserve the momenta and the QCD 'color' flow in the fragmentation. Program summary. Program title: GENXICC2.0 Catalogue identifier: ADZJ_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZJ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 102 482 No. of bytes in distributed program, including test data, etc.: 1 469 519 Distribution format: tar.gz Programming language: Fortran 77/90 Computer: Any Linux-based PC with FORTRAN 77 or FORTRAN 90 and a GNU C compiler Operating system: Linux RAM: About 2.0 MByte Classification: 11.2 Catalogue identifier of previous version: ADZJ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 177 (2007) 467 Does the new version supersede the previous version?: No Nature of problem: Hadronic production of the double heavy baryons Ξcc, Ξbc and Ξbb Solution method: The code is based on the NRQCD framework. With proper options, it can generate weighted and un-weighted events of hadronic double heavy baryon production. When the hadronizations of the produced jets and the double heavy diquark are taken into account in the production, the upgraded version, with a proper interface to PYTHIA, can generate full events. Reasons for new version: Responding to feedback from users, we improve the generator mainly by carefully completing the 'final non-perturbative process', i.e. the formation of the double heavy baryon from the relevant intermediate diquark. In the present version, the information about momentum flow and color flow that is necessary for PYTHIA to generate full events is retained, although reasonable approximations are made. In comparison with the original version, the upgraded one can be implemented in PYTHIA properly to do full event simulation of double heavy baryon production. Summary of revisions: We try to explain the treatment of the momentum distribution of the process more clearly than in the original version, and show precisely how the final baryon is generated through the typical intermediate diquark. We present the color flow of the involved processes precisely; the corresponding changes to the program are explained in the paper. Restrictions: The color flow, particularly in the piece of code programming the fragmentation of the produced colorful double heavy diquark into the relevant double heavy baryon, is treated carefully so as to implement it in PYTHIA properly.
Running time: It depends on which option is chosen to configure PYTHIA when generating full events and also on which mechanism is chosen to generate the events. Typically, for the most complicated case, the gluon-gluon fusion mechanism generating mixed events via the intermediate diquark in (cc)[³S₁] and (cc)[¹S₀] states, generating 1000 events under the option IDWTUP=1 takes about 20 hours on a 1.8 GHz Intel P4 machine, whereas under the option IDWTUP=3, even generating 10⁶ events takes about 40 minutes on the same machine.
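The distinction between weighted and un-weighted events is generic to Monte Carlo generators; a minimal Python sketch of hit-or-miss unweighting (an illustration of the technique, not GENXICC's Fortran):

import numpy as np

def unweight(events, weights, seed=0):
    """Hit-or-miss unweighting: keep each event with probability
    w / w_max, so the accepted sample follows the target distribution
    with unit weights (at the price of discarding events)."""
    rng = np.random.default_rng(seed)
    w = np.asarray(weights, dtype=float)
    keep = rng.random(len(w)) < w / w.max()
    return [e for e, k in zip(events, keep) if k]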
Modular reweighting software for statistical mechanical analysis of biased equilibrium data
NASA Astrophysics Data System (ADS)
Sindhikara, Daniel J.
2012-07-01
Here a simple, useful, modular approach and software suite designed for statistical reweighting and analysis of equilibrium ensembles is presented. Statistical reweighting is useful, and sometimes necessary, for the analysis of equilibrium enhanced sampling methods, such as umbrella sampling or replica exchange, and also in experimental cases where biasing factors are explicitly known. Essentially, statistical reweighting allows extrapolation of data from one or more equilibrium ensembles to another. Here, the fundamental separable steps of statistical reweighting are broken up into modules, allowing for application to the general case and avoiding the black-box nature of some "all-inclusive" reweighting programs. Additionally, the programs included are, by design, written with few dependencies. The compilers required are either pre-installed on most systems or freely available for download with minimal trouble. Examples of the use of this suite applied to umbrella sampling and replica exchange molecular dynamics simulations are shown, along with advice on how to apply it in the general case. New version program summary. Program title: Modular reweighting version 2 Catalogue identifier: AEJH_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJH_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 179 118 No. of bytes in distributed program, including test data, etc.: 8 518 178 Distribution format: tar.gz Programming language: C++, Python 2.6+, Perl 5+ Computer: Any Operating system: Any RAM: 50-500 MB Supplementary material: An updated version of the original manuscript (Comput. Phys. Commun. 182 (2011) 2227) is available Classification: 4.13 Catalogue identifier of previous version: AEJH_v1_0 Journal reference of previous version: Comput. Phys. Commun. 182 (2011) 2227 Does the new version supersede the previous version?: Yes Nature of problem: While equilibrium reweighting is ubiquitous, there are no public programs available to perform the reweighting in the general case. Further, specific programs often suffer from many library dependencies and numerical instability. Solution method: This package is written in a modular format that allows for easy applicability of reweighting in the general case. Modules are small, numerically stable, and require minimal libraries. Reasons for new version: Some minor bugs fixed, some upgrades needed, and error analysis added. Summary of revisions: analyzeweight.py/analyzeweight.py2 has been replaced by "multihist.py"; this new program performs all the functions of its predecessor while being versatile enough to handle other types of histograms and probability analysis. "bootstrap.py" was added; this script performs basic bootstrap resampling, allowing for error analysis of data. "avg_dev_distribution.py" was added; this program computes the averages and standard deviations of multiple distributions, making error analysis (e.g. from bootstrap resampling) easier to visualize. WRE.cpp was slightly modified, purely for cosmetic reasons. The manual was updated for clarity and to reflect version updates. Examples were removed from the manual in favor of online tutorials (packaged examples remain). Examples were updated to reflect the new format. An additional example is included to demonstrate error analysis. Running time: Preprocessing scripts 1-5 minutes, WHAM engine <1 minute, postprocessing script ∼1-5 minutes.
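The basic bootstrap performed by a script such as bootstrap.py can be restated in a few lines of Python (illustrative only; the packaged script should be used for real analyses):

import numpy as np

def bootstrap(data, estimator=np.mean, n_resamples=1000, seed=0):
    """Resample the data with replacement many times and report the
    spread of the estimator over the resamples as its error bar."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    resampled = np.array([estimator(rng.choice(data, size=len(data)))
                          for _ in range(n_resamples)])
    return estimator(data), resampled.std(ddof=1)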
NASA Technical Reports Server (NTRS)
Craidon, C. B.
1983-01-01
A computer program was developed to extend the geometry input capabilities of previous versions of a supersonic zero lift wave drag computer program. The arbitrary geometry input description is flexible enough to describe almost any complex aircraft concept, so that highly accurate wave drag analysis can now be performed because complex geometries can be represented accurately and do not have to be modified to meet the requirements of a restricted input format.
Comprehensive Micromechanics-Analysis Code - Version 4.0
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Bednarcyk, B. A.
2005-01-01
Version 4.0 of the Micromechanics Analysis Code With Generalized Method of Cells (MAC/GMC) has been developed as an improved means of computational simulation of advanced composite materials. The previous version of MAC/GMC was described in "Comprehensive Micromechanics-Analysis Code" (LEW-16870), NASA Tech Briefs, Vol. 24, No. 6 (June 2000), page 38. To recapitulate: MAC/GMC is a computer program that predicts the elastic and inelastic thermomechanical responses of continuous and discontinuous composite materials with arbitrary internal microstructures and reinforcement shapes. The predictive capability of MAC/GMC rests on a model known as the generalized method of cells (GMC) - a continuum-based model of micromechanics that provides closed-form expressions for the macroscopic response of a composite material in terms of the properties, sizes, shapes, and responses of the individual constituents or phases that make up the material. Enhancements in version 4.0 include a capability for modeling thermomechanically and electromagnetically coupled ("smart") materials; a more-accurate (high-fidelity) version of the GMC; a capability to simulate discontinuous plies within a laminate; additional constitutive models of materials; expanded yield-surface-analysis capabilities; and expanded failure-analysis and life-prediction capabilities on both the microscopic and macroscopic scales.
NASA Astrophysics Data System (ADS)
Miszczak, Jarosław Adam
2013-01-01
The presented package for the Mathematica computing system allows the harnessing of quantum random number generators (QRNG) for investigating the statistical properties of quantum states. The described package implements a number of functions for generating random states. The new version of the package adds the ability to use the on-line quantum random number generator service and implements new functions for retrieving lists of random numbers. Thanks to the introduced improvements, the new version provides faster access to high-quality sources of random numbers and can be used in simulations requiring large amounts of random data. New version program summary. Program title: TRQS Catalogue identifier: AEKA_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKA_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 18 134 No. of bytes in distributed program, including test data, etc.: 2 520 49 Distribution format: tar.gz Programming language: Mathematica, C. Computer: Any supporting Mathematica in version 7 or higher. Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit). RAM: Case-dependent Supplementary material: Fig. 1 mentioned below can be downloaded. Classification: 4.15. External routines: Quantis software library (http://www.idquantique.com/support/quantis-trng.html) Catalogue identifier of previous version: AEKA_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 118 Does the new version supersede the previous version?: Yes Nature of problem: Generation of random density matrices and utilization of high-quality random numbers for the purpose of computer simulation. Solution method: Use of a physical quantum random number generator and an on-line service providing access to a source of true random numbers generated by a quantum random number generator. Reasons for new version: Added support for the high-speed on-line quantum random number generator and improved methods for retrieving lists of random numbers. Summary of revisions: The presented version provides two significant improvements. The first is the ability to use the on-line Quantum Random Number Generation service developed by PicoQuant GmbH and the Nano-Optics groups at the Department of Physics of Humboldt University. The on-line service supported in version 2.0 of the TRQS package provides faster access to true randomness sources constructed using the laws of quantum physics. The service is freely available at https://qrng.physik.hu-berlin.de/. The use of this service allows the presented package to be used without the need for a physical quantum random number generator. The second improvement introduced in this version is the ability to retrieve arrays of random data directly from the source in use. This increases the speed of random number generation, especially in the case of the on-line service, where it reduces the time spent establishing connections. Thanks to the speed improvement of the presented version, the package can now be used in simulations requiring larger amounts of random data. Moreover, the functions for generating random numbers provided by the current version of the package more closely follow the pattern of the functions for generating pseudo-random numbers provided in Mathematica.
Additional comments: Speed comparison: The implementation of support for the QRNG on-line service provides a noticeable improvement in the speed of random number generation. For samples of real numbers of size 10¹, 10²,…,10⁷, the times required to generate these samples using the Quantis USB device and the QRNG service are compared in Fig. 1. The presented results show that the use of the on-line service provides faster access to random numbers. One should note, however, that the speed gain can increase or decrease depending on the connection speed between the computer and the server providing random numbers. Running time: Depends on the source of randomness used and the amount of random data used in the experiment. References: [1] M. Wahl, M. Leifgen, M. Berlin, T. Röhlicke, H.-J. Rahn, O. Benson, An ultrafast quantum random number generator with provably bounded output bias based on photon arrival time measurements, Appl. Phys. Lett. 98 (2011) 171105. http://dx.doi.org/10.1063/1.3578456.
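Whatever the source, a retrieved block of raw random bytes still has to be mapped onto usable numbers; a minimal Python sketch of that last step (os.urandom standing in for a Quantis device or the on-line service; this is not the package's Mathematica interface):

import os
import numpy as np

def uniforms_from_bytes(n, source=os.urandom):
    """Turn 8*n raw random bytes into n uniform doubles in [0, 1).
    Only the top 53 bits of each 64-bit word are kept, so that the
    division is exact in double precision."""
    raw = np.frombuffer(source(8 * n), dtype=np.uint64)
    return (raw >> np.uint64(11)) / np.float64(2**53)

print(uniforms_from_bytes(5))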
Revised and extended UTILITIES for the RATIP package
NASA Astrophysics Data System (ADS)
Nikkinen, J.; Fritzsche, S.; Heinäsmäki, S.
2006-09-01
In recent years, the RATIP package has been found useful for calculating the excitation and decay properties of free atoms. Based on the (relativistic) multiconfiguration Dirac-Fock method, this program is used to obtain accurate predictions of atomic properties and to analyze many recent experiments. Daily work with this package made an extension of its UTILITIES [S. Fritzsche, Comput. Phys. Comm. 141 (2001) 163] desirable in order to facilitate the data handling and interpretation of complex spectra. For this purpose, we make available an enlarged version of the UTILITIES which mainly supports comparison with experiment as well as large Auger computations. Altogether, 13 additional tasks have been appended to the program, together with a new menu structure to improve the interactive control of the program. Program summary. Title of program: RATIP Catalogue identifier: ADPD_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADPD_v2_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: none Reference in CPC to previous version: S. Fritzsche, Comput. Phys. Comm. 141 (2001) 163 Catalogue identifier of previous version: ADPD Authors of previous version: S. Fritzsche, Department of Physics, University of Kassel, Heinrich-Plett-Strasse 40, D-34132 Kassel, Germany Does the new version supersede the original program?: Yes Computer for which the new version is designed and others on which it has been tested: IBM RS 6000, PC Pentium II-IV Installations: University of Kassel (Germany), University of Oulu (Finland) Operating systems: IBM AIX, Linux, Unix Program language used in the new version: ANSI standard Fortran 90/95 Memory required to execute with typical data: 300 kB No. of bits in a word: All real variables are parameterized by a selected kind parameter and, thus, can be adapted to any required precision if supported by the compiler. Currently, the kind parameter is set to double precision (two 32-bit words), as used also for other components of the RATIP package [S. Fritzsche, C.F. Fischer, C.Z. Dong, Comput. Phys. Comm. 124 (2000) 341; G. Gaigalas, S. Fritzsche, Comput. Phys. Comm. 134 (2001) 86; S. Fritzsche, Comput. Phys. Comm. 141 (2001) 163; S. Fritzsche, J. Elec. Spec. Rel. Phen. 114-116 (2001) 1155] No. of lines in distributed program, including test data, etc.: 231 813 No. of bytes in distributed program, including test data, etc.: 3 977 387 Distribution format: tar.gzip file Nature of the physical problem: In order to describe atomic excitation and decay properties quantitatively, large-scale computations are often needed. In the framework of the RATIP package, the UTILITIES support a variety of (small) tasks. For example, these tasks facilitate the file and data handling in large-scale applications or in the interpretation of complex spectra. Method of solution: The revised UTILITIES now support a total of 29 subtasks which are mainly concerned with the manipulation of output data as obtained from other components of the RATIP package. Each of these tasks is realized by one or several subprocedures which have access to the corresponding modules of the main components. While the main menu defines seven groups of subtasks for data manipulations and computations, a particular task is selected from one of these group menus. This allows the program to be enlarged later, should support for further tasks become necessary.
For each selected task, an interactive dialog about the required input and output data, as well as some additional information, is printed during the execution of the program. Reasons for the new version: The requirement to enlarge the previous version of the UTILITIES [S. Fritzsche, Comput. Phys. Comm. 141 (2001) 163] arose from the recent application of the RATIP package to large-scale radiative and Auger computations. A number of new subtasks now refer to the handling of Auger amplitudes and their proper combination in order to facilitate the interpretation of complex spectra. A few further tasks, such as direct access to the one-electron matrix elements for some given set of orbital functions, have been found useful also in the analysis of data. Summary of revisions: The UTILITIES support the extraction and handling of atomic data within the framework of RATIP. With the revised version, we add another 13 tasks which refer to the manipulation of data files, the generation and interpretation of Auger spectra, the computation of various one- and two-electron matrix elements, as well as the evaluation of momentum densities and grid parameters. Owing to the rather large number of subtasks, the main menu has been divided into seven groups from which the individual tasks can be selected, much as before. Typical running time: The program responds promptly for most of the tasks. The response time for some tasks, such as the generation of a relativistic momentum density, depends strongly on the size of the corresponding data files and the number of grid points. Unusual features of the program: A total of 29 different tasks are supported by the program. Starting from the main menu, the user is guided interactively through the program by a dialog and a few additional explanations. For each task, a short summary of its function is displayed before the program prompts for all the required input data.
Planetary Mission Entry Vehicles Quick Reference Guide. Version 3.0
NASA Technical Reports Server (NTRS)
Davies, Carol; Arcadi, Marla
2006-01-01
This is Version 3.0 of the planetary mission entry vehicle document. Three new missions, Re-entry F, Hayabusa, and ARD, have been added to the previously published edition (Version 2.1). In addition, the Huygens mission has been significantly updated and some Apollo data corrected. Due to the changing nature of planetary vehicles during the design, manufacture and mission phases, and to the variables involved in measurement and computation, please be aware that the data provided herein cannot be guaranteed. Contact Carol Davies at cdavies@mail.arc.nasa.gov to correct or update the current data, or to suggest other missions.
NASA Astrophysics Data System (ADS)
Cipolla, Sam J.
2009-09-01
New version program summary. Program title: ISICS2008 Catalogue identifier: ADDS_v4_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADDS_v4_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5420 No. of bytes in distributed program, including test data, etc.: 107 669 Distribution format: tar.gz Programming language: C Computer: 80486 or higher-level PCs Operating system: Windows XP and all earlier operating systems Classification: 16.7 Catalogue identifier of previous version: ADDS_v3_0 Journal reference of previous version: Comput. Phys. Comm. 179 (2008) 616 Does the new version supersede the previous version?: Yes Nature of problem: Ionization and X-ray production cross section calculations for ion-atom collisions. Solution method: Numerical integration of the form factor using a logarithmic transform and Gaussian quadrature, plus exact integration limits. Reasons for new version: Addition of a relativistic treatment of both the projectile and K-shell electrons. Summary of revisions: A new addition to ISICS is the option (R) to calculate ECPSSR cross sections that account for the relativistic treatment of both the projectile and the K-shell electron, as proposed recently by Lapicki [1], according to σ_K^(R-ECPSSR) = C·(1+0.07(…))·σ(√(m_K^R υ_1^R)/Z, ςθ), where υ_1^R is the relativistic projectile velocity. The option can also be invoked in calculating ECPSShsR, where hsR stands for the Hartree-Slater description of the K-shell electron, which was already incorporated into ISICS2006 [2,3], and is now expressed in this option as σ_K^(R-ECPSShsR) = C·hsR(2υ_1^R/(Zςθ), Z/137)·(1+0.07(…))·σ(υ_1^R/Z, ςθ), using the function hsR that is already incorporated into ISICS2006. It should be noted that these expressions are corrected versions [4] of the ones published in Ref. [1]. In this new version, ISICS2008, the option line in the main menu that read "Use Relativistic Proj. velocity" has been replaced by "R option for K-shell … Uses Rel. Proj. vel.". As before, various combinations of options can be utilized, and each is denoted in the output. Restrictions: The consumed CPU time increases with the atomic shell (K, L, M), but execution is still very fast. Additional comments: A revised User Manual is included in the distribution file. Running time: This depends on which shell and the number of different energies are used in the calculation. The running time is not significantly changed from the previous version. As before, calculating K-shell cross sections for protons striking carbon at 19 different proton energies took less than 10 s; calculating M-shell cross sections for protons on gold at 21 proton energies took 4.2 min. References: [1] G. Lapicki, J. Phys. B: At. Mol. Opt. Phys. 41 (2008) 115201. [2] S. Cipolla, Comput. Phys. Comm. 176 (2007) 157. [3] S. Cipolla, Nucl. Instrum. Methods Phys. Res. B 261 (2007) 142. [4] G. Lapicki, private communication.
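The quoted solution method, a logarithmic transform followed by Gaussian quadrature, can be sketched generically (an illustration of the technique, not the ISICS source):

import numpy as np

def gauss_log_integrate(f, a, b, n=48):
    """Integrate f over [a, b] with 0 < a < b after the substitution
    x = exp(t): the integral becomes that of f(e^t) e^t over
    [ln a, ln b], here evaluated by n-point Gauss-Legendre quadrature."""
    t, w = np.polynomial.legendre.leggauss(n)
    lo, hi = np.log(a), np.log(b)
    u = 0.5 * (hi - lo) * t + 0.5 * (hi + lo)   # nodes mapped to [ln a, ln b]
    return 0.5 * (hi - lo) * np.sum(w * f(np.exp(u)) * np.exp(u))

# Sanity check against a known value: the integral of 1/x from 1 to 10.
print(gauss_log_integrate(lambda x: 1.0 / x, 1.0, 10.0), np.log(10.0))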
NASA Astrophysics Data System (ADS)
McConnell, Sean; Fritzsche, Stephan; Surzhykov, Andrey
2010-03-01
During recent years, the DIRAC package has proved to be an efficient tool for studying the structural properties and dynamic behavior of hydrogen-like ions. Originally designed as a set of MAPLE procedures, this package provides interactive access to the wave and Green's functions in the non-relativistic and relativistic frameworks and supports analytical evaluation of a large number of radial integrals that are required for the construction of transition amplitudes and interaction cross sections. We provide here a new version of the DIRAC program which is developed within the framework of MATHEMATICA (version 6.0). This new version aims to cater to a wider community of researchers that use the MATHEMATICA platform and to take advantage of the generally faster processing times therein. Moreover, the addition of new procedures, a more convenient and detailed help system, as well as source code revisions to overcome identified shortcomings, should ensure expanded use of the new DIRAC program over its predecessor. New version program summary. Program title: DIRAC Catalogue identifier: ADUQ_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUQ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 45 073 No. of bytes in distributed program, including test data, etc.: 285 828 Distribution format: tar.gz Programming language: Mathematica 6.0 or higher Computer: All computers with a license for the computer algebra package Mathematica (version 6.0 or higher) Operating system: Mathematica is O/S independent Classification: 2.1 Catalogue identifier of previous version: ADUQ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 165 (2005) 139 Does the new version supersede the previous version?: Yes Nature of problem: Since the early days of quantum mechanics, the "hydrogen atom" has served as one of the key models for studying the structure and dynamics of various quantum systems. Its analytic solutions are frequently used in case studies in atomic and molecular physics, quantum optics, plasma physics, or even in the field of quantum information and computation. Fast and reliable access to the functions and properties of hydrogenic systems is frequently required, in both the non-relativistic and relativistic frameworks. Despite all the knowledge about one-electron ions, providing such access is not a simple task, owing to the rather complicated mathematical structure of the Schrödinger and especially Dirac equations. Moreover, for analyzing experimental results as well as for performing advanced theoretical studies, one often needs (apart from detailed information on atomic wave- and Green's functions) to be able to calculate a number of integrals involving these functions. Although for many types of transition operators these integrals can be evaluated analytically in terms of special mathematical functions, such an evaluation is usually rather involved and prone to mistakes. Solution method: A set of Mathematica procedures is developed which provides both the non-relativistic and relativistic solutions of the "hydrogen atom model". It facilitates, moreover, the symbolic evaluation of integrals involved in the calculations of cross sections and transition amplitudes.
These procedures are based on a large number of relations among special mathematical functions, information about their integral representations, recurrence formulae and series expansions. Based on this knowledge, the DIRAC tools provide fast and reliable algebraic (and, if necessary, numeric) manipulation of the functions and properties of one-electron systems, thus helping to obtain further insight into the behavior of quantum physical systems. Reasons for new version: The original version of the DIRAC program was developed as a toolbox of Maple procedures and was submitted to the CPC library in 2004 (cf. Ref. [1]). Since then DIRAC has found its niche in advanced theoretical studies carried out in the realm of heavy ion physics. With the help of this program, detailed analyses have been performed, in particular, of the various excitation and ionization processes occurring in relativistic ion-atom collisions [2], the polarization of the characteristic X-ray radiation following radiative electron capture [3], the correlation properties of the two-photon emission from few-electron heavy ions [4], the spin entanglement phenomena in atomic photoionization [5], and even the vibrational excitations of heavy nuclei [6]. Although these studies have conclusively proven the potential of the program, they have also illuminated routes for its further enhancement. Apart from certain source code revisions, demand has grown for a new version of DIRAC compatible with the Mathematica platform. The version presented here includes a wider-ranging and more user-friendly interactive help system, a number of new procedures, and reprogramming for greater computational efficiency. Summary of revisions: The most important new capabilities of the DIRAC program since the previous version are: The utilization of the Mathematica (version 6.0) platform. The addition of a number of new procedures. Since the complete list of the new (and updated) procedures can be found in the interactive help library of the program, we mention here only the most important ones: DiracGlobal[] - Displays a list of the current global settings which specify the framework, nuclear charge and the units which are to be used by the DIRAC program. DiracRadialOrbitalMomentum[] - Returns a non-relativistic radial orbital in momentum space for both the bound and free electron states. DiracSlaterRadial[] - Evaluates the radial Slater integral with both the non-relativistic and relativistic wavefunctions. In the previous version of the program this procedure was restricted to the non-relativistic framework only. DiracGreensIntegralRadial[] - Evaluates the two-dimensional radial integrals with the wave- and Green's functions in both the non-relativistic and relativistic frameworks. DiracAngularMatrixElement[] - Calculates the angular matrix elements for various irreducible tensor operators. The elimination of some redundant procedures. In particular, the previous version supported evaluation of the spherical Bessel functions, Wigner 3j symbols, Clebsch-Gordan coefficients and spherical harmonics functions. These tools are now superseded by built-in procedures of Mathematica. The development of a full-featured interactive help system which follows the style of the Mathematica Help Pages. Extensive revision of the source code in order to correct a number of bugs and inconsistencies that had been identified during use of the previous version of DIRAC.
The DIRAC package is distributed as a compressed tar file from which the DIRAC root directory can be (re-)generated. The root directory contains the source code and help libraries, a "Readme" file, Dirac_Installation_Instructions, as well as the notebook DemonstrationNotebook.nb that includes a number of test cases to illustrate the use of the program. These test cases, which concern the theoretical analysis of wavefunctions and the fine structure of hydrogen-like ions, have already been discussed in detail in Ref. [1] and are provided here in order to underline the continuity between the previous (Maple) and new (Mathematica) versions of the DIRAC program. Unusual features: Even though all basic features of the previous Maple version have been retained in as close to the original form as possible, some small syntax changes became necessary in the new version of DIRAC in order to follow Mathematica standards. First of all, these changes concern naming conventions for DIRAC's procedures. As was discussed in Ref. [1], previously rather long names were employed in which each word was separated by an underscore. For example, when running the Maple version of the program one had to call the procedure Dirac_Slater_radial() in order to evaluate the Slater integral. Such a naming convention, however, cannot be used in the Mathematica framework, where the underscore character is reserved to represent Blank, a built-in symbol. In the new version of DIRAC we therefore follow the Mathematica convention of delimiting each word in a procedure's name by capitalization. Evaluation of the Slater integral can now be accomplished simply by entering DiracSlaterRadial[]. Besides procedure names, a new convention is introduced to represent fundamental physical constants. In this version of DIRAC the group of (preset) global variables has been changed to resemble their conventional symbols, specifically α, a, e, m, c and ℏ, being the fine structure constant, Bohr radius, electron charge, electron mass, speed of light and the Planck constant, respectively. If the numerical evaluator N is wrapped around any of these constants, their numerical values are returned. Running time: Although the program replies promptly to most requests, the running time depends on the particular task. For example, computation of (radial) matrix elements involving components of relativistic wavefunctions may require a few seconds of runtime. A number of test calculations performed on this and other tasks clearly indicate that the new version of DIRAC requires up to 90% less evaluation time than its predecessor. References: [1] A. Surzhykov, P. Koval, S. Fritzsche, Comput. Phys. Comm. 165 (2005) 139. [2] H. Ogawa, et al., Phys. Rev. A 75 (2007) 1. [3] A.V. Maiorova, et al., J. Phys. B: At. Mol. Opt. Phys. 42 (2009) 125003. [4] L. Borowska, A. Surzhykov, Th. Stöhlker, S. Fritzsche, Phys. Rev. A 74 (2006) 062516. [5] T. Radtke, S. Fritzsche, A. Surzhykov, Phys. Rev. A 74 (2006) 032709. [6] A. Pálffy, Z. Harman, A. Surzhykov, U.D. Jentschura, Phys. Rev. A 75 (2007) 012712.
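For orientation, the non-relativistic hydrogenic radial functions that such packages return symbolically have a standard closed form; a short numerical sketch in Python (the generic textbook formula, not the Mathematica package):

import numpy as np
from math import factorial
from scipy.special import genlaguerre

def radial_R(n, l, r, Z=1.0):
    """Bound-state radial function R_nl(r) of a hydrogen-like ion in
    atomic units, from the closed form with associated Laguerre
    polynomials."""
    rho = 2.0 * Z * r / n
    norm = np.sqrt((2.0 * Z / n) ** 3
                   * factorial(n - l - 1) / (2.0 * n * factorial(n + l)))
    return norm * np.exp(-rho / 2.0) * rho**l * genlaguerre(n - l - 1, 2 * l + 1)(rho)

# Normalization check: the integral of |R_10|^2 r^2 dr should be 1.
r = np.linspace(1e-6, 40.0, 20000)
print(np.sum(radial_R(1, 0, r) ** 2 * r**2) * (r[1] - r[0]))  # ~1.0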
User Manual for the NASA Glenn Ice Accretion Code LEWICE. Version 2.2.2
NASA Technical Reports Server (NTRS)
Wright, William B.
2002-01-01
A research project is underway at NASA Glenn to produce a computer code which can accurately predict ice growth under a wide range of meteorological conditions for any aircraft surface. This report presents a description of the code inputs and outputs for version 2.2.2 of this code, which is called LEWICE. This version differs from release 2.0 in the addition of advanced thermal analysis capabilities for de-icing and anti-icing applications using electrothermal heaters or bleed air. An extensive effort was also undertaken to compare the results against the database of electrothermal results generated in the NASA Glenn Icing Research Tunnel (IRT), as was done for the validation effort for version 2.0. This report primarily describes the features of the software related to the use of the program. Appendix A of this report has been included to list some of the inner workings of the software and the physical models used. This information is also available in the form of several unpublished documents internal to NASA. This report is intended as a replacement for all previous user manuals of LEWICE. In addition to describing the changes and improvements made for this version, information from previous manuals may be duplicated so that the user will not need to consult previous manuals to use this code.
Computational models for the viscous/inviscid analysis of jet aircraft exhaust plumes
NASA Astrophysics Data System (ADS)
Dash, S. M.; Pergament, H. S.; Thorpe, R. D.
1980-05-01
Computational models which analyze viscous/inviscid flow processes in jet aircraft exhaust plumes are discussed. These models are component parts of an NASA-LaRC method for the prediction of nozzle afterbody drag. Inviscid/shock processes are analyzed by the SCIPAC code which is a compact version of a generalized shock capturing, inviscid plume code (SCIPPY). The SCIPAC code analyzes underexpanded jet exhaust gas mixtures with a self-contained thermodynamic package for hydrocarbon exhaust products and air. A detailed and automated treatment of the embedded subsonic zones behind Mach discs is provided in this analysis. Mixing processes along the plume interface are analyzed by two upgraded versions of an overlaid, turbulent mixing code (BOAT) developed previously for calculating nearfield jet entrainment. The BOATAC program is a frozen chemistry version of BOAT containing the aircraft thermodynamic package as SCIPAC; BOATAB is an afterburning version with a self-contained aircraft (hydrocarbon/air) finite-rate chemistry package. The coupling of viscous and inviscid flow processes is achieved by an overlaid procedure with interactive effects accounted for by a displacement thickness type correction to the inviscid plume interface.
Maxdose-SR and popdose-SR routine release atmospheric dose models used at SRS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jannik, G. T.; Trimor, P. P.
MAXDOSE-SR and POPDOSE-SR are used to calculate dose to the offsite Reference Person and to the surrounding Savannah River Site (SRS) population, respectively, following routine releases of atmospheric radioactivity. These models are currently accessed through the Dose Model Version 2014 graphical user interface (GUI). MAXDOSE-SR and POPDOSE-SR are personal computer (PC) versions of MAXIGASP and POPGASP, which both resided on the SRS IBM mainframe. These two codes follow U.S. Nuclear Regulatory Commission (USNRC) Regulatory Guides 1.109 and 1.111 (1977a, 1977b). The basis for MAXDOSE-SR and POPDOSE-SR are the USNRC-developed codes XOQDOQ (Sagendorf et al. 1982) and GASPAR (Eckerman et al. 1980). Both of these codes have previously been verified for use at SRS (Simpkins 1999 and 2000). The revisions incorporated into MAXDOSE-SR and POPDOSE-SR Version 2014 (hereafter referred to as MAXDOSE-SR and POPDOSE-SR unless otherwise noted) were made per Computer Program Modification Tracker (CPMT) number Q-CMT-A-00016 (Appendix D). Version 2014 was verified for use at SRS in Dixon (2014).
New version of PLNoise: a package for exact numerical simulation of power-law noises
NASA Astrophysics Data System (ADS)
Milotti, Edoardo
2007-08-01
In a recent paper I have introduced a package for the exact simulation of power-law noises and other colored noises [E. Milotti, Comput. Phys. Comm. 175 (2006) 212]: in particular, the algorithm generates 1/f^α noises with 0<α⩽2. Here I extend the algorithm to generate 1/f^α noises with 2<α⩽4 (black noises). The method is exact in the sense that it produces a sampled process with a theoretically guaranteed range-limited power-law spectrum for any arbitrary sequence of sampling intervals, i.e. the sampling times may be unevenly spaced. Program summary: Title of program: PLNoise. Catalogue identifier: ADXV_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXV_v2_0.html. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Programming language used: ANSI C. Computer: Any computer with an ANSI C compiler: the package has been tested with gcc version 3.2.3 on Red Hat Linux 3.2.3-52 and gcc versions 4.0.0 and 4.0.1 on Apple Mac OS X 10.4. Operating system: All operating systems capable of running an ANSI C compiler. RAM: The code of the test program is very compact (about 60 Kbytes), but the program works with list management and allocates memory dynamically; in a typical run with average list length 2·10, the RAM taken by the list is 200 Kbytes. External routines: The package needs external routines to generate uniform and exponential deviates. The implementation described here uses the random number generation library ranlib, freely available from Netlib [B.W. Brown, J. Lovato, K. Russell: ranlib, available from Netlib, http://www.netlib.org/random/index.html, select the C version ranlib.c], but it has also been successfully tested with the random number routines in Numerical Recipes [W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, second ed., Cambridge Univ. Press, Cambridge, 1992, pp. 274-290]. Notice that ranlib requires a pair of routines from the linear algebra package LINPACK, and that the distribution of ranlib includes the C source of these routines in case LINPACK is not installed on the target machine. No. of lines in distributed program, including test data, etc.: 2975. No. of bytes in distributed program, including test data, etc.: 194 588. Distribution format: tar.gz. Catalogue identifier of previous version: ADXV_v1_0. Journal reference of previous version: Comput. Phys. Comm. 175 (2006) 212. Does the new version supersede the previous version?: Yes. Nature of problem: Exact generation of different types of colored noise. Solution method: Random superposition of relaxation processes [E. Milotti, Phys. Rev. E 72 (2005) 056701], possibly followed by an integration step to produce noise with spectral index >2. Reasons for the new version: Extension to 1/f^α noises with spectral index 2<α⩽4: the new version generates noises both with spectral index 0<α⩽2 and with spectral index 2<α⩽4. Summary of revisions: Although the overall structure remains the same, one routine has been added and several changes have been made throughout the code to include the new integration step. Unusual features: The algorithm is theoretically guaranteed to be exact, and unlike all other existing generators it can generate samples with uneven spacing. Additional comments: The program requires an initialization step; for some parameter sets this may become rather heavy.
Running time: Varies widely with the input parameters; however, in a test run like the one in Section 3 of the long write-up, the generation routine took on average about 75 μs per sample.
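The superposition-of-relaxations idea behind PLNoise can be illustrated with a toy, evenly sampled generator: equal-variance Ornstein-Uhlenbeck relaxators with log-spaced rates add up to an approximately 1/f spectrum between the two rate cutoffs. The Python sketch below is only illustrative; the published C package is exact, supports uneven sampling, and adds an integration step for spectral indices above 2.

    import numpy as np

    def pink_noise(n, dt, lam_min=1e-3, lam_max=1e2, n_relax=30, seed=0):
        """Approximate 1/f noise on an even time grid: a sum of equal-variance
        Ornstein-Uhlenbeck relaxators whose rates are log-uniformly spaced.
        Each relaxator has a Lorentzian spectrum; the log-spaced sum is ~1/f
        between lam_min and lam_max (toy version, not the exact algorithm)."""
        rng = np.random.default_rng(seed)
        lam = np.geomspace(lam_min, lam_max, n_relax)     # relaxation rates
        a = np.exp(-lam * dt)                             # one-step decay factors
        s = np.sqrt(1.0 - a ** 2)                         # keeps unit variance
        x = rng.standard_normal(n_relax)                  # stationary start
        out = np.empty(n)
        for k in range(n):
            x = a * x + s * rng.standard_normal(n_relax)  # exact OU update
            out[k] = x.sum()
        return out

    y = pink_noise(1 << 14, dt=0.01)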
Implementing a frame representation in CLIPS/COOL
NASA Technical Reports Server (NTRS)
Myers, Leonard; Snyder, James
1991-01-01
An implementation of frames in COOL (the CLIPS Object-Oriented Language) is described and evaluated. The test case is a frame-based semantic network previously implemented in CLIPS (C Language Integrated Production System) Version 4.3 as part of the Intelligent Computer Aided Design System (ICADS) and reported at the first CLIPS conference.
BLESS 2: accurate, memory-efficient and fast error correction method.
Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming
2016-08-01
The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Availability and implementation: freely available at https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online.
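BLESS's memory efficiency comes from holding the set of solid k-mers in a Bloom filter (the data structure the tool is named for). The sketch below shows the generic structure in Python; the bit-array size, hash construction, and k value are illustrative choices, not BLESS 2's actual parameters.

    import hashlib

    class BloomFilter:
        """Minimal Bloom filter for k-mer membership (sizes/hashes illustrative)."""
        def __init__(self, n_bits=1 << 20, n_hashes=4):
            self.n_bits, self.n_hashes = n_bits, n_hashes
            self.bits = bytearray(n_bits // 8)

        def _positions(self, kmer):
            for i in range(self.n_hashes):
                h = hashlib.blake2b(kmer.encode(), salt=i.to_bytes(8, "little"))
                yield int.from_bytes(h.digest()[:8], "little") % self.n_bits

        def add(self, kmer):
            for p in self._positions(kmer):
                self.bits[p >> 3] |= 1 << (p & 7)

        def __contains__(self, kmer):
            return all(self.bits[p >> 3] & (1 << (p & 7))
                       for p in self._positions(kmer))

    bf = BloomFilter()
    read, k = "ACGTACGTGGA", 5
    for i in range(len(read) - k + 1):       # insert every k-mer of one read
        bf.add(read[i:i + k])
    print("ACGTA" in bf, "TTTTT" in bf)      # True, (almost surely) False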
User guide for MODPATH Version 7—A particle-tracking model for MODFLOW
Pollock, David W.
2016-09-26
MODPATH is a particle-tracking post-processing program designed to work with MODFLOW, the U.S. Geological Survey (USGS) finite-difference groundwater flow model. MODPATH version 7 is the fourth major release since its original publication. Previous versions were documented in USGS Open-File Reports 89–381 and 94–464 and in USGS Techniques and Methods 6–A41. MODPATH version 7 works with MODFLOW-2005 and MODFLOW–USG. Support for unstructured grids in MODFLOW–USG is limited to smoothed, rectangular-based quadtree and quadpatch grids. A software distribution package containing the computer program and supporting documentation, such as input instructions, output file descriptions, and example problems, is available from the USGS over the Internet (http://water.usgs.gov/ogw/modpath/).
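MODPATH's tracking rests on Pollock's semi-analytical scheme: within each cell the velocity component along an axis varies linearly, so the time for a particle to reach an exit face has a closed form. A one-axis Python sketch follows (variable names are mine; the full method takes the minimum positive exit time over all three axes).

    import math

    def pollock_axis(x1, x2, v1, v2, xp, eps=1e-12):
        """One axis of Pollock's semi-analytical particle tracking: inside a
        cell [x1, x2], v(x) = v1 + A*(x - x1) with A = (v2 - v1)/(x2 - x1),
        and the exit time through a face is t = ln(v_face/v_particle)/A.
        Returns (exit_time, exit_face_coordinate)."""
        A = (v2 - v1) / (x2 - x1)
        vp = v1 + A * (xp - x1)                      # velocity at the particle
        if abs(A) < eps:                             # effectively uniform velocity
            if vp > 0:
                return (x2 - xp) / vp, x2
            if vp < 0:
                return (x1 - xp) / vp, x1
            return math.inf, None
        xf, vf = (x2, v2) if vp > 0 else (x1, v1)    # face the particle moves toward
        if vp * vf <= 0:                             # stagnation point before the face
            return math.inf, None
        return math.log(vf / vp) / A, xf

    print(pollock_axis(0.0, 10.0, 1.0, 3.0, 2.5))    # accelerating flow in +x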
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadgu, Teklu; Appel, Gordon John
Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and the knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5 and 11.1.6 was installed on the cluster head node, and its distributed processing capability was mapped on the cluster processors. Other supporting software was tested and installed to support the TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest version, 11.1. All the TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling case output generated in FY15 based on GoldSim Version 9.60.300, as documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.
MetAlign 3.0: performance enhancement by efficient use of advances in computer hardware.
Lommen, Arjen; Kools, Harrie J
2012-08-01
A new, multi-threaded version of the GC-MS and LC-MS data processing software, metAlign, has been developed which is able to utilize multiple cores on one PC. This new version was tested using three different multi-core PCs with different operating systems. The performance of noise reduction, baseline correction and peak-picking was 8-19 fold faster compared to the previous version on a single core machine from 2008. The alignment was 5-10 fold faster. Factors influencing the performance enhancement are discussed. Our observations show that performance scales with the increase in processor core numbers we currently see in consumer PC hardware development.
Effect of Coannular Flow on Linearized Euler Equation Predictions of Jet Noise
NASA Technical Reports Server (NTRS)
Hixon, R.; Shih, S.-H.; Mankbadi, Reda R.
1997-01-01
An improved version of a previously validated linearized Euler equation solver is used to compute the noise generated by coannular supersonic jets. Results for a single supersonic jet are compared to the results from both a normal velocity profile and an inverted velocity profile supersonic jet.
GR@PPA 2.8: Initial-state jet matching for weak-boson production processes at hadron collisions
NASA Astrophysics Data System (ADS)
Odaka, Shigeru; Kurihara, Yoshimasa
2012-04-01
The initial-state jet matching method introduced in our previous studies has been applied to the event generation of single W and Z production processes and diboson (WW, WZ and ZZ) production processes at hadron collisions in the framework of the GR@PPA event generator. The generated events reproduce the transverse momentum spectra of weak bosons continuously in the entire kinematical region. The matrix elements (ME) for hard interactions are still at the tree level. As in previous versions, the decays of weak bosons are included in the matrix elements. Therefore, spin correlations and phase-space effects in the decay of weak bosons are exact at the tree level. The program package includes custom-made parton shower programs as well as ME-based hard interaction generators in order to achieve self-consistent jet matching. The generated events can be passed to general-purpose event generators to make the simulation proceed down to the hadron level. Catalogue identifier: ADRH_v3_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADRH_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 112 146 No. of bytes in distributed program, including test data, etc.: 596 667 Distribution format: tar.gz Programming language: Fortran; with some included libraries coded in C and C++ Computer: All Operating system: Any UNIX-like system RAM: 1.6 Mega bytes at minimum Classification: 11.2 Catalogue identifier of previous version: ADRH_v2_0 Journal reference of previous version: Comput. Phys. Comm. 175 (2006) 665 External routines: Bash and Perl for the setup, and CERNLIB, ROOT, LHAPDF, PYTHIA according to the user's choice. Does the new version supersede the previous version?: No, this version supports only a part of the processes included in the previous versions. Nature of problem: We need to combine those processes including 0 jet and 1 jet in the matrix elements using an appropriate matching method, in order to simulate weak-boson production processes in the entire kinematical region. Solution method: The leading logarithmic components to be included in parton distribution functions and parton showers are subtracted from 1-jet matrix elements. Custom-made parton shower programs are provided to ensure satisfactory performance of the matching method. Reasons for new version: An initial-state jet matching method has been implemented. Summary of revisions: Weak-boson production processes associated with 0 jet and 1 jet can be consistently merged using the matching method. Restrictions: The built-in parton showers are not compatible with the PYTHIA new PS and the HERWIG PS. Unusual features: A large number of particles may be produced by the parton showers and passed to general-purpose event generators. Running time: About 10 min for initialization plus 25 s for every 1k-event generation for W production in the LHC condition, on a 3.0-GHz Intel Xeon processor with the default setting.
User's Manual for LEWICE Version 3.2
NASA Technical Reports Server (NTRS)
Wright, William
2008-01-01
A research project is underway at NASA Glenn to produce a computer code which can accurately predict ice growth under a wide range of meteorological conditions for any aircraft surface. This report will present a description of the code inputs and outputs from version 3.2 of this software, which is called LEWICE. This version differs from release 2.0 due to the addition of advanced thermal analysis capabilities for de-icing and anti-icing applications using electrothermal heaters or bleed air applications, the addition of automated Navier-Stokes analysis, an empirical model for supercooled large droplets (SLD) and a pneumatic boot option. An extensive effort was also undertaken to compare the results against the database of electrothermal results which have been generated in the NASA Glenn Icing Research Tunnel (IRT) as was performed for the validation effort for version 2.0. This report will primarily describe the features of the software related to the use of the program. Appendix A has been included to list some of the inner workings of the software or the physical models used. This information is also available in the form of several unpublished documents internal to NASA. This report is intended as a replacement for all previous user manuals of LEWICE. In addition to describing the changes and improvements made for this version, information from previous manuals may be duplicated so that the user will not need to consult previous manuals to use this software.
GRASP92: a package for large-scale relativistic atomic structure calculations
NASA Astrophysics Data System (ADS)
Parpia, F. A.; Froese Fischer, C.; Grant, I. P.
2006-12-01
Program summary: Title of program: GRASP92. Catalogue identifier: ADCU_v1_1. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADCU_v1_1. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: none. Programming language used: Fortran. Computer: IBM POWERstation 320H. Operating system: IBM AIX 3.2.5+. RAM: 64M words. No. of lines in distributed program, including test data, etc.: 65 224. No. of bytes in distributed program, including test data, etc.: 409 198. Distribution format: tar.gz. Catalogue identifier of previous version: ADCU_v1_0. Journal reference of previous version: Comput. Phys. Comm. 94 (1996) 249. Does the new version supersede the previous version?: Yes. Nature of problem: Prediction of atomic spectra—atomic energy levels, oscillator strengths, and radiative decay rates—using a 'fully relativistic' approach. Solution method: Atomic orbitals are assumed to be four-component spinor eigenstates of the angular momentum operator, j=l+s, and the parity operator Π=βπ. Configuration state functions (CSFs) are linear combinations of Slater determinants of atomic orbitals, and are simultaneous eigenfunctions of the atomic electronic angular momentum operator, J, and the atomic parity operator, P. Lists of CSFs are either explicitly prescribed by the user or generated from a set of reference CSFs, a set of subshells, and rules for deriving other CSFs from these. Approximate atomic state functions (ASFs) are linear combinations of CSFs. A variational functional may be constructed by combining expressions for the energies of one or more ASFs. Average level (AL) functionals are weighted sums of energies of all possible ASFs that may be constructed from a set of CSFs; the number of ASFs is then the same as the number, n, of CSFs. Optimal level (OL) functionals are weighted sums of energies of some subset of ASFs; the GRASP92 package is optimized for this latter class of functionals. The composition of an ASF in terms of CSFs sharing the same quantum numbers is determined using the configuration-interaction (CI) procedure that results upon varying the expansion coefficients to determine the extremum of a variational functional. Radial functions may be determined by numerically solving the multiconfiguration Dirac-Fock (MCDF) equations that result upon varying the orbital radial functions or some subset thereof so as to obtain an extremum of the variational functional. Radial wavefunctions may also be determined using a screened hydrogenic or Thomas-Fermi model, although these schemes generally provide initial estimates for MCDF self-consistent-field (SCF) calculations. Transition properties for pairs of ASFs are computed from matrix elements of multipole operators of the electromagnetic field. All matrix elements of CSFs are evaluated using the Racah algebra. Reasons for the new version: During recent studies using the general relativistic atomic structure package (GRASP92), several errors were found, some of which might have been present already in the earlier GRASP version (program ABJN_v1_0, Comput. Phys. Comm. 55 (1989) 425). These errors were reported and discussed by Froese Fischer, Gaigalas, and Ralchenko in a separate publication [C. Froese Fischer, G. Gaigalas, Y. Ralchenko, Comput. Phys. Comm. 175 (2006) 738-744].
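The CI step described above is, in any finite CSF basis, an ordinary symmetric eigenproblem: diagonalizing the Hamiltonian matrix yields the ASF energies and CSF mixing coefficients. A toy numerical illustration (the matrix entries are invented, not GRASP92 output):

    import numpy as np

    # Toy configuration-interaction step: H is the Hamiltonian in a basis of
    # four CSFs sharing the same J and parity (entries are illustrative only).
    H = np.array([[-2.10,  0.05,  0.00,  0.01],
                  [ 0.05, -1.95,  0.04,  0.00],
                  [ 0.00,  0.04, -1.40,  0.03],
                  [ 0.01,  0.00,  0.03, -1.10]])

    E, C = np.linalg.eigh(H)   # variational extrema over the CSF expansion
    # Column C[:, k] holds the CSF mixing coefficients of the k-th ASF.
    print("ASF energies:", E)
    print("ground-state composition |c_i|^2:", C[:, 0] ** 2)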
FY17 Status Report on the Computing Systems for the Yucca Mountain Project TSPA-LA Models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John; Hadgu, Teklu
Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and the knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014), Hadgu et al. (2015) and Hadgu and Appel (2016). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5, 11.1 and 12.0 was installed on the cluster head node, and its distributed processing capability was mapped on the cluster processors. Other supporting software was tested and installed to support the TSPA-type analysis on the server cluster. The current tasks included a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest version, 12.0, and addressing DLL-related issues observed in the FY16 work. The model upgrade task successfully converted the Nominal Modeling case to GoldSim Versions 11.1/12. Conversions of the rest of the TSPA models were also attempted, but program and operational difficulties precluded this. Upgrade of the remaining modeling cases and distributed processing tasks is expected to continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.
NASA Technical Reports Server (NTRS)
Chan, Gordon C.; Turner, Horace Q.
1990-01-01
COSMIC/NASTRAN, as it is supported and maintained by COSMIC, runs on four main-frame computers - CDC, VAX, IBM and UNIVAC. COSMIC/NASTRAN on other computers, such as CRAY, AMDAHL, PRIME, CONVEX, etc., is available commercially from a number of third party organizations. All these computers, with their own one-of-a-kind operating systems, make NASTRAN machine dependent. The job control language (JCL), the file management, and the program execution procedure of these computers are vastly different, although 95 percent of NASTRAN source code was written in standard ANSI FORTRAN 77. The advantage of the UNIX operating system is that it has no machine boundary. UNIX is becoming widely used in many workstations, mini's, super-PC's, and even some main-frame computers. NASTRAN for the UNIX operating system is definitely the way to go in the future, and makes NASTRAN available to a host of computers, big and small. Since 1985, many NASTRAN improvements and enhancements were made to conform to the ANSI FORTRAN 77 standards. A major UNIX migration effort was incorporated into COSMIC NASTRAN 1990 release. As a pioneer work for the UNIX environment, a version of COSMIC 89 NASTRAN was officially released in October 1989 for DEC ULTRIX VAXstation 3100 (with VMS extensions). A COSMIC 90 NASTRAN version for DEC ULTRIX DECstation 3100 (with RISC) is planned for April 1990 release. Both workstations are UNIX based computers. The COSMIC 90 NASTRAN will be made available on a TK50 tape for the DEC ULTRIX workstations. Previously in 1988, an 88 NASTRAN version was tested successfully on a SiliconGraphics workstation.
Statistical Analysis of CFD Solutions from the Drag Prediction Workshop
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.
2002-01-01
A simple, graphical framework is presented for robust statistical evaluation of results obtained from N-Version testing of a series of RANS CFD codes. The solutions were obtained by a variety of code developers and users for the June 2001 Drag Prediction Workshop sponsored by the AIAA Applied Aerodynamics Technical Committee. The aerodynamic configuration used for the computational tests is the DLR-F4 wing-body combination, previously tested in several European wind tunnels and for which a previous N-Version test had been conducted. The statistical framework is used to evaluate code results for (1) a single cruise design point, (2) drag polars and (3) drag rise. The paper concludes with a discussion of the meaning of the results, especially with respect to predictability, validation, and reporting of solutions.
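One plausible realization of such a robust framework, sketched here with invented drag values: take the median across the N code results as the reference and flag solutions outside a scaled-MAD band. The band width k and the MAD scaling are choices of this example, not necessarily the paper's.

    import numpy as np

    def n_version_limits(values, k=3.0):
        """Robust scatter limits for N-version test results: the median is the
        reference value and +/- k * (scaled MAD) is the acceptance band."""
        v = np.asarray(values, float)
        med = np.median(v)
        mad = 1.4826 * np.median(np.abs(v - med))   # ~sigma for normal data
        return med, med - k * mad, med + k * mad

    cd = [0.0285, 0.0291, 0.0288, 0.0332, 0.0287, 0.0290]  # made-up drag counts
    med, lo, hi = n_version_limits(cd)
    outliers = [c for c in cd if not lo <= c <= hi]
    print(med, (lo, hi), outliers)                  # flags the 0.0332 solution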
Efficient conjugate gradient algorithms for computation of the manipulator forward dynamics
NASA Technical Reports Server (NTRS)
Fijany, Amir; Scheid, Robert E.
1989-01-01
The applicability of conjugate gradient algorithms for computation of the manipulator forward dynamics is investigated. The redundancies in the previously proposed conjugate gradient algorithm are analyzed. A new version is developed which, by avoiding these redundancies, achieves a significantly greater efficiency. A preconditioned conjugate gradient algorithm is also presented. A diagonal matrix whose elements are the diagonal elements of the inertia matrix is proposed as the preconditioner. In order to increase the computational efficiency, an algorithm is developed which exploits the synergism between the computation of the diagonal elements of the inertia matrix and that required by the conjugate gradient algorithm.
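The preconditioning idea reads directly as code: standard conjugate gradient applied to the linear system M(q) qdd = b, with the inverse of diag(M) applied to each residual. In this sketch a random symmetric positive definite matrix stands in for the manipulator inertia matrix; it is a generic PCG, not the paper's optimized algorithm.

    import numpy as np

    def pcg(M, b, tol=1e-10, max_iter=200):
        """Conjugate gradient with a Jacobi (diagonal) preconditioner,
        solving M x = b for symmetric positive definite M."""
        d_inv = 1.0 / np.diag(M)               # the diagonal preconditioner
        x = np.zeros_like(b)
        r = b - M @ x
        z = d_inv * r
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Mp = M @ p
            alpha = rz / (p @ Mp)
            x += alpha * p
            r -= alpha * Mp
            if np.linalg.norm(r) < tol:
                break
            z = d_inv * r
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((7, 7))
    M = A @ A.T + 7 * np.eye(7)                # stand-in for an inertia matrix
    b = rng.standard_normal(7)                 # stand-in for joint torques
    print(np.allclose(M @ pcg(M, b), b))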
Local electron tomography using angular variations of surface tangents: Stomo version 2
NASA Astrophysics Data System (ADS)
Petersen, T. C.; Ringer, S. P.
2012-03-01
In a recent publication, we investigated the prospect of measuring the outer three-dimensional (3D) shapes of nano-scale atom probe specimens from tilt-series of images collected in the transmission electron microscope. For this purpose alone, an algorithm and simplified reconstruction theory were developed to circumvent issues that arise in commercial "back-projection" computations in this context. In our approach, we give up the difficult task of computing the complete 3D continuum structure and instead seek only the 3D morphology of internal and external scattering interfaces. These interfaces can be described as embedded 2D surfaces projected onto each image in a tilt series. Curves and other features in the images are interpreted as inscribed sets of tangent lines, which intersect the scattering interfaces at unknown locations along the direction of the incident electron beam. Smooth angular variations of the tangent line abscissa are used to compute the surface tangent intersections and hence the 3D morphology as a "point cloud". We have published the explicit details of our alternative algorithm along with the source code entitled "stomo_version_1". For this work, we have further modified the code to efficiently handle rectangular image sets, perform much faster tangent-line "edge detection" and smoother tilt-axis image alignment using simple bi-linear interpolation. We have also adapted the algorithm to detect tangent lines as "ridges", based upon 2nd order partial derivatives of the image intensity, the magnitude and orientation of which are described by a Hessian matrix. Ridges are more appropriate descriptors for tangent-line curves in phase contrast images outlined by Fresnel fringes or absorption contrast data from fine-scale objects. Improved accuracy, efficiency and speed of "stomo_version_2" are demonstrated in this paper using both high resolution electron tomography data of a nano-sized atom probe tip and simulated absorption-contrast images. Program summary: Program title: STOMO version 2. Catalogue identifier: AEFS_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFS_v2_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 2854. No. of bytes in distributed program, including test data, etc.: 23 559. Distribution format: tar.gz. Programming language: C/C++. Computer: PC. Operating system: Windows XP. RAM: Scales as the product of experimental image dimensions multiplied by the number of points chosen by the user in polynomial fitting. Typical runs require between 50 Mb and 100 Mb of RAM. Supplementary material: Sample output files, for the test run provided, are available. Classification: 7.4, 14. Catalogue identifier of previous version: AEFS_v1_0. Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 676. Does the new version supersede the previous version?: Yes. Nature of problem: A local electron tomography algorithm for specimens for which conventional back projection may fail, and/or for data with a limited angular range (which would otherwise cause significant 'missing-wedge' artefacts). The algorithm does not solve the tomography back projection problem but rather locally reconstructs the 3D morphology of surfaces defined by varied scattering densities.
Solution method: Local reconstruction is effected using image-analysis edge and ridge detection computations on experimental tilt series to measure smooth angular variations of surface tangent-line intersections, which generate point clouds decorating the embedded and/or external scattering surfaces of a specimen. Reasons for new version: The new version was coded to cater for rectangular images in experimental tilt-series, ensure smoother image rotations, provide ridge detection (suitable for sensing phase-contrast Fresnel fringes and other fine-scale structures), provide faster/larger-kernel edge detection, and also greatly reduce RAM usage. Specimen surface normals are also explicitly computed from tangent-line and edge intersections, providing new information for potential use in point cloud rendering. Hysteresis thresholding implemented in the version 1 edge-detection algorithm provided only sparse edge-linking; version 2 now implements edge tracking using recursion to fully link the edges during hysteresis thresholding. Furthermore, in version 1 the minimum number of fitted polynomial points (specified in the input file) was not correctly imposed, which has been fixed for version 2. Most of these changes increase the accuracy of 3D morphology surface-tomography reconstructions by facilitating the use of more/finer tilt angles and experimental images of increased spatial resolution. The ridge detection was incorporated to specifically improve the reconstruction of internal specimen morphology. Summary of revisions: Included a Hessian() function to compute 2nd order spatial derivatives of image intensities (it operates in the same fashion as the previous and existing Sobel() function). Changed the convolve_Gaussian() function to alternatively use successive 1D convolutions (rather than the cumbersome 2D summations implemented in version 1), resulting in a large increase in computational speed without any loss in accuracy. The convolution kernel size was hence widened to three times the full width at half maximum of the Gaussian filter to improve scale-space selection accuracy. A ridge detection option was included to compute edge maps sensitive to ridges, rather than edges, using elements from a Hessian matrix, the eigenvalues of which were used to define ridge direction for Canny-type hysteresis thresholding. Function edge_detect_Canny() was also altered to pass the gradient-direction maps (from either Hessian- or Sobel-based operators) in and out of scope for computation of surface normals, thereby enabling the output of both point-cloud and corresponding unstructured vector-field surface descriptors. Function rotate_imgs() was changed to incorporate basic bi-linear interpolation for improved tilt-axis alignment of the entire tilt series in exp_data.dat; smoother and more accurate edge maps are thereby produced. Algorithm convert_point_cloud_to_tomogram() was created to output the tomogram 3d_imgs.dat in a more memory-efficient manner. The function shell_sort(), adapted from Numerical Recipes in C, was also coded for this purpose. The new function compute_xyz() was coded to calculate point-clouds and tomogram surface normals using information from single tilt images, as opposed to the entire stack; this function is hence used iteratively throughout the reconstruction as each tilt image is analysed in succession. The new function reconstruct_local() is the heart of stomo_version_2.cpp.
The main() source code of stomo_version_1.cpp has been rewritten here to process experimental images and edge maps one at a time, using a buffered 3D array of dimensions dictated solely by the number of tilt images required for the local SVD fit of the angular variations. These changes (along with similar iterative file writing) have been made to vastly reduce memory usage and hence allow higher spatial and angular resolution data sets to be analysed without recourse to high performance computing resources. The input file has been simplified by removing the 'slices' and 'channels' settings (used in version 1 for crude image binning), which are now equal to the respective numbers of image rows and columns. Every summation over image rows and columns has been checked to enable the analysis of rectangular images without error. For images of specimens with high aspect ratios, such as narrow tips, these fixes allow significant reductions in computation time and memory usage. Some arrays in the source code were not appropriately zeroed in version 1, causing reconstruction artefacts in some cases; these problems have now been fixed. Fixed an if-statement to correctly impose the minimum number of fitted polynomial points, thereby reducing noise in the reconstructed data. Implemented proper edge linking in the hysteresis thresholding code for Canny edge detection. Restrictions: The input experimental tilt-series of images must be registered with respect to a common single tilt axis with known orientation and position. Running time: For high quality reconstruction, 2-5 min.
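The speed-up cited for convolve_Gaussian() exploits the separability of the Gaussian kernel: one 2D convolution is replaced by a row pass and a column pass with the same 1D kernel. A numpy rendering of that idea (function names are mine; the kernel support of ±3σ is an illustrative choice, not the paper's exact width):

    import numpy as np

    def gaussian_kernel_1d(sigma):
        half = max(1, int(3.0 * sigma))          # +/-3 sigma support (illustrative)
        x = np.arange(-half, half + 1)
        k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
        return k / k.sum()

    def smooth_separable(img, sigma):
        """Gaussian smoothing by two 1D convolutions (rows, then columns);
        equivalent to one 2D convolution because the Gaussian is separable."""
        k = gaussian_kernel_1d(sigma)
        rows = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
        return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")

    img = np.random.default_rng(2).random((64, 48))   # rectangular image, as in v2
    out = smooth_separable(img, sigma=1.5)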
The Ruggedized STD Bus Microcomputer - A low cost computer suitable for Space Shuttle experiments
NASA Technical Reports Server (NTRS)
Budney, T. J.; Stone, R. W.
1982-01-01
Previous space flight computers have been costly in terms of both hardware and software. The Ruggedized STD Bus Microcomputer is based on the commercial Mostek/Pro-Log STD Bus. Ruggedized PC cards can be based on commercial cards from more than 60 manufacturers, reducing hardware cost and design time. Software costs are minimized by using standard 8-bit microprocessors and by debugging code using commercial versions of the ruggedized flight boards while the flight hardware is being fabricated.
NASA Astrophysics Data System (ADS)
Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei; Duffy, Chris; Arheimer, Berit
2017-03-01
In this article, we reply to a comment made on our previous commentary regarding reproducibility in computational hydrology. Software licensing and version control of code are important technical aspects of making the code and workflows of scientific experiments open and reproducible. However, in our view, it is the cultural change that is the greatest challenge to overcome in achieving reproducible scientific research in computational hydrology. We believe that, once the culture and attitude among hydrological scientists change, the details will evolve to cover more (technical) aspects over time.
NASA Astrophysics Data System (ADS)
Perez, R. Navarro; Schunck, N.; Lasseri, R.-D.; Zhang, C.; Sarich, J.
2017-11-01
We describe the new version 3.00 of the code HFBTHO that solves the nuclear Hartree-Fock (HF) or Hartree-Fock-Bogolyubov (HFB) problem by using the cylindrical transformed deformed harmonic oscillator basis. In the new version, we have implemented the following features: (i) the full Gogny force in both particle-hole and particle-particle channels, (ii) the calculation of the nuclear collective inertia at the perturbative cranking approximation, (iii) the calculation of fission fragment charge, mass and deformations based on the determination of the neck, (iv) the regularization of zero-range pairing forces, (v) the calculation of localization functions, (vi) an MPI interface for large-scale mass table calculations. Program Files doi: http://dx.doi.org/10.17632/c5g2f92by3.1. Licensing provisions: GPL v3. Programming language: FORTRAN-95. Journal reference of previous version: M.V. Stoitsov, N. Schunck, M. Kortelainen, N. Michel, H. Nam, E. Olsen, J. Sarich, and S. Wild, Comput. Phys. Commun. 184 (2013). Does the new version supersede the previous one?: Yes. Summary of revisions: 1. the Gogny force in both particle-hole and particle-particle channels was implemented; 2. the nuclear collective inertia at the perturbative cranking approximation was implemented; 3. fission fragment charge, mass and deformations were implemented based on the determination of the position of the neck between nascent fragments; 4. the regularization method of zero-range pairing forces was implemented; 5. the localization functions of the HFB solution were implemented; 6. an MPI interface for large-scale mass table calculations was implemented. Nature of problem: HFBTHO is a physics computer code that is used to model the structure of the nucleus. It is an implementation of the energy density functional (EDF) approach to atomic nuclei, where the energy of the nucleus is obtained by integration over space of some phenomenological energy density, which is itself a functional of the neutron and proton intrinsic densities. In the present version of HFBTHO, the energy density derives either from the zero-range Skyrme or the finite-range Gogny effective two-body interaction between nucleons. Nuclear superfluidity is treated at the Hartree-Fock-Bogolyubov (HFB) approximation. Constraints on the nuclear shape allow probing the potential energy surface of the nucleus, as needed, e.g., for the description of shape isomers or fission. The implementation of a local scale transformation of the single-particle basis in which the HFB solutions are expanded provides a tool to properly compute the structure of weakly-bound nuclei. Solution method: The program uses the axial Transformed Harmonic Oscillator (THO) single-particle basis to expand quasiparticle wave functions. It iteratively diagonalizes the Hartree-Fock-Bogolyubov Hamiltonian based on generalized Skyrme-like energy densities and zero-range pairing interactions or the finite-range Gogny force until a self-consistent solution is found. A previous version of the program was presented in M.V. Stoitsov, N. Schunck, M. Kortelainen, N. Michel, H. Nam, E. Olsen, J. Sarich, and S. Wild, Comput. Phys. Commun. 184 (2013) 1592-1604, with much of the formalism presented in the original paper M.V. Stoitsov, J. Dobaczewski, W. Nazarewicz, P. Ring, Comput. Phys. Commun. 167 (2005) 43-63.
Additional comments: The user must have access to (i) the LAPACK subroutines DSYEVR, DSYEVD, DSYTRF and DSYTRI, and their dependencies, which compute eigenvalues and eigenfunctions of real symmetric matrices, (ii) the LAPACK subroutines DGETRI and DGETRF, which invert arbitrary real matrices, and (iii) the BLAS routines DCOPY, DSCAL, DGEMM and DGEMV for double-precision linear algebra (or provide another set of subroutines that can perform such tasks). The BLAS and LAPACK subroutines can be obtained from the Netlib Repository at the University of Tennessee, Knoxville: http://netlib2.cs.utk.edu/.
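Stripped of the nuclear-physics content, the iterative diagonalization in the solution method has this generic shape: build a density-dependent Hamiltonian, diagonalize, rebuild the density from the occupied orbitals, mix, and repeat until self-consistency. The toy model below uses an invented density dependence and simple linear mixing (HFBTHO itself uses Broyden-type mixing and, of course, the full HFB equations).

    import numpy as np

    def scf(h0, g, n_part, mix=0.5, tol=1e-10, max_iter=500):
        """Generic self-consistent field loop: h[rho] -> diagonalize -> new rho,
        with linear mixing for stability."""
        n = h0.shape[0]
        rho = np.eye(n) * (n_part / n)              # initial density matrix
        for it in range(max_iter):
            h = h0 + g * rho                        # toy density-dependent field
            e, c = np.linalg.eigh(h)
            occ = c[:, :n_part]                     # fill the lowest orbitals
            rho_new = occ @ occ.T
            if np.abs(rho_new - rho).max() < tol:
                return e, rho_new, it
            rho = mix * rho_new + (1.0 - mix) * rho
        raise RuntimeError("no self-consistency reached")

    h0 = np.array([[ 0.0, -1.0,  0.2],
                   [-1.0,  1.0, -0.5],
                   [ 0.2, -0.5,  2.0]])
    e, rho, its = scf(h0, g=0.7, n_part=2)
    print(e, its)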
Computer program for the computation of total sediment discharge by the modified Einstein procedure
Stevens, H.H.
1985-01-01
Two versions of a computer program to compute total sediment discharge by the modified Einstein procedure are presented. The FORTRAN 77 language version is for use on the PRIME computer, and the BASIC language version is for use on most microcomputers. The program contains built-in limitations and input-output options that closely follow the original modified Einstein procedure. Program documentation and listings of both versions of the program are included. (USGS)
Fleischner Society: glossary of terms for thoracic imaging.
Hansell, David M; Bankier, Alexander A; MacMahon, Heber; McLoud, Theresa C; Müller, Nestor L; Remy, Jacques
2008-03-01
Members of the Fleischner Society compiled a glossary of terms for thoracic imaging that replaces previous glossaries published in 1984 and 1996 for thoracic radiography and computed tomography (CT), respectively. The need to update the previous versions came from the recognition that new words have emerged, others have become obsolete, and the meaning of some terms has changed. Brief descriptions of some diseases are included, and pictorial examples (chest radiographs and CT scans) are provided for the majority of terms.
Examples of Data Analysis with SPSS/PC+ Studentware.
ERIC Educational Resources Information Center
MacFarland, Thomas W.
Intended for classroom use only, these unpublished notes contain computer lessons on descriptive statistics with files previously created in WordPerfect 4.2 and Lotus 1-2-3 Version 1.A for the IBM PC+. The statistical measures covered include Student's t-test with two independent samples; Student's t-test with a paired sample; Chi-square analysis;…
NASA Astrophysics Data System (ADS)
Stoitsov, M. V.; Schunck, N.; Kortelainen, M.; Michel, N.; Nam, H.; Olsen, E.; Sarich, J.; Wild, S.
2013-06-01
We describe the new version 2.00d of the code HFBTHO that solves the nuclear Skyrme-Hartree-Fock (HF) or Skyrme-Hartree-Fock-Bogoliubov (HFB) problem by using the cylindrical transformed deformed harmonic oscillator basis. In the new version, we have implemented the following features: (i) the modified Broyden method for non-linear problems, (ii) optional breaking of reflection symmetry, (iii) calculation of axial multipole moments, (iv) finite temperature formalism for the HFB method, (v) linear constraint method based on the approximation of the Random Phase Approximation (RPA) matrix for multi-constraint calculations, (vi) blocking of quasi-particles in the Equal Filling Approximation (EFA), (vii) framework for generalized energy density with arbitrary density-dependences, and (viii) shared memory parallelism via OpenMP pragmas. Program summary: Program title: HFBTHO v2.00d. Catalog identifier: ADUI_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUI_v2_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License version 3. No. of lines in distributed program, including test data, etc.: 167228. No. of bytes in distributed program, including test data, etc.: 2672156. Distribution format: tar.gz. Programming language: FORTRAN-95. Computer: Intel Pentium-III, Intel Xeon, AMD-Athlon, AMD-Opteron, Cray XT5, Cray XE6. Operating system: UNIX, LINUX, WindowsXP. RAM: 200 Mwords. Word size: 8 bits. Classification: 17.22. Does the new version supersede the previous version?: Yes. Catalog identifier of previous version: ADUI_v1_0. Journal reference of previous version: Comput. Phys. Comm. 167 (2005) 43. Nature of problem: The solution of self-consistent mean-field equations for weakly-bound paired nuclei requires a correct description of the asymptotic properties of nuclear quasi-particle wave functions. In the present implementation, this is achieved by using the single-particle wave functions of the transformed harmonic oscillator, which allows for an accurate description of deformation effects and pairing correlations in nuclei arbitrarily close to the particle drip lines. Solution method: The program uses the axial Transformed Harmonic Oscillator (THO) single-particle basis to expand quasi-particle wave functions. It iteratively diagonalizes the Hartree-Fock-Bogoliubov Hamiltonian based on generalized Skyrme-like energy densities and zero-range pairing interactions until a self-consistent solution is found. A previous version of the program was presented in M.V. Stoitsov, J. Dobaczewski, W. Nazarewicz, P. Ring, Comput. Phys. Commun. 167 (2005) 43-63. Reasons for new version: Version 2.00d of HFBTHO provides a number of new options such as the optional breaking of reflection symmetry, the calculation of axial multipole moments, the finite temperature formalism for the HFB method, optimized multi-constraint calculations, the treatment of odd-even and odd-odd nuclei in the blocking approximation, and the framework for generalized energy density with arbitrary density-dependences. It is also the first version of HFBTHO to contain threading capabilities.
Summary of revisions: The modified Broyden method has been implemented; optional breaking of reflection symmetry has been implemented; the calculation of all axial multipole moments up to λ=8 has been implemented; the finite temperature formalism for the HFB method has been implemented; the linear constraint method based on the approximation of the Random Phase Approximation (RPA) matrix for multi-constraint calculations has been implemented; the blocking of quasi-particles in the Equal Filling Approximation (EFA) has been implemented; the framework for generalized energy density functionals with arbitrary density-dependence has been implemented; and shared memory parallelism via OpenMP pragmas has been implemented. Restrictions: Axial- and time-reversal symmetries are assumed. Unusual features: The user must have access to the LAPACK subroutines DSYEVD, DSYTRF and DSYTRI, and their dependencies, which compute eigenvalues and eigenfunctions of real symmetric matrices, the LAPACK subroutines DGETRI and DGETRF, which invert arbitrary real matrices, and the BLAS routines DCOPY, DSCAL, DGEMM and DGEMV for double-precision linear algebra (or provide another set of subroutines that can perform such tasks). The BLAS and LAPACK subroutines can be obtained from the Netlib Repository at the University of Tennessee, Knoxville: http://netlib2.cs.utk.edu/. Running time: Highly variable, as it depends on the nucleus, size of the basis, requested accuracy, requested configuration, compiler and libraries, and hardware architecture. An order of magnitude would be a few seconds for ground-state configurations in small bases N≈8-12, to a few minutes in a very deformed configuration of a heavy nucleus with a large basis N>20.
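Feature (iv), the finite-temperature formalism, replaces sharp quasi-particle occupations with Fermi factors; the standard occupation and entropy expressions (with kB = 1, so T is in energy units) are easy to state numerically. This is a generic illustration, not code from HFBTHO.

    import numpy as np

    def ft_occupations(E_qp, T):
        """Fermi factors f = 1/(1 + exp(E/T)) for quasi-particle energies E_qp
        and the associated entropy S = -sum[f ln f + (1 - f) ln(1 - f)]."""
        f = 1.0 / (1.0 + np.exp(np.asarray(E_qp, float) / T))
        S = -np.sum(f * np.log(f) + (1.0 - f) * np.log(1.0 - f))
        return f, S

    # Quasi-particle energies in MeV and temperature T = 0.8 MeV (made-up values)
    f, S = ft_occupations([0.5, 1.2, 2.0, 3.5], T=0.8)
    print(f, S)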
APINetworks Java. A Java approach to the efficient treatment of large-scale complex networks
NASA Astrophysics Data System (ADS)
Muñoz-Caro, Camelia; Niño, Alfonso; Reyes, Sebastián; Castillo, Miriam
2016-10-01
We present a new version of the core structural package of our Application Programming Interface, APINetworks, for the treatment of complex networks in arbitrary computational environments. The new version is written in Java and presents several advantages over the previous C++ version: the portability of Java code, the ease of implementing object-oriented designs, and the simplicity of memory management. In addition, new data structures are introduced for storing the sets of nodes and edges. Also, by resorting to the different garbage collectors currently available in the JVM, the Java version is much more efficient than the C++ one with respect to memory management. In particular, the G1 collector is the most efficient one because of the parallel execution of G1 and the Java application. Using G1, APINetworks Java outperforms the C++ version and the well-known NetworkX and JGraphT packages in the building and BFS traversal of linear and complete networks. The better memory management of the present version allows for the modeling of much larger networks.
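For reference, the BFS traversal used in the benchmark is the textbook queue-based algorithm; a minimal adjacency-list version follows (in Python here for illustration, though APINetworks itself is Java/C++).

    from collections import deque

    def bfs_order(adj, source):
        """Breadth-first traversal of a graph given as {node: [neighbors]}."""
        seen = {source}
        order = []
        queue = deque([source])
        while queue:
            u = queue.popleft()
            order.append(u)
            for v in adj.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return order

    # A small "linear network" like those used in the benchmark
    linear = {i: [i + 1] for i in range(9)}
    print(bfs_order(linear, 0))      # [0, 1, ..., 9]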
NASCAP simulation of PIX 2 experiments
NASA Technical Reports Server (NTRS)
Roche, J. C.; Mandell, M. J.
1985-01-01
The latest version of the NASCAP/LEO digital computer code used to simulate the PIX 2 experiment is discussed. NASCAP is a finite-element code and previous versions were restricted to a single fixed mesh size. As a consequence the resolution was dictated by the largest physical dimension to be modeled. The latest version of NASCAP/LEO can subdivide selected regions. This permitted the modeling of the overall Delta launch vehicle in the primary computational grid at a coarse resolution, with subdivided regions at finer resolution being used to pick up the details of the experiment module configuration. Langmuir probe data from the flight were used to estimate the space plasma density and temperature and the Delta ground potential relative to the space plasma. This information is needed for input to NASCAP. Because of the uncertainty or variability in the values of these parameters, it was necessary to explore a range around the nominal value in order to determine the variation in current collection. The flight data from PIX 2 were also compared with the results of the NASCAP simulation.
Paul, Jijo; Jacobi, Volkmar; Farhang, Mohammad; Bazrafshan, Babak; Vogl, Thomas J; Mbalisike, Emmanuel C
2013-06-01
Radiation dose and image quality were estimated for three X-ray volume imaging (XVI) systems. A total of 126 patients were examined using the three XVI systems (groups 1-3), and their data were retrospectively analysed from 2007 to 2012. Each group consisted of 42 patients, and each patient was examined using cone-beam computed tomography (CBCT), digital subtraction angiography (DSA) and digital fluoroscopy (DF). Dose parameters such as dose-area product (DAP) and skin entry dose (SED), and image quality parameters such as Hounsfield unit (HU), noise, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), were estimated and compared using appropriate statistical tests. Mean DAP and SED were lower for the recent XVI than for its previous counterparts in CBCT, DSA and DF. HU differences at all measured locations were non-significant between the groups except at the hepatic artery. Noise showed a significant difference among groups (P < 0.05). Regarding CNR and SNR, the recent XVI showed significantly higher values than its previous versions. Qualitatively, CBCT showed significant differences between versions, whereas DSA and DF did not. A reduction of radiation dose was obtained for the recent-generation XVI system in CBCT, DSA and DF. Image noise was significantly lower, and SNR and CNR were higher, than in previous versions. The technological advancements and the reduction in the number of frames led to a significant dose reduction and improved image quality with the recent-generation XVI system. • X-ray volume imaging (XVI) systems are increasingly used for interventional radiological procedures. • More modern XVI systems use lower radiation doses compared with earlier counterparts. • Furthermore, more modern XVI systems provide higher image quality. • Technological advances reduce radiation dose and improve image quality.
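Using one common set of definitions (the study may differ in detail): SNR is the ROI mean over its standard deviation, and CNR is the HU difference between an ROI and background over the background noise. The ROI values below are simulated for illustration.

    import numpy as np

    def snr(roi):
        """Signal-to-noise ratio of a region of interest (mean / std)."""
        roi = np.asarray(roi, float)
        return roi.mean() / roi.std(ddof=1)

    def cnr(roi, background):
        """Contrast-to-noise ratio: HU difference over background noise."""
        roi, bg = np.asarray(roi, float), np.asarray(background, float)
        return abs(roi.mean() - bg.mean()) / bg.std(ddof=1)

    rng = np.random.default_rng(3)
    vessel = 180 + 12 * rng.standard_normal(500)   # contrast-filled ROI, in HU
    liver  =  60 + 10 * rng.standard_normal(500)   # background ROI, in HU
    print(snr(vessel), cnr(vessel, liver))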
NASA Technical Reports Server (NTRS)
Neiner, G. H.; Cole, G. L.; Arpasi, D. J.
1972-01-01
Digital computer control of a mixed-compression inlet is discussed. The inlet was terminated with a choked orifice at the compressor face station to dynamically simulate a turbojet engine. Inlet diffuser exit airflow disturbances were used. A digital version of a previously tested analog control system was used for both normal shock and restart control. Digital computer algorithms were derived using z-transform and finite difference methods. Using a sample rate of 1000 samples per second, the digital normal shock and restart controls essentially duplicated the inlet analog computer control results. At a sample rate of 100 samples per second, the control system performed adequately but was less stable.
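The z-transform/finite-difference derivation mentioned above amounts to replacing continuous-time operators by difference quotients at the sampling rate. As a generic illustration (the gains are placeholders, not the inlet study's values), a backward-difference discretization of an analog PI control law at 1000 samples per second gives the recurrence coded below.

    def make_pi_controller(kp, ki, fs):
        """Backward-difference discretization of u(t) = kp*e + ki*integral(e):
        u[k] = u[k-1] + kp*(e[k] - e[k-1]) + (ki/fs)*e[k]."""
        T = 1.0 / fs
        state = {"u": 0.0, "e": 0.0}
        def step(e):
            u = state["u"] + kp * (e - state["e"]) + ki * T * e
            state["u"], state["e"] = u, e
            return u
        return step

    ctrl = make_pi_controller(kp=0.8, ki=25.0, fs=1000.0)   # 1000 samples/s
    # Drive with a constant shock-position error to see the integral action build
    print([ctrl(1.0) for _ in range(5)])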
Computer programs for computing particle-size statistics of fluvial sediments
Stevens, H.H.; Hubbell, D.W.
1986-01-01
Two versions of computer programs for inputting data and computing particle-size statistics of fluvial sediments are presented. The FORTRAN 77 language versions are for use on the Prime computer, and the BASIC language versions are for use on microcomputers. The size-statistics program computes Inman, Trask, and Folk statistical parameters from phi values and sizes determined for 10 specified percent-finer values from the input size and percent-finer data. The program also determines the percentage of gravel, sand, silt, and clay, and the Meyer-Peter effective diameter. Documentation and listings for both versions of the programs are included. (Author's abstract)
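The Inman and Folk parameters named above are simple combinations of percentile phi values (φ = -log2 of the grain diameter in mm), and the Trask sorting coefficient uses the quartile sizes in mm. The sketch below uses the standard textbook definitions, which should be checked against the program's documentation; the sieve curve is made up.

    import numpy as np

    def size_statistics(sizes_mm, pct_finer):
        """Inman, Folk (graphic), and Trask parameters from a grain-size
        distribution given as sizes [mm] vs. percent finer by weight.
        Percentiles are interpolated in phi space."""
        d = np.asarray(sizes_mm, float)
        pf = np.asarray(pct_finer, float)
        order = np.argsort(pf)
        phi_of = lambda q: np.interp(q, pf[order], (-np.log2(d))[order])
        p5, p16, p25, p50, p75, p84, p95 = map(
            phi_of, (5, 16, 25, 50, 75, 84, 95))
        return {
            "inman_median_phi":  p50,
            "inman_mean_phi":    (p16 + p84) / 2.0,
            "inman_sorting_phi": (p84 - p16) / 2.0,
            "folk_mean_phi":     (p16 + p50 + p84) / 3.0,
            "folk_sorting_phi":  (p84 - p16) / 4.0 + (p95 - p5) / 6.6,
            "trask_sorting":     2.0 ** ((p25 - p75) / 2.0),  # sqrt(d75/d25) in mm
        }

    stats = size_statistics([0.062, 0.125, 0.25, 0.5, 1.0, 2.0],
                            [5, 20, 45, 75, 92, 99])
    print(stats)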
NASA Technical Reports Server (NTRS)
Pearson, Don; Hamm, Dustin; Kubena, Brian; Weaver, Jonathan K.
2010-01-01
An updated version of the Platform Independent Software Components for the Exploration of Space (PISCES) software library is available. A previous version was reported in Library for Developing Spacecraft-Mission-Planning Software (MSC-22983), NASA Tech Briefs, Vol. 25, No. 7 (July 2001), page 52. To recapitulate: This software provides for Web-based, collaborative development of computer programs for planning trajectories and trajectory-related aspects of spacecraft-mission design. The library was built using state-of-the-art object-oriented concepts and software-development methodologies. The components of PISCES include Java-language application programs arranged in a hierarchy of classes that facilitates the reuse of the components. As its full name suggests, the PISCES library affords platform independence: the Java language makes it possible to use the classes and application programs with a Java virtual machine, which is available in most Web-browser programs. Another advantage is expandability: object orientation facilitates expansion of the library through creation of a new class. Improvements in the library since the previous version include development of orbital-maneuver-planning and rendezvous-launch-window application programs, enhancement of capabilities for propagation of orbits, and development of a desktop user interface.
A new version of Scilab software package for the study of dynamical systems
NASA Astrophysics Data System (ADS)
Bordeianu, C. C.; Felea, D.; Beşliu, C.; Jipa, Al.; Grossu, I. V.
2009-11-01
This work presents a new version of a software package for the study of chaotic flows, maps and fractals [1]. The codes were written using Scilab, a software package for numerical computations providing a powerful open computing environment for engineering and scientific applications. It was found that Scilab provides various functions for ordinary differential equation solving, Fast Fourier Transform, autocorrelation, and excellent 2D and 3D graphical capabilities. The chaotic behaviors of nonlinear dynamical systems were analyzed using phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents and Kolmogorov-Sinai entropy. Various well-known examples are implemented, with the capability for users to insert their own ODE or iterative equations. New version program summary: Program title: Chaos v2.0. Catalogue identifier: AEAP_v2_0. Program summary URL:
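As a concrete instance of one diagnostic in the package, the largest Lyapunov exponent of an iterated map is the orbit average of ln|f'(x)|; for the logistic map this takes a few lines. The package performs such analyses in Scilab; Python is used here only for illustration.

    import math

    def lyapunov_logistic(r, x0=0.3, n_transient=1000, n_iter=100_000):
        """Largest Lyapunov exponent of x -> r*x*(1-x), estimated as the
        orbit average of ln|f'(x)| = ln|r*(1 - 2x)|."""
        x = x0
        for _ in range(n_transient):          # discard transient behavior
            x = r * x * (1.0 - x)
        acc = 0.0
        for _ in range(n_iter):
            x = r * x * (1.0 - x)
            acc += math.log(abs(r * (1.0 - 2.0 * x)))
        return acc / n_iter

    print(lyapunov_logistic(3.5))   # negative: periodic window
    print(lyapunov_logistic(4.0))   # ~ln 2: fully chaotic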
NASA Astrophysics Data System (ADS)
Daniluk, Andrzej
2010-03-01
Scientific computing is the field of study concerned with constructing mathematical models and numerical solution techniques, and with using computers to analyse and solve scientific and engineering problems. Model-Driven Development (MDD) has been proposed as a means to support the software development process through the use of a model-centric approach. This paper surveys the core MDD technology that was used to develop an application that computes RHEED intensities dynamically for a disordered surface. New version program summary: Program title: RHEED1DProcess. Catalogue identifier: ADUY_v4_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUY_v4_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 31 971. No. of bytes in distributed program, including test data, etc.: 3 039 820. Distribution format: tar.gz. Programming language: Embarcadero C++ Builder. Computer: Intel Core Duo-based PC. Operating system: Windows XP, Vista, 7. RAM: more than 1 GB. Classification: 4.3, 7.2, 6.2, 8, 14. Catalogue identifier of previous version: ADUY_v3_0. Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2394. Does the new version supersede the previous version?: No. Nature of problem: An application that implements numerical simulations should be constructed according to the CSFAR rules: clear and well-documented, simple, fast, accurate, and robust. A clearly written, externally and internally documented program is much easier to understand and modify. A simple program is much less prone to error and is more easily modified than one that is complicated. Simplicity and clarity also help make the program flexible. Making the program fast has economic benefits. It also allows flexibility, because some of the features that make a program efficient can be traded off for greater accuracy. Making the program fast also has the benefit of allowing longer calculations with better resolution. The compromise between speed and accuracy has always posed one of the most troublesome challenges for the programmer. Almost all advances in numerical analysis have come about in trying to reach these twin goals. Changes in the basic algorithms will give greater improvements in accuracy and speed than using special numerical tricks or changing programming language. A robust program works correctly over a broad spectrum of input data. Solution method: The computational model of the program is based on the use of a dynamical diffraction theory in which the electrons are taken to be diffracted by a potential which is periodic in the dimension perpendicular to the surface. In the case of a disordered surface we can use the proportional model of the scattering potential, in which the potential of a partially filled layer is taken to be the product of the coverage of this layer and the potential of a fully filled layer: U(θ,z) = Σ_n θ_n(t/τ) U_n(1,z), where U_n(1,z) stands for the potential of the fully filled nth layer, and U(θ,z) for the potential of the growing surface. Reasons for new version: Responding to user feedback, the RHEEDGr_09 program has been upgraded to a standard that allows carrying out computations of the RHEED intensities for a disordered surface. Also, the functionality and documentation of the program have been improved.
Summary of revisions: The logical structure of the Platform-Specific Model of the RHEEDGr_09 program has been modified according to the scheme shown in Fig. 1*. The class diagram in Fig. 1* is a static view of the main platform-specific elements of the RHEED1DProcess architecture. Fig. 2* provides a dynamic view by showing the simplistic creation and destruction sequence diagram for the process. Fig. 3* shows the RHEED1DProcess use case model. As can be seen in Figs. 2-3*, the RHEED1DProcess has been designed as a slave process that runs as a separate thread inside each transaction generated by the master Growth09 program (see pii:S0010-4655(09)00386-5, A. Daniluk, Model-Driven Development for scientific computing. Computations of RHEED intensities for a disordered surface. Part II). The RHEED1DProcess requires the user to provide the appropriate parameters for the crystal structure under investigation. These parameters are loaded from the parameters.ini file at run-time. Instructions on the preparation of the .ini files can be found in the new distribution. The RHEED1DProcess requires the user to provide the appropriate values of the layer coverage profiles. These values are loaded at run-time from the CoverageProfiles.dat file (generated by the Growth09 master application). The RHEED1DProcess enables carrying out one-dimensional dynamical calculations for the fcc lattice with a two-atom basis and the fcc lattice with a one-atom basis, but the zeroth Fourier component of the scattering potential in the TRHEED1D::crystPotUg() function can be modified according to users' specific application requirements. * The figures mentioned can be downloaded; see "Supplementary material" below. Unusual features: The program is distributed in the form of the main projects RHEED1DProcess.cbproj and Graph2D0x.cbproj with associated files, and should be compiled using Embarcadero RAD Studio 2010 along with the Together visual-modelling platform. The program should be compiled with English/USA regional and language options. Additional comments: This version of the RHEED program is designed to run in conjunction with the GROWTH09 (ADVL_v3_0) program. It does not replace the previous, stand-alone, RHEEDGR-09 (ADUY_v3_0) version. Running time: The typical running time is machine and user-parameters dependent. References: [1] OMG, Model Driven Architecture Guide Version 1.0.1, 2003.
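The proportional model quoted in the solution method weights each layer's full-layer potential by its instantaneous coverage. A minimal numerical rendering follows; the Gaussian layer profiles, interlayer spacing, and coverage values are invented for the example.

    import numpy as np

    def disordered_surface_potential(z, layer_potentials, coverages):
        """Proportional model U(theta, z) = sum_n theta_n * U_n(1, z):
        each partially filled layer contributes its full-layer potential
        profile U_n(1, z) weighted by its coverage theta_n in [0, 1]."""
        U = np.zeros_like(z)
        for theta_n, U_n in zip(coverages, layer_potentials):
            U += theta_n * U_n(z)
        return U

    z = np.linspace(0.0, 12.0, 200)            # distance normal to surface (arb. units)
    d = 2.0                                    # made-up interlayer spacing
    layers = [lambda z, n=n: np.exp(-(z - n * d) ** 2) for n in range(4)]
    theta = [1.0, 1.0, 0.6, 0.1]               # two full layers, two growing ones
    U = disordered_surface_potential(z, layers, theta)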
NASA Astrophysics Data System (ADS)
Cipolla, Sam J.
2011-11-01
In this new version of ISICS, called ISICS2011, a few omissions and incorrect entries in the built-in file of electron binding energies have been corrected; operational situations leading to un-physical behavior have been identified and flagged.
New version program summary
Program title: ISICS2011
Catalogue identifier: ADDS_v5_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADDS_v5_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 6011
No. of bytes in distributed program, including test data, etc.: 130 587
Distribution format: tar.gz
Programming language: C
Computer: 80486 or higher-level PCs
Operating system: WINDOWS XP and all earlier operating systems
Classification: 16.7
Catalogue identifier of previous version: ADDS_v4_0
Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 1716
Does the new version supersede the previous version?: Yes
Nature of problem: Ionization and X-ray production cross section calculations for ion-atom collisions.
Solution method: Numerical integration of the form factor using a logarithmic transform and Gaussian quadrature, plus exact integration limits.
Reasons for new version: General need for higher precision in the output format for projectile energies; some built-in binding energies needed correcting; some anomalous results occurred due to faulty read-in data or calculated parameters becoming un-physical; erroneous calculations could result for the L and M shells when restricted K-shell options are inadvertently chosen; and a need for general compatibility with ISICSoo, a companion C++ version portable to Linux and MacOS platforms, which has been submitted for publication in the CPC Program Library at approximately the same time as this new standalone version of ISICS [1].
Summary of revisions: The format field for projectile energies in the output has been expanded from two to four decimal places in order to distinguish between closely spaced energy values. A few entries in the executable binding energy file needed correcting: the K shell of Eu, the M shells of Zn, and the M1 shell of Kr. The corrected values were also entered in the ENERGY.DAT file. In addition, an alternate data file of binding energies is included, called ENERGY_GW.DAT, which is more up-to-date [2]. Likewise, an alternate atomic parameters data file is now included, called FLOURE_JC.DAT, which contains more up-to-date [3] fluorescence yields for the K and L shells and Coster-Kronig parameters for the L shell. Both data files can be read in using the -f usage option. To do this, the original energy file should be renamed and saved (e.g., ENERGY_BB.DAT) and the new file (ENERGY_GW.DAT) should be duplicated as ENERGY.DAT to be read in using the -f option. The same applies to reading in an alternate FLOURE.DAT file. As with previous versions, the user can also simply input different values of any input quantity by invoking the "specify your own parameters" option from the main menu. This option can also be used simply to check the built-in values of the parameters. If a zero binding energy for a particular sub-shell is nevertheless read in, the program will not abort completely, but will calculate results for the other sub-shells while setting the affected sub-shell output to zero.
In calculating the Coulomb deflection factor, if the quantity under the radical sign in the definition of the parameter z becomes zero or negative, the PWBA cross sections are still calculated while the ECPSSR cross sections are set to zero, to prevent the program from aborting. This situation can occur for very low energy collisions, such as were noticed for helium ions on copper at energies of E⩽11.2 keV. It was observed during the engineering of ISICSoo [1] that erroneous calculations could result for the L- and M-shell cases when the restricted K-shell R or HSR scaling options were inappropriately chosen. The program has now been fixed so that these inappropriate options are ignored for the L and M shells. In the previous versions, the usage for inputting a batch data file was incorrectly stated in the Users Manual as -Bxxx; the correct designation is -Fxxx, or alternatively -Ixxx, as indicated on the usage screen when running the program. A revised Users Manual is also available.
Restrictions: The consumed CPU time increases with the atomic shell (K, L, M), but execution is still very fast.
Running time: This depends on which shell and the number of different energies to be used in the calculation. The running time is not significantly changed from the previous version.
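The defensive behavior described above — keeping the PWBA value while zeroing the ECPSSR value when the radicand becomes unphysical — can be sketched briefly. This is an illustrative Python fragment, not the ISICS C source; the function and parameter names are hypothetical:

```python
import math

def ecpssr_with_guard(radicand, pwba_cross_section, coulomb_factor):
    """Illustrative guard (not the ISICS source): when the quantity under
    the radical in the Coulomb-deflection parameter z is zero or negative
    (e.g. very low-energy collisions), report the PWBA value but set the
    ECPSSR value to zero instead of aborting."""
    if radicand <= 0.0:
        return pwba_cross_section, 0.0        # flag the unphysical regime
    z = math.sqrt(radicand)
    ecpssr = pwba_cross_section * coulomb_factor(z)
    return pwba_cross_section, ecpssr
```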
Validation Results for LEWICE 2.0
NASA Technical Reports Server (NTRS)
Wright, William B.; Rutkowski, Adam
1999-01-01
A research project is underway at NASA Lewis to produce a computer code which can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report presents results from version 2.0 of this code, which is called LEWICE. This version differs from previous releases in its robustness and its ability to reproduce results accurately for different spacing and time step criteria across computing platforms. It also differs in the extensive effort undertaken to compare the results in a quantified manner against the database of ice shapes which have been generated in the NASA Lewis Icing Research Tunnel (IRT). The results of the shape comparisons are analyzed to determine the range of meteorological conditions under which LEWICE 2.0 is within the experimental repeatability. This comparison shows that the average variation of LEWICE 2.0 from the experimental data is 7.2%, while the overall variability of the experimental data is 2.5%.
Model Package Report: Hanford Soil Inventory Model SIM v.2 Build 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, Will E.; Zaher, U.; Mehta, S.
The Hanford Soil Inventory Model (SIM) is a tool for estimating the inventory of contaminants that were released to soil from liquid discharges during the U.S. Department of Energy's Hanford Site operations. This model package report documents the construction and development of a second version of SIM (SIM-v2) to support the needs of the Hanford Site Composite Analysis. SIM-v2 is implemented using GoldSim Pro® software with a new model architecture that preserves the uncertainty in inventory estimates while reducing the computational burden (compared to the previous version) and allowing more traceability and transparency in the calculation methodology. The calculation architecture is designed in such a manner that future updates to the waste stream composition, along with addition or deletion of waste sites, can be performed with relative ease. In addition, the new computational platform allows for continued hardware upgrades.
CARE 3, Version 4 enhancements
NASA Technical Reports Server (NTRS)
Bryant, L. A.; Stiffler, J. J.
1985-01-01
The enhancements and error corrections to CARE III Version 4 are listed. All changes to Version 4 with the exception of the internal redundancy model were implemented in Version 5. Version 4 is the first public release version for execution on the CDC Cyber 170 series computers. Version 5 is the second release version and it is written in ANSI standard FORTRAN 77 for execution on the DEC VAX 11/700 series computers and many others.
NASA Astrophysics Data System (ADS)
Stock, Joachim W.; Kitzmann, Daniel; Patzer, A. Beate C.; Sedlmayr, Erwin
2018-06-01
For the calculation of complex neutral/ionized gas phase chemical equilibria, we present a semi-analytical, versatile and efficient computer program called FastChem. The applied method is based on the solution of a system of coupled nonlinear (and linear) algebraic equations in many variables, namely the law of mass action and the element conservation equations including charge balance. Specifically, the system of equations is decomposed into a set of coupled nonlinear equations in one variable each, which are solved analytically whenever feasible to reduce computation time. Notably, the electron density is determined by using the method of Nelder and Mead at low temperatures. The program is written in object-oriented C++, which makes it easy to couple the code with other programs, although a stand-alone version is provided. FastChem can be used in parallel or sequentially and is available under the GNU General Public License version 3 at https://github.com/exoclime/FastChem together with several sample applications. The code has been successfully validated against previous studies and its convergence behavior has been tested even for extreme physical parameter ranges down to 100 K and up to 1000 bar. FastChem converges stably and robustly in even the most demanding chemical situations, which sometimes posed extreme challenges for previous algorithms.
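As a toy illustration of the kind of problem FastChem solves — mass action plus element conservation closed by charge balance, with the electron density found by Nelder-Mead — consider a single ionization equilibrium. The constants K and n_tot below are arbitrary placeholders, and the real code solves a far larger coupled system analytically wherever feasible:

```python
import numpy as np
from scipy.optimize import minimize

# Toy single-element ionization equilibrium (illustrative values only).
K, n_tot = 1.0e10, 1.0e12   # assumed mass-action constant, total H density

def charge_residual(log_ne):
    ne = 10.0 ** log_ne[0]
    # mass action: n(H+) n(e-) / n(H) = K,  conservation: n(H) + n(H+) = n_tot
    n_ion = K * n_tot / (ne + K)            # eliminate n(H) analytically
    return (n_ion - ne) ** 2                # charge balance: n(H+) = n(e-)

res = minimize(charge_residual, x0=[8.0], method="Nelder-Mead")
print("electron density ~ 10^%.2f" % res.x[0])
```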
New PDS will predict performance of pallets made with used parts
John W. Clarke; Marshall S. White; Philip A. Araman
2001-01-01
The Pallet Design System (PDS) is a computer design program developed by Virginia Tech, the National Wooden Pallet & Container Association, and the U.S. Forest Service to quickly and accurately predict the performance of new wood pallets. PDS has been upgraded annually since its original version in 1984. All of the previous upgrades, however, have continued to...
Comparison of LEWICE and GlennICE in the SLD Regime
NASA Technical Reports Server (NTRS)
Wright, William B.; Potapczuk, Mark G.; Levinson, Laurie H.
2008-01-01
A research project is underway at the NASA Glenn Research Center (GRC) to produce computer software that can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report will present results from two different computer programs. The first program, LEWICE version 3.2.2, has been reported on previously. The second program is GlennICE version 0.1. An extensive comparison of the results in a quantifiable manner against the database of ice shapes that have been generated in the GRC Icing Research Tunnel (IRT) has also been performed, including additional data taken to extend the database in the Super-cooled Large Drop (SLD) regime. This paper will show the differences in ice shape between LEWICE 3.2.2, GlennICE, and experimental data. This report will also provide a description of both programs. Comparisons are then made to recent additions to the SLD database and selected previous cases. Quantitative comparisons are shown for horn height, horn angle, icing limit, area, and leading edge thickness. The results show that the predicted results for both programs are within the accuracy limits of the experimental data for the majority of cases.
NASA Astrophysics Data System (ADS)
Rundle, J.; Rundle, P.; Donnellan, A.; Li, P.
2003-12-01
We consider the problem of the complex dynamics of earthquake fault systems, and whether numerical simulations can be used to define an ensemble forecasting technology similar to that used in weather and climate research. To effectively carry out such a program, we need 1) a topologically realistic model to simulate the fault system; 2) data sets to constrain the model parameters through a systematic program of data assimilation; 3) a computational technology making use of modern paradigms of high performance and parallel computing systems; and 4) software to visualize and analyze the results. In particular, we focus attention on a new version of our code Virtual California (version 2001) in which we model all of the major strike slip faults extending throughout California, from the Mexico-California border to the Mendocino Triple Junction. We use the historic data set of earthquakes of magnitude M > 6 to define the frictional properties of all 654 fault segments (degrees of freedom) in the model. Previous versions of Virtual California had used only 215 fault segments to model the strike slip faults in southern California. To compute the dynamics and the associated surface deformation, we use message passing as implemented in the MPICH standard distribution on a small Beowulf cluster consisting of 10 CPUs. We are also planning to run the code on significantly larger machines so that we can begin to examine much finer spatial scales of resolution, and to assess scaling properties of the code. We present results of simulations both as static images and as mpeg movies, so that the dynamical aspects of the computation can be assessed by the viewer. We also compute a variety of statistics from the simulations, including magnitude-frequency relations, and compare these with data from real fault systems.
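A magnitude-frequency comparison of the kind mentioned above can be sketched in a few lines. The following Python fragment computes cumulative Gutenberg-Richter statistics from a list of event magnitudes; the synthetic catalog and the b-value of 1 are illustrative assumptions, not Virtual California output:

```python
import numpy as np

def magnitude_frequency(magnitudes, bin_width=0.1):
    """Cumulative magnitude-frequency counts N(>=M), the statistic used
    to compare simulated catalogs with real fault-system data
    (Gutenberg-Richter: log10 N = a - b M)."""
    mags = np.sort(np.asarray(magnitudes))
    m_bins = np.arange(mags.min(), mags.max(), bin_width)
    n_cum = np.array([(mags >= m).sum() for m in m_bins])
    return m_bins, n_cum

# Hypothetical synthetic catalog with b ~ 1 for illustration.
rng = np.random.default_rng(0)
catalog = 6.0 + rng.exponential(1.0 / np.log(10), size=5000)
m, n = magnitude_frequency(catalog)
b_est = -np.polyfit(m, np.log10(n), 1)[0]   # should recover b ~ 1
```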
NASA Technical Reports Server (NTRS)
Perkey, D. J.; Kreitzberg, C. W.
1984-01-01
The dynamic prediction model, along with its macro-processor capability and data flow system, from the Drexel Limited-Area and Mesoscale Prediction System (LAMPS) was converted and recoded for the Perkin-Elmer 3220. The previous version of this model was written for the Control Data Corporation 7600 and CRAY-1a computer environment which existed until recently at the National Center for Atmospheric Research. The purpose of this conversion was to prepare LAMPS for porting to computer environments other than that encountered at NCAR. The emphasis was shifted from programming tasks to model simulation and evaluation tests.
Numerical evaluation of multi-loop integrals for arbitrary kinematics with SecDec 2.0
NASA Astrophysics Data System (ADS)
Borowka, Sophia; Carter, Jonathon; Heinrich, Gudrun
2013-02-01
We present the program SecDec 2.0, which contains various new features. First, it allows the numerical evaluation of multi-loop integrals with no restriction on the kinematics. Dimensionally regulated ultraviolet and infrared singularities are isolated via sector decomposition, while threshold singularities are handled by a deformation of the integration contour in the complex plane. As an application, we present numerical results for various massive two-loop four-point diagrams. SecDec 2.0 also contains new useful features for the calculation of more general parameter integrals, related for example to phase space integrals.
Program summary
Program title: SecDec 2.0
Catalogue identifier: AEIR_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIR_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 156 829
No. of bytes in distributed program, including test data, etc.: 2 137 907
Distribution format: tar.gz
Programming language: Wolfram Mathematica, Perl, Fortran/C++
Computer: From a single PC to a cluster, depending on the problem
Operating system: Unix, Linux
RAM: Depending on the complexity of the problem
Classification: 4.4, 5, 11.1
Catalogue identifier of previous version: AEIR_v1_0
Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 1566
Does the new version supersede the previous version?: Yes
Nature of problem: Extraction of ultraviolet and infrared singularities from parametric integrals appearing in higher order perturbative calculations in gauge theories. Numerical integration in the presence of integrable singularities (e.g., kinematic thresholds).
Solution method: Algebraic extraction of singularities in dimensional regularization using iterated sector decomposition. This leads to a Laurent series in the dimensional regularization parameter ɛ, where the coefficients are finite integrals over the unit hypercube. Those integrals are evaluated numerically by Monte Carlo integration. The integrable singularities are handled by choosing a suitable integration contour in the complex plane, in an automated way.
Reasons for new version: In the previous version the calculation of multi-scale integrals was restricted to the Euclidean region. Now multi-loop integrals with arbitrary physical kinematics can be evaluated. Another major improvement is the possibility of full parallelization.
Summary of revisions: No restriction on the kinematics for multi-loop integrals. The integrand can be constructed from the topological cuts of the diagram. Possibility of full parallelization. Numerical integration of multi-loop integrals written in C++ rather than Fortran. Possibility to loop over ranges of parameters.
Restrictions: Depending on the complexity of the problem, limited by memory and CPU time. The restriction that multi-scale integrals could only be evaluated at Euclidean points is superseded in version 2.0.
Running time: Between a few minutes and several days, depending on the complexity of the problem. Test runs provided take only seconds.
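The essence of the solution method — extracting poles in ɛ analytically so that the Laurent coefficients become finite integrals amenable to Monte Carlo — can be seen in a one-dimensional toy problem. The sketch below is illustrative only (SecDec performs the decomposition algebraically in Mathematica and integrates in C++); it subtracts the endpoint singularity of ∫₀¹ x^(ɛ-1) f(x) dx:

```python
import numpy as np

# One-dimensional toy of the subtraction underlying sector decomposition:
# int_0^1 x^(eps-1) f(x) dx = f(0)/eps + int_0^1 x^(eps-1) (f(x)-f(0)) dx,
# so the 1/eps pole is extracted analytically and the remaining
# coefficient is a finite integral, evaluated here by Monte Carlo.
f = lambda x: 1.0 / (1.0 + x)               # an arbitrary smooth test integrand

rng = np.random.default_rng(1)
x = rng.random(1_000_000)
x = x[x > 0]                                # guard against an exact zero draw
finite_part = np.mean((f(x) - f(0.0)) / x)  # eps -> 0 limit; here -> -ln 2
print("Laurent series: %.4f/eps + %.4f + O(eps)" % (f(0.0), finite_part))
```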
NEQAIRv14.0 Release Notes: Nonequilibrium and Equilibrium Radiative Transport Spectra Program
NASA Technical Reports Server (NTRS)
Brandis, Aaron Michael; Cruden, Brett A.
2014-01-01
NEQAIR v14.0 is the first parallelized version of NEQAIR. Starting from the last version of the code that went through the internal software release process at NASA Ames (NEQAIR 2008), there have been significant updates to the physics in the code and the computational efficiency. NEQAIR v14.0 supersedes NEQAIR v13.2, v13.1 and the suite of NEQAIR2009 versions. These updates have predominantly been performed by Brett Cruden and Aaron Brandis from ERC Inc at NASA Ames Research Center in 2013 and 2014. A new naming convention is being adopted with this current release. The current and future versions of the code will be named NEQAIR vY.X. The Y will refer to a major release increment. Minor revisions and update releases will involve incrementing X. This is to keep NEQAIR more in line with common software release practices. NEQAIR v14.0 is a standalone software tool for line-by-line spectral computation of radiative intensities and/or radiative heat flux, with one-dimensional transport of radiation. In order to accomplish this, NEQAIR v14.0, as in previous versions, requires the specification of distances (in cm), temperatures (in K) and number densities (in parts/cc) of constituent species along lines of sight. Therefore, it is assumed that flow quantities have been extracted from flow fields computed using other tools, such as CFD codes like DPLR or LAURA, and that lines of sight have been constructed and written out in the format required by NEQAIR v14.0. There are two principal modes for running NEQAIR v14.0. In the first mode NEQAIR v14.0 is used as a tool for creating synthetic spectra of any desired resolution (including convolution with a specified instrument/slit function). The first mode is typically exercised in simulating/interpreting spectroscopic measurements of different sources (e.g. shock tube data, plasma torches, etc.). In the second mode, NEQAIR v14.0 is used as a radiative heat flux prediction tool for flight projects. Correspondingly, NEQAIR has also been used to simulate the radiance measured on previous flight missions. This report summarizes the database updates, corrections that have been made to the code, changes to input files, parallelization, the current usage recommendations, including test cases, and an indication of the performance enhancements achieved.
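The one-dimensional transport underlying both run modes can be sketched schematically. The Python fragment below is not the NEQAIR implementation (which works line-by-line over a spectral grid); it simply integrates an emission profile attenuated by the optical depth accumulated along a line of sight:

```python
import numpy as np

def los_radiance(s, emission, absorption):
    """Schematic one-dimensional radiative transport along a line of
    sight: I = integral of eps(s) * exp(-tau(s)) ds, where tau(s) is the
    optical depth between position s and the observer.

    s          : (n,) path positions in cm, increasing toward the observer
    emission   : (n,) volumetric emission along the path
    absorption : (n,) absorption coefficient along the path (1/cm)
    """
    ds = np.gradient(s)
    # optical depth from each point to the observer end of the path
    tau = np.cumsum((absorption * ds)[::-1])[::-1]
    return np.sum(emission * np.exp(-tau) * ds)
```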
Ellingwood, Nathan D; Yin, Youbing; Smith, Matthew; Lin, Ching-Long
2016-04-01
Faster and more accurate methods for registration of images are important for research involved in conducting population-based studies that utilize medical imaging, as well as for improvements in clinical applications. We present a novel computation- and memory-efficient multi-level method on graphics processing units (GPU) for performing registration of two computed tomography (CT) volumetric lung images. We developed a computation- and memory-efficient Diffeomorphic Multi-level B-Spline Transform Composite (DMTC) method to implement nonrigid mass-preserving registration of two CT lung images on GPU. The framework consists of a hierarchy of B-Spline control grids of increasing resolution. A similarity criterion known as the sum of squared tissue volume difference (SSTVD) was adopted to preserve lung tissue mass. The use of SSTVD involves the calculation of the tissue volume, the Jacobian, and their derivatives, which makes its implementation on GPU challenging due to memory constraints. The DMTC method enabled reduced computation and memory storage of variables with minimal communication between the GPU and the Central Processing Unit (CPU), owing to the ability to pre-compute values. The method was assessed on six healthy human subjects. Resultant GPU-generated displacement fields were compared against the previously validated CPU counterpart fields, showing good agreement with an average normalized root mean square error (nRMS) of 0.044±0.015. Runtime and performance speedup are compared between single-threaded CPU, multi-threaded CPU, and GPU algorithms. The best performance speedup occurs at the highest resolution in the GPU implementation for the SSTVD cost and cost gradient computations, with a speedup of 112 times that of the single-threaded CPU version and 11 times over the twelve-threaded version when considering average time per iteration, using a Nvidia Tesla K20X GPU. The proposed GPU-based DMTC method outperforms its multi-threaded CPU version in terms of runtime. Total registration runtime was reduced to 2.9 min in the GPU version, compared to 12.8 min for the twelve-threaded CPU version and 112.5 min for a single-threaded CPU. Furthermore, the GPU implementation discussed in this work can be adapted for use with other cost functions that require calculation of the first derivatives. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
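A sketch may make the SSTVD cost concrete. The fragment below estimates tissue volume from CT Hounsfield units under assumed constants (HU_air = -1000, HU_tissue = 55); the paper's GPU implementation, its exact constants, and its derivative computations may differ:

```python
import numpy as np

def sstvd(hu_fixed, hu_warped, jacobian, voxel_vol,
          hu_air=-1000.0, hu_tissue=55.0):
    """Sketch of a sum-of-squared-tissue-volume-difference (SSTVD) cost.
    Tissue volume per voxel = voxel volume x tissue fraction estimated
    from the CT Hounsfield units; the Jacobian of the transform rescales
    the warped voxel volume so that tissue mass is preserved."""
    frac_f = (hu_fixed - hu_air) / (hu_tissue - hu_air)
    frac_w = (hu_warped - hu_air) / (hu_tissue - hu_air)
    v_f = voxel_vol * np.clip(frac_f, 0.0, 1.0)
    v_w = voxel_vol * jacobian * np.clip(frac_w, 0.0, 1.0)
    return np.sum((v_f - v_w) ** 2)
```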
Stuckless, J.S.; VanTrump, G.
1979-01-01
A revised version of the Graphic Normative Analysis Program (GNAP) has been developed to allow maximum flexibility in the evaluation of chemical data by the occasional computer user. GNAP calculates CIPW norms, Thornton and Tuttle's differentiation index, Barth's cations, Niggli values, and values for variables defined by the user. Calculated values can be displayed graphically in X-Y plots or ternary diagrams. Plotting can be done on a line printer or Calcomp plotter with either weight percent or mole percent data. Modifications in the original program give the user some control over normative calculations for each sample. The number of user-defined variables that can be created from the data has been increased from ten to fifteen. Plotting and calculations can be based on the original data, data adjusted to sum to 100 percent, or data adjusted to sum to 100 percent without water. Analyses for which norms were previously not computable are now computed, with footnotes that show excesses or deficiencies in oxides (or volatiles) not accounted for by the norm. This report contains a listing of the computer program, an explanation of the use of the program, and two sample problems.
RAID v2.0: an updated resource of RNA-associated interactions across organisms
Yi, Ying; Zhao, Yue; Li, Chunhua; Zhang, Lin; Huang, Huiying; Li, Yana; Liu, Lanlan; Hou, Ping; Cui, Tianyu; Tan, Puwen; Hu, Yongfei; Zhang, Ting; Huang, Yan; Li, Xiaobo; Yu, Jia; Wang, Dong
2017-01-01
With the development of biotechnologies and computational prediction algorithms, the number of experimentally determined and computationally predicted RNA-associated interactions has grown rapidly in recent years. However, diverse RNA-associated interactions are scattered over a wide variety of resources and organisms, and a fully comprehensive view of diverse RNA-associated interactions is still not available for any species. Hence, we have updated the RAID database to version 2.0 (RAID v2.0, www.rna-society.org/raid/) by integrating experimental and computationally predicted interactions from manual literature reading and other database resources under one common framework. The new developments in RAID v2.0 include (i) an over 850-fold increase in RNA-associated interactions compared with the previous version; (ii) numerous resources integrated with experimental or computational prediction evidence for each RNA-associated interaction; (iii) a reliability assessment for each RNA-associated interaction based on an integrative confidence score; and (iv) an increase of species coverage to 60. Consequently, RAID v2.0 recruits more than 5.27 million RNA-associated interactions, including more than 4 million RNA–RNA interactions and more than 1.2 million RNA–protein interactions, referring to nearly 130 000 RNA/protein symbols across 60 species. PMID:27899615
Verification and validation of RADMODL Version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimball, K.D.
1993-03-01
RADMODL is a system of linked computer codes designed to calculate the radiation environment following an accident in which nuclear materials are released. The RADMODL code and the corresponding Verification and Validation (V&V) calculations (Appendix A) were developed for Westinghouse Savannah River Company (WSRC) by EGS Corporation (EGS). Each module of RADMODL is an independent code and was verified separately. The full system was validated by comparing the output of the various modules with the corresponding output of a previously verified version of the modules. The results of the verification and validation tests show that RADMODL correctly calculates the transport of radionuclides and radiation doses. As a result of this verification and validation effort, RADMODL Version 1.0 is certified for use in calculating the radiation environment following an accident.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batista, Rafael Alves; Dundovic, Andrej; Sigl, Guenter
2016-05-01
We present the simulation framework CRPropa version 3, designed for efficient development of astrophysical predictions for ultra-high energy particles. Users can assemble modules for the most relevant propagation effects in galactic and extragalactic space, include their own physics modules with new features, and receive as output primary and secondary cosmic messengers including nuclei, neutrinos and photons. Extending the propagation physics contained in the previous CRPropa version, the new version facilitates high-performance computing and comprises new physical features such as an interface for galactic propagation using lensing techniques, an improved photonuclear interaction calculation, and propagation in time-dependent environments to take into account cosmic evolution effects in anisotropy studies and variable sources. First applications using highlighted features are presented as well.
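The modular design described above can be caricatured in a few lines. The sketch below is deliberately generic Python — it mimics the "assemble modules, add your own physics" idea rather than reproducing the actual CRPropa 3 API, and all class names and numbers are illustrative:

```python
# Minimal sketch of a modular propagation pipeline (not the CRPropa API):
# each module acts on a candidate particle once per step, and users can
# add their own physics modules with the same interface.
class Candidate:
    def __init__(self, energy_EeV, distance_Mpc=0.0):
        self.energy = energy_EeV
        self.distance = distance_Mpc
        self.active = True

class Propagation:
    step = 10.0                                 # Mpc advanced per step
    def process(self, c):
        c.distance += self.step

class AdiabaticLoss:                            # stand-in physics module
    def process(self, c):
        c.energy *= 0.99

class MinimumEnergy:                            # break condition module
    def __init__(self, e_min):
        self.e_min = e_min
    def process(self, c):
        if c.energy < self.e_min:
            c.active = False

modules = [Propagation(), AdiabaticLoss(), MinimumEnergy(1.0)]
c = Candidate(energy_EeV=100.0)
while c.active:
    for m in modules:
        m.process(c)
print("stopped after %.0f Mpc" % c.distance)
```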
A parallel solver for huge dense linear systems
NASA Astrophysics Data System (ADS)
Badia, J. M.; Movilla, J. L.; Climente, J. I.; Castillo, M.; Marqués, M.; Mayo, R.; Quintana-Ortí, E. S.; Planelles, J.
2011-11-01
HDSS (Huge Dense Linear System Solver) is a Fortran Application Programming Interface (API) that makes the parallel solution of very large dense systems available to scientists and engineers. The API makes use of parallelism to yield an efficient solution of the systems on a wide range of parallel platforms, from clusters of processors to massively parallel multiprocessors. It exploits out-of-core strategies that leverage secondary memory in order to solve huge linear systems of order O(100,000). The API is based on the parallel linear algebra library PLAPACK and on its Out-Of-Core (OOC) extension POOCLAPACK. Both PLAPACK and POOCLAPACK use the Message Passing Interface (MPI) as the communication layer and BLAS to perform the local matrix operations. The API provides a friendly interface to users, hiding almost all the technical aspects related to the parallel execution of the code and the use of secondary memory to solve the systems. In particular, the API can automatically select the best way to store and solve the systems, depending on the dimension of the system, the number of processes, and the main memory of the platform. Experimental results on several parallel platforms report high performance, reaching more than 1 TFLOP with 64 cores to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors.
New version program summary
Program title: Huge Dense System Solver (HDSS)
Catalogue identifier: AEHU_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHU_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 87 062
No. of bytes in distributed program, including test data, etc.: 1 069 110
Distribution format: tar.gz
Programming language: Fortran 90, C
Computer: Parallel architectures: multiprocessors, computer clusters
Operating system: Linux/Unix
Has the code been vectorized or parallelized?: Yes, includes MPI primitives
RAM: Tested for up to 190 GB
Classification: 6.5
External routines: MPI (http://www.mpi-forum.org/), BLAS (http://www.netlib.org/blas/), PLAPACK (http://www.cs.utexas.edu/~plapack/), POOCLAPACK (ftp://ftp.cs.utexas.edu/pub/rvdg/PLAPACK/pooclapack.ps) (code for PLAPACK and POOCLAPACK is included in the distribution)
Catalogue identifier of previous version: AEHU_v1_0
Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 533
Does the new version supersede the previous version?: Yes
Nature of problem: Huge scale dense systems of linear equations, Ax=B, beyond standard LAPACK capabilities.
Solution method: The linear systems are solved by means of parallelized routines based on the LU factorization, using efficient secondary storage algorithms when the available main memory is insufficient.
Reasons for new version: In many applications we need to guarantee high accuracy in the solution of very large linear systems, and we can do so by using double-precision arithmetic.
Summary of revisions: Version 1.1 can be used to solve linear systems using double-precision arithmetic. New version of the initialization routine: the user can choose the kind of arithmetic and the values of several parameters of the environment.
Running time: About 5 hours to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors using double-precision arithmetic on an eight-node commodity cluster with a total of 64 Intel cores.
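The core dense-solver operation that HDSS parallelizes and moves out of core can be illustrated in-core with standard tools. The sketch below uses SciPy as a stand-in (this is not the HDSS API, and the sizes are illustrative): it factors A once and reuses the LU factors for many right-hand sides, as in the solution method described above:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Solve A X = B for dense A via LU factorization with partial pivoting.
# HDSS targets orders ~100,000 with thousands of right-hand sides, far
# beyond what fits in main memory; this in-core toy shows the pattern.
rng = np.random.default_rng(0)
n, nrhs = 2000, 16
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, nrhs))

lu, piv = lu_factor(A)            # one factorization ...
X = lu_solve((lu, piv), B)        # ... reused for all right-hand sides
assert np.allclose(A @ X, B)
```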
Evaluation of SNS Beamline Shielding Configurations using MCNPX Accelerated by ADVANTG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Risner, Joel M; Johnson, Seth R.; Remec, Igor
2015-01-01
Shielding analyses for the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory pose significant computational challenges, including highly anisotropic high-energy sources, a combination of deep penetration shielding and an unshielded beamline, and a desire to obtain well-converged, nearly global solutions for mapping of predicted radiation fields. The majority of these analyses have been performed using MCNPX with manually generated variance reduction parameters (source biasing and cell-based splitting and Russian roulette) that were largely based on the analyst's insight into the problem specifics. Development of the variance reduction parameters required extensive analyst time, and was often tailored to specific portions of the model phase space. We previously applied a developmental version of the ADVANTG code to an SNS beamline study to perform a hybrid deterministic/Monte Carlo analysis and showed that we could obtain nearly global Monte Carlo solutions with essentially uniform relative errors for mesh tallies that cover extensive portions of the model with typical voxel spacing of a few centimeters. The use of weight window maps and consistent biased sources produced using the FW-CADIS methodology in ADVANTG allowed us to obtain these solutions using substantially less computer time than the previous cell-based splitting approach. While those results were promising, the process of using the developmental version of ADVANTG was somewhat laborious, requiring user-developed Python scripts to drive much of the analysis sequence. In addition, limitations imposed by the size of weight-window files in MCNPX necessitated the use of relatively coarse spatial and energy discretization for the deterministic Denovo calculations that we used to generate the variance reduction parameters. We recently applied the production version of ADVANTG to this beamline analysis, which substantially streamlined the analysis process. We also tested importance-function collapsing (in space and energy) capabilities in ADVANTG. These changes, along with the support for parallel Denovo calculations using the current version of ADVANTG, give us the capability to improve the fidelity of the deterministic portion of the hybrid analysis sequence, obtain improved weight-window maps, and reduce both the analyst and computational time required for the analysis process.
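The splitting and Russian roulette that weight-window maps drive can be summarized in a short sketch. The following Python fragment is a textbook illustration with hypothetical window bounds, not code from MCNPX or ADVANTG:

```python
import numpy as np

def apply_weight_window(weight, w_low, w_high, rng):
    """Schematic weight-window check of the kind FW-CADIS-generated maps
    drive: split particles above the window, play Russian roulette below
    it, and return a list of (count, new_weight) continuations."""
    if weight > w_high:                       # split into m comparable copies
        m = min(int(weight / w_high) + 1, 10)
        return [(m, weight / m)]
    if weight < w_low:                        # Russian roulette
        survival = weight / w_low
        if rng.random() < survival:
            return [(1, w_low)]               # survives with boosted weight
        return []                             # killed
    return [(1, weight)]                      # inside the window: unchanged

rng = np.random.default_rng(2)
print(apply_weight_window(0.01, w_low=0.1, w_high=1.0, rng=rng))
```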
Impact of data layouts on the efficiency of GPU-accelerated IDW interpolation.
Mei, Gang; Tian, Hong
2016-01-01
This paper focuses on evaluating the impact of different data layouts on the computational efficiency of the GPU-accelerated Inverse Distance Weighting (IDW) interpolation algorithm. First we redesign and improve our previous GPU implementation, which exploited the CUDA dynamic parallelism (CDP) feature. Then we implement three versions of GPU implementations, i.e., the naive version, the tiled version, and the improved CDP version, based upon five data layouts, including the Structure of Arrays (SoA), the Array of Structures (AoS), the Array of aligned Structures (AoaS), the Structure of Arrays of aligned Structures (SoAoS), and the Hybrid layout. We also carry out several groups of experimental tests to evaluate the impact. Experimental results show that the layouts AoS and AoaS achieve better performance than the layout SoA for both the naive version and the tiled version, while the layout SoA is the best choice for the improved CDP version. We also observe that for the two combined data layouts (the SoAoS and the Hybrid), there are no notable performance gains when compared to the other three basic layouts. We recommend that in practical applications, the layout AoaS is the best choice, since the tiled version is the fastest of the three versions. The source code of all implementations is publicly available.
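For reference, the IDW formula common to all of the GPU variants compared above is simple to state. The compact NumPy sketch below (illustrative points and query; the paper's implementations are CUDA kernels over the different layouts) computes z(q) = Σ w_i z_i / Σ w_i with w_i = d(q, p_i)^(-p):

```python
import numpy as np

def idw(pts_known, z_known, pts_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: each query value is the distance-
    weighted average of the known values, with weights 1/d^power."""
    d = np.linalg.norm(pts_query[:, None, :] - pts_known[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)              # eps guards exact coincidences
    return (w @ z_known) / w.sum(axis=1)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
z = np.array([1.0, 2.0, 3.0])
print(idw(pts, z, np.array([[0.5, 0.5]])))
```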
Solving large-scale dynamic systems using band Lanczos method in Rockwell NASTRAN on CRAY X-MP
NASA Technical Reports Server (NTRS)
Gupta, V. K.; Zillmer, S. D.; Allison, R. E.
1986-01-01
Improved cost effectiveness from better models, more accurate and faster algorithms, and large-scale computing offers more representative dynamic analyses. The band Lanczos eigen-solution method was implemented in Rockwell's version of the 1984 COSMIC-released NASTRAN finite element structural analysis computer program to effectively solve for structural vibration modes, including those of large complex systems exceeding 10,000 degrees of freedom. The Lanczos vectors were re-orthogonalized locally using the Lanczos method and globally using the modified Gram-Schmidt method for sweeping out rigid-body modes and previously generated modes and Lanczos vectors. The truncated band matrix was solved for vibration frequencies and mode shapes using Givens rotations. Numerical examples are included to demonstrate the cost effectiveness and accuracy of the method as implemented in Rockwell NASTRAN. The CRAY version is based on RPK's COSMIC/NASTRAN. The band Lanczos method was more reliable and accurate, and converged faster, than the single-vector Lanczos method. The band Lanczos method was comparable to the subspace iteration method, which was a block version of the inverse power method. However, the subspace matrix tended to be fully populated in the case of subspace iteration and not as sparse as a band matrix.
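The single-vector Lanczos iteration with full re-orthogonalization — the scalar counterpart of the band/block scheme described above — fits in a short sketch. The Python fragment below is illustrative only (a diagonal test matrix and dense orthogonalization), not the NASTRAN implementation:

```python
import numpy as np

def lanczos(A, k, seed=3):
    """Single-vector Lanczos with full Gram-Schmidt re-orthogonalization:
    build a tridiagonal matrix T whose eigenvalues (Ritz values)
    approximate the extreme eigenvalues of the symmetric matrix A."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Q = np.zeros((n, k + 1))
    alpha, beta = np.zeros(k), np.zeros(k)
    q = rng.standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)   # re-orthogonalize
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:                          # invariant subspace found
            break
        Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return np.linalg.eigvalsh(T)

A = np.diag(np.arange(1.0, 101.0))       # symmetric test matrix
print(lanczos(A, 30)[-3:])               # largest Ritz values ~ 98, 99, 100
```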
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alkhatib, H; Oves, S
Purpose: To demonstrate a quick and comprehensive method of verifying the accuracy of an updated dose model by recalculating the dose distribution in an anthropomorphic phantom with a new version of the TPS and comparing the results to measured values. Methods: CT images and the IMRT plan of an RPC anthropomorphic head phantom, previously calculated with Pinnacle 9.0, were re-computed using Pinnacle 9.2 and 9.6. The dosimeters within the phantom include four TLD capsules representing a primary PTV, two TLD capsules representing a secondary PTV, and two TLD capsules representing an organ at risk. Also included were three sheets of Gafchromic film. Performance of the updated TPS version was assessed by recalculating point doses and dose profiles corresponding to the TLD and film positions, respectively, and then comparing the results to the values reported by the RPC. Results: Comparing calculated doses to the measured doses reported by the RPC yielded an average disagreement of 1.48%, 2.04% and 2.10% for versions 9.0, 9.2 and 9.6, respectively. Computed dose points all meet the RPC's passing criteria, with the exception of the point representing the superior organ at risk in version 9.6. However, qualitative analysis of the recalculated dose profiles showed improved agreement with those of the RPC, especially in the penumbra region. Conclusion: This work has compared the calculation results of Pinnacle 9.2 and 9.6 against version 9.0. Additionally, this study illustrates a method for the user to gain confidence when upgrading to a newer version of the treatment planning system.
Design of a modular digital computer system, CDRL no. D001, final design plan
NASA Technical Reports Server (NTRS)
Easton, R. A.
1975-01-01
The engineering breadboard implementation for the CDRL no. D001 modular digital computer system developed during design of the logic system was documented. This effort followed the architecture study completed and documented previously, and was intended to verify the concepts of a fault tolerant, automatically reconfigurable, modular version of the computer system conceived during the architecture study. The system has a microprogrammed 32 bit word length, general register architecture and an instruction set consisting of a subset of the IBM System 360 instruction set plus additional fault tolerance firmware. The following areas were covered: breadboard packaging, central control element, central processing element, memory, input/output processor, and maintenance/status panel and electronics.
Koho, P; Aho, S; Kautiainen, H; Pohjolainen, T; Hurri, H
2014-12-01
To estimate the internal consistency, test-retest reliability and comparability of paper and computer versions of the Finnish version of the Tampa Scale of Kinesiophobia (TSK-FIN) among patients with chronic pain. In addition, patients' personal experiences of completing both versions of the TSK-FIN and preferences between these two methods of data collection were studied. Test-retest reliability study. Paper and computer versions of the TSK-FIN were completed twice on two consecutive days. The sample comprised 94 consecutive patients with chronic musculoskeletal pain participating in a pain management or individual rehabilitation programme. The group rehabilitation design consisted of physical and functional exercises, evaluation of the social situation, psychological assessment of pain-related stress factors, and personal pain management training in order to regain overall function and mitigate the inconvenience of pain and fear-avoidance behaviour. The mean TSK-FIN score was 37.1 [standard deviation (SD) 8.1] for the computer version and 35.3 (SD 7.9) for the paper version. The mean difference between the two versions was 1.9 (95% confidence interval 0.8 to 2.9). Test-retest reliability was 0.89 for the paper version and 0.88 for the computer version. Internal consistency was considered to be good for both versions. The intraclass correlation coefficient for comparability was 0.77 (95% confidence interval 0.66 to 0.85), indicating substantial reliability between the two methods. Both versions of the TSK-FIN demonstrated substantial intertest reliability, good test-retest reliability, good internal consistency and acceptable limits of agreement, suggesting their suitability for clinical use. However, subjects tended to score higher when using the computer version. As such, in an ideal situation, data should be collected in a similar manner throughout the course of rehabilitation or clinical research. Copyright © 2014 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hung, Linda; Huang, Chen; Shin, Ilgyou; Ho, Gregory S.; Lignères, Vincent L.; Carter, Emily A.
2010-12-01
Orbital-free density functional theory (OFDFT) is a first principles quantum mechanics method to find the ground-state energy of a system by variationally minimizing with respect to the electron density. No orbitals are used in the evaluation of the kinetic energy (unlike Kohn-Sham DFT), and the method scales nearly linearly with the size of the system. The PRinceton Orbital-Free Electronic Structure Software (PROFESS) uses OFDFT to model materials from the atomic scale to the mesoscale. This new version of PROFESS allows the study of larger systems with two significant changes: PROFESS is now parallelized, and the ion-electron and ion-ion terms scale quasilinearly, instead of quadratically as in PROFESS v1 (L. Hung and E.A. Carter, Chem. Phys. Lett. 475 (2009) 163). At the start of a run, PROFESS reads the various input files that describe the geometry of the system (ion positions and cell dimensions), the type of elements (defined by electron-ion pseudopotentials), the actions you want it to perform (minimize with respect to electron density and/or ion positions and/or cell lattice vectors), and the various options for the computation (such as which functionals you want it to use). Based on these inputs, PROFESS sets up a computation and performs the appropriate optimizations. Energies, forces, stresses, material geometries, and electron density configurations are some of the values that can be output throughout the optimization.
New version program summary
Program title: PROFESS
Catalogue identifier: AEBN_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBN_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 68 721
No. of bytes in distributed program, including test data, etc.: 1 708 547
Distribution format: tar.gz
Programming language: Fortran 90
Computer: Intel with ifort; AMD Opteron with pathf90
Operating system: Linux
Has the code been vectorized or parallelized?: Yes. Parallelization is implemented through domain decomposition using MPI.
RAM: Problem dependent, but 2 GB is sufficient for up to 10,000 ions
Classification: 7.3
External routines: FFTW 2.1.5 (http://www.fftw.org)
Catalogue identifier of previous version: AEBN_v1_0
Journal reference of previous version: Comput. Phys. Comm. 179 (2008) 839
Does the new version supersede the previous version?: Yes
Nature of problem: Given a set of coordinates describing the initial ion positions under periodic boundary conditions, recovers the ground state energy, electron density, ion positions, and cell lattice vectors predicted by orbital-free density functional theory. The computation of all terms is effectively linear scaling. Parallelization is implemented through domain decomposition, and up to ~10,000 ions may be included in the calculation on just a single processor, limited by RAM. For example, when optimizing the geometry of ~50,000 aluminum ions (plus vacuum) on 48 cores, a single iteration of conjugate gradient ion geometry optimization takes ~40 minutes wall time. However, each CG geometry step requires two or more electron density optimizations, so step times will vary.
Solution method: Computes energies as described in the text; minimizes this energy with respect to the electron density, ion positions, and cell lattice vectors.
Reasons for new version: To allow much larger systems to be simulated using PROFESS.
Restrictions: PROFESS cannot use nonlocal (such as ultrasoft) pseudopotentials. A variety of local pseudopotential files are available at the Carter group website ( http://www.princeton.edu/mae/people/faculty/carter/homepage/research/localpseudopotentials/). Also, due to the current state of the kinetic energy functionals, PROFESS is only reliable for main group metals and some properties of semiconductors. Running time: Problem dependent: the test example provided with the code takes less than a second to run. Timing results for large scale problems are given in the PROFESS paper and Ref. [1].
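The variational idea at the heart of OFDFT can be condensed into a toy example. The sketch below is one-dimensional, uses only a Thomas-Fermi kinetic term and an assumed harmonic external potential, and omits Hartree and exchange-correlation terms; PROFESS's functionals, dimensionality, and optimizers are far more sophisticated:

```python
import numpy as np
from scipy.optimize import minimize

# Toy orbital-free DFT: variationally minimize a density functional
# E[rho] = integral( c_tf * rho^(5/3) + v_ext * rho ) with the density
# parametrized as chi^2 and normalized to N electrons.
x = np.linspace(-5, 5, 201); dx = x[1] - x[0]
v_ext = 0.5 * x**2                       # assumed harmonic external potential
N = 2.0                                  # electron number
c_tf = 0.3 * (3 * np.pi**2) ** (2 / 3)   # 3D TF constant, used schematically

def energy(chi):
    rho = chi**2
    rho *= N / (rho.sum() * dx)          # enforce the normalization constraint
    return dx * np.sum(c_tf * rho**(5 / 3) + v_ext * rho)

res = minimize(energy, x0=np.exp(-x**2), method="L-BFGS-B")
print("ground-state energy estimate:", res.fun)
```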
Simulation of n-qubit quantum systems. V. Quantum measurements
NASA Astrophysics Data System (ADS)
Radtke, T.; Fritzsche, S.
2010-02-01
The FEYNMAN program has been developed during the last years to support case studies on the dynamics and entanglement of n-qubit quantum registers. Apart from basic transformations and (gate) operations, it currently supports a good number of separability criteria and entanglement measures, quantum channels, as well as the parametrizations of various frequently applied objects in quantum information theory, such as (pure and mixed) quantum states, hermitian and unitary matrices or classical probability distributions. With the present update of the FEYNMAN program, we provide simple access to (the simulation of) quantum measurements. This includes not only the widely applied projective measurements upon the eigenspaces of some given operator, but also single-qubit measurements in various pre- and user-defined bases, as well as support for two-qubit Bell measurements. In addition, we support generalized and POVM measurements. Knowing the importance of measurements for many quantum information protocols, e.g., one-way computing, we hope that this update makes the FEYNMAN code an attractive and versatile tool for both research and education.
New version program summary
Program title: FEYNMAN
Catalogue identifier: ADWE_v5_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWE_v5_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 27 210
No. of bytes in distributed program, including test data, etc.: 1 960 471
Distribution format: tar.gz
Programming language: Maple 12
Computer: Any computer with Maple software installed
Operating system: Any system that supports Maple; the program has been tested under Microsoft Windows XP and Linux
Classification: 4.15
Catalogue identifier of previous version: ADWE_v4_0
Journal reference of previous version: Comput. Phys. Commun. 179 (2008) 647
Does the new version supersede the previous version?: Yes
Nature of problem: During the last decade, the field of quantum information science has largely contributed to our understanding of quantum mechanics, and has also provided new and efficient protocols based on quantum entanglement. To further analyze the amount and transfer of entanglement in n-qubit quantum protocols, symbolic and numerical simulations need to be handled efficiently.
Solution method: Using the computer algebra system Maple, we developed a set of procedures in order to support the definition, manipulation and analysis of n-qubit quantum registers. These procedures also help to deal with (unitary) logic gates and (nonunitary) quantum operations and measurements that act upon the quantum registers. All commands are organized in a hierarchical order and can be used interactively in order to simulate and analyze the evolution of n-qubit quantum systems, both in ideal and noisy quantum circuits.
Reasons for new version: Until the present, the FEYNMAN program supported the basic data structures and operations of n-qubit quantum registers [1], a good number of separability and entanglement measures [2], quantum operations (noisy channels) [3] as well as the parametrizations of various frequently applied objects, such as (pure and mixed) quantum states, hermitian and unitary matrices or classical probability distributions [4].
With the current extension, we add all the necessary features to simulate quantum measurements, including projective measurements in various single-qubit bases and the two-qubit Bell basis, as well as POVM measurements. Together with the previously implemented functionality, this greatly enhances the possibilities of analyzing quantum information protocols in which measurements play a central role, e.g., one-way computation.
Running time: Most commands require ⩽10 seconds of processor time on a Pentium 4 processor (⩾2 GHz) or newer, if they work with quantum registers of five or fewer qubits. Moreover, about 5-20 MB of working memory is typically needed (in addition to the memory for the Maple environment itself). However, especially when working with symbolic expressions, the requirements on CPU time and memory depend critically on the size of the quantum registers, owing to the exponential growth of the dimension of the associated Hilbert space. For example, complex (symbolic) noise models, i.e. with several Kraus operators, may result in very large expressions that dramatically slow down the evaluation of, e.g., distance measures or the final-state entropy. In these cases, Maple's assume facility sometimes helps to reduce the complexity of the symbolic expressions, but more often than not only a numerical evaluation is feasible. Since the various commands can be applied to quite different scenarios, no general scaling rule can be given for the CPU time or memory requirements.
References:
[1] T. Radtke, S. Fritzsche, Comput. Phys. Commun. 173 (2005) 91.
[2] T. Radtke, S. Fritzsche, Comput. Phys. Commun. 175 (2006) 145.
[3] T. Radtke, S. Fritzsche, Comput. Phys. Commun. 176 (2007) 617.
[4] T. Radtke, S. Fritzsche, Comput. Phys. Commun. 179 (2008) 647.
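A projective measurement of the kind the update simulates reduces to computing outcome probabilities from the projected state and renormalizing. The NumPy sketch below is illustrative (FEYNMAN itself is a Maple package); it measures one qubit of an n-qubit pure state in the computational basis:

```python
import numpy as np

def measure_qubit(psi, k, n, seed=4):
    """Projective measurement of qubit k (0 = leftmost) in the
    computational basis on an n-qubit pure state: compute the outcome
    probabilities p_i = <psi|P_i|psi>, draw an outcome, and renormalize
    the projected state."""
    rng = np.random.default_rng(seed)
    psi = psi.reshape([2] * n)
    p0 = np.sum(np.abs(np.take(psi, 0, axis=k)) ** 2)
    outcome = 0 if rng.random() < p0 else 1
    proj = np.zeros_like(psi)
    idx = [slice(None)] * n
    idx[k] = outcome
    proj[tuple(idx)] = psi[tuple(idx)]       # keep only the measured branch
    prob = p0 if outcome == 0 else 1.0 - p0
    return outcome, (proj / np.sqrt(prob)).reshape(-1)

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
out, post = measure_qubit(bell, k=0, n=2)
print(out, post)                             # outcomes 0/1, each with p = 1/2
```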
Theiler, R; Spielberger, J; Bischoff, H A; Bellamy, N; Huber, J; Kroesen, S
2002-06-01
The Western Ontario and McMaster Universities (WOMAC) Osteoarthritis Index is a previously described self-administered questionnaire covering three domains: pain, stiffness and function. It has been validated in patients with osteoarthritis (OA) of the hip or knee in a paper-based format. The aim was to validate the WOMAC 3.0 using a numerical rating scale in a computerized touch-screen format allowing immediate evaluation of the questionnaire. In the computerized version, cartoons and written and audio instructions were included in order to facilitate application. Fifty patients, demographically balanced, with radiographically proven primary hip or knee OA completed the classical paper version and the new computerized WOMAC version. Subjects were randomized either to the paper format or the computerized format first, to balance possible order effects. The intra-class correlation coefficients for pain, stiffness and function values were 0.915, 0.745 and 0.940, respectively. The Spearman correlation coefficients for pain, stiffness and function were 0.88, 0.77 and 0.87, respectively. These data indicate that the computerized WOMAC OA Index 3.0 is comparable to the paper WOMAC in all three dimensions. The computerized version would allow physicians to obtain an immediate result and, where a previous exam exists, a direct comparison with it. Copyright 2002 OsteoArthritis Research Society International. Published by Elsevier Science Ltd. All rights reserved.
Excoffier, Laurent; Lischer, Heidi E L
2010-05-01
We present here a new version of the Arlequin program, available in three different forms: a Windows graphical version (Winarl35), a console version of Arlequin (arlecore), and a specific console version to compute summary statistics (arlsumstat). The command-line versions run under both Linux and Windows. The main innovations of the new version include enhanced outputs in XML format, the possibility to embed graphics displaying computation results directly into output files, and the implementation of a new method to detect loci under selection from genome scans. Command-line versions are designed to handle large series of files, and arlsumstat can be used to generate summary statistics from simulated data sets within an Approximate Bayesian Computation framework. © 2010 Blackwell Publishing Ltd.
RIS3: A program for relativistic isotope shift calculations
NASA Astrophysics Data System (ADS)
Nazé, C.; Gaidamauskas, E.; Gaigalas, G.; Godefroid, M.; Jönsson, P.
2013-09-01
An atomic spectral line is characteristic of the element producing the spectrum. The line also depends on the isotope. The program RIS3 (Relativistic Isotope Shift) calculates the electron density at the origin and the normal and specific mass shift parameters. Combining these electronic quantities with available nuclear data, isotope-dependent energy level shifts are determined.
Program summary
Program title: RIS3
Catalogue identifier: ADEK_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADEK_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 5147
No. of bytes in distributed program, including test data, etc.: 32 869
Distribution format: tar.gz
Programming language: Fortran 77
Computer: HP ProLiant BL465c G7 CTO
Operating system: Centos 5.5, a Linux distribution compatible with Red Hat Enterprise Advanced Server
Classification: 2.1
Catalogue identifier of previous version: ADEK_v1_0
Journal reference of previous version: Comput. Phys. Comm. 100 (1997) 81
Subprograms used: ADZL_v1_1, GRASP2K version 1_1 (to be published)
Does the new version supersede the previous version?: Yes
Nature of problem: Prediction of level and transition isotope shifts in atoms using four-component relativistic wave functions.
Solution method: The nuclear motion and volume effects are treated in first order perturbation theory. Taking the zero-order wave function as a configuration state expansion $|\Psi\rangle = \sum_\mu c_\mu |\Phi(\gamma_\mu P J M_J)\rangle$, where $P$, $J$ and $M_J$ are, respectively, the parity and angular momentum quantum numbers, the electron density at the nucleus and the normal and specific mass shift parameters may generally be expressed as $\sum_{\mu\nu} c_\mu c_\nu \langle \Phi(\gamma_\mu P J M_J)|V|\Phi(\gamma_\nu P J M_J)\rangle$, where $V$ is the relevant operator. The matrix elements, in turn, can be expressed as sums over radial integrals multiplied by angular coefficients. All the angular coefficients are calculated using routines from the GRASP2K version 1_1 package [1].
Reasons for new version: This new version takes the nuclear recoil corrections into account within the $m^2/M$ approximation [2] and also allows storage of the angular coefficients for a series of calculations within a given isoelectronic sequence. Furthermore, the program JJ2LSJ, a module of the GRASP2K version 1_1 toolkit that transforms ASFs from a jj-coupled CSF basis into an LSJ-coupled CSF basis, has been especially adapted to present RIS3 results using LSJ labels of the states. This additional tool is called RIS3_LSJ.
Summary of revisions: This version is compatible with the new angular approach of the GRASP2K version 1_1 package [1] and can store the necessary angular coefficients. Following the formalism of relativistic nuclear recoil, the "uncorrected" expression of the normal mass shift has been fundamentally modified compared with its expression in [3].
Restrictions: The complexity of the cases that can be handled is entirely determined by the GRASP2K package [1] used for the generation of the electronic wave functions.
Unusual features: Angular data is stored on disk and can be reused. LSJ labels are used for the states.
Running time: As an example, we evaluated the isotope shift parameters and the electron density at the origin using the wave functions of a Be-like system.
We used the MCDHF wave function built on a complete active space (CAS) with n=8 (296 626 CSFs, 62 orbitals) that contains 3 non-interacting blocks of given parity and J values, involving 6 different eigenvalues in total. Calculations take around 10 h on one AMD Opteron 6100 @ 2.3 GHz CPU with 8 cores (64 GB DDR3 RAM at 1.333 GHz). If angular files are available, the time is reduced to 20 min. The storage of the angular data takes 139 MB and 7.2 GB for the one-body and the two-body elements, respectively.
References:
[1] P. Jönsson, G. Gaigalas, J. Bieroń, C. Froese Fischer, I.P. Grant, New version: GRASP2K relativistic atomic structure package, Comput. Phys. Commun. 184 (9) (2013) 2197-2203.
[2] E. Gaidamauskas, C. Nazé, P. Rynkun, G. Gaigalas, P. Jönsson, M. Godefroid, J. Phys. B: At. Mol. Opt. Phys. 44 (17) (2011) 175003.
[3] P. Jönsson, C. Froese Fischer, Comput. Phys. Commun. 100 (1997) 81-92.
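The quadratic form quoted in the solution method is worth spelling out. The sketch below (placeholder mixing coefficients and matrix elements, not RIS3 output) evaluates an isotope-shift parameter as c^T V c over a small CSF basis:

```python
import numpy as np

# Sketch of the CSF-expansion expectation value: with mixing coefficients
# c and matrix elements V[mu, nu] between configuration state functions,
# the electronic parameter is <Psi|V|Psi> = c^T V c. Values are
# illustrative placeholders.
c = np.array([0.98, 0.17, 0.08])
c /= np.linalg.norm(c)                 # normalize the expansion
V = np.array([[1.20, 0.05, 0.01],
              [0.05, 0.90, 0.02],
              [0.01, 0.02, 0.70]])     # symmetric operator matrix
param = c @ V @ c
print("<Psi|V|Psi> =", param)
```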
Modeling and Grid Generation of Iced Airfoils
NASA Technical Reports Server (NTRS)
Vickerman, Mary B.; Baez, Marivell; Braun, Donald C.; Hackenberg, Anthony W.; Pennline, James A.; Schilling, Herbert W.
2007-01-01
SmaggIce Version 2.0 is a software toolkit for geometric modeling and grid generation for two-dimensional, single- and multi-element, clean and iced airfoils. A previous version of SmaggIce was described in "Preparing and Analyzing Iced Airfoils," NASA Tech Briefs, Vol. 28, No. 8 (August 2004), page 32. To recapitulate: Ice shapes make it difficult to generate quality grids around airfoils, yet these grids are essential for predicting ice-induced complex flow. This software efficiently creates high-quality structured grids with tools that are uniquely tailored for various ice shapes. SmaggIce Version 2.0 significantly enhances the previous version, primarily by adding the capability to generate grids for multi-element airfoils. This version of the software is an important step in streamlining the aeronautical analysis of iced airfoils using computational fluid dynamics (CFD) tools. The user may prepare the ice shape, define the flow domain, decompose it into blocks, generate grids, modify/divide/merge blocks, and control grid density and smoothness. All these steps may be performed efficiently even for the difficult glaze and rime ice shapes. To provide the means to generate highly controlled grids near rough ice, the software includes the creation of a wrap-around block (called the "viscous sublayer block"), which is a thin, C-type block around the wake line and iced airfoil. For multi-element airfoils, the software makes use of grids that wrap around and fill in the areas between the viscous sublayer blocks of all elements that make up the airfoil. A scripting feature records the history of interactive steps, which can be edited and replayed later to produce other grids. Using this version of SmaggIce, ice-shape handling and grid generation can become a practical engineering process rather than a laborious research effort.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrington, David Bradley; Waters, Jiajia
KIVA-hpFE is high-performance computer software for solving the physics of multi-species and multiphase turbulent reactive flow in complex geometries having immersed moving parts. The code is written in Fortran 90/95 and can be used on any computer platform with any popular compiler. The code comes in two versions: a serial version and a parallel version that uses an MPICH2-type Message Passing Interface (MPI or Intel MPI) for solving distributed domains. The parallel version is at least 30x faster than the serial version, and faster than our previous generation of parallel engine modeling software by a large factor. The 5th-generation algorithm construction is a Galerkin-type Finite Element Method (FEM) solving conservative momentum, species, and energy transport equations along with a two-equation k-ω Reynolds-Averaged Navier-Stokes (RANS) turbulence model and a Vreman-type dynamic Large Eddy Simulation (LES) method. The LES method is capable of modeling transitional flow from laminar to fully turbulent, and therefore does not require special hybrid or blending treatment near walls. The FEM projection method also uses Petrov-Galerkin (P-G) stabilization along with pressure stabilization. We employ hierarchical basis sets, constructed on the fly, with enrichment in areas of relatively larger error as determined by error estimation methods. In addition, when not using the hp-adaptive module, the code employs Lagrangian basis or shape functions. The shape functions are constructed for hexahedral, prismatic, and tetrahedral elements. The software is designed to solve many types of reactive flow problems, from burners to internal combustion engines and turbines. In addition, the formulation allows for direct integration of solid bodies (conjugate heat transfer), such as heat transfer through housings, parts, and cylinders. It can also easily be extended to stress modeling of solids, as used in fluid-structure interaction problems, solidification, porous-media modeling, and magnetohydrodynamics.
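For orientation, a generic incremental pressure-projection step of the kind the FEM projection method above refers to can be written as follows (a textbook sketch, not KIVA-hpFE's actual discrete, P-G-stabilized formulation):

```latex
\[
  \frac{\mathbf{u}^{*}-\mathbf{u}^{n}}{\Delta t}
    + (\mathbf{u}^{n}\cdot\nabla)\,\mathbf{u}^{n}
    = \nu\,\nabla^{2}\mathbf{u}^{*},
  \qquad
  \nabla^{2} p^{\,n+1} = \frac{\rho}{\Delta t}\,\nabla\cdot\mathbf{u}^{*},
  \qquad
  \mathbf{u}^{n+1} = \mathbf{u}^{*} - \frac{\Delta t}{\rho}\,\nabla p^{\,n+1}.
\]
% Predict an intermediate velocity u*, solve a pressure Poisson equation,
% then project u* onto the divergence-free space.
```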
NASA Astrophysics Data System (ADS)
Sanna, N.; Baccarelli, I.; Morelli, G.
2009-12-01
SCELib is a computer program which implements the Single Center Expansion (SCE) method to describe molecular electronic densities and the interaction potentials between a charged projectile (electron or positron) and a target molecular system. The first version (CPC catalogue identifier ADMG_v1_0) was submitted to the CPC Program Library in 2000, and version 2.0 (ADMG_v2_0) was submitted in 2004. We here announce the new release 3.0, which presents additional features with respect to the previous versions, aimed at significantly enhancing its capability to deal with larger molecular systems. SCELib 3.0 allows for ab initio effective core potential (ECP) calculations of the molecular wavefunctions to be used in the SCE method, in addition to the standard all-electron description of the molecule. The list of supported architectures has been updated, the code has been ported to platforms based on accelerating coprocessors such as the NVIDIA GPGPU, and the new parallel model adopted is able to run efficiently on a mixed many-core computing system.
Program summary
Program title: SCELib3.0
Catalogue identifier: ADMG_v3_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADMG_v3_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 2 018 862
No. of bytes in distributed program, including test data, etc.: 4 955 014
Distribution format: tar.gz
Programming language: C
Compilers used: xlc V8.x, Intel C V10.x, Portland Group V7.x, nvcc V2.x
Computer: All SMP platforms based on AIX, Linux and SUNOS operating systems over SPARC, POWER, Intel Itanium2, X86, em64t and Opteron processors
Operating system: SUNOS, IBM AIX, Linux RedHat (Enterprise), Linux SuSE (SLES)
Has the code been vectorized or parallelized?: Yes, 1 to 32 processors (CPU or GPU) used
RAM: Up to 32 GB depending on the molecular system and runtime parameters
Classification: 16.5
Catalogue identifier of previous version: ADMG_v2_0
Journal reference of previous version: Comput. Phys. Comm. 162 (2004) 51
External routines: CUDA libraries (SDK V2.x)
Does the new version supersede the previous version?: Yes
Nature of problem: In this set of codes an efficient procedure is implemented to describe the wavefunction and related molecular properties of a polyatomic molecular system within the Single Center of Expansion (SCE) approximation. The resulting SCE wavefunction, electron density, electrostatic and correlation/polarization potentials can then be used in a wide variety of applications, such as electron-molecule scattering calculations, quantum chemistry studies, biomodelling and drug design.
Solution method: The polycentre Hartree-Fock solution for a molecule of arbitrary geometry, based on a linear combination of Gaussian-Type Orbitals (GTO), is expanded over a single center, typically the Center Of Mass (C.O.M.), by means of a Gauss-Legendre/Chebyshev quadrature over the θ, φ angular coordinates. The resulting SCE numerical wavefunction is then used to calculate the one-particle electron density, the electrostatic potential and two different models for the correlation/polarization potentials induced by the impinging electron, which have the correct asymptotic behavior for the leading dipole molecular polarizabilities.
Reasons for new version: The present release of SCELib allows the study of larger molecular systems than the previous versions by means of theoretical and technological advances, with the first implementation of the code on a many-core computing system.
Summary of revisions: The major features added with respect to SCELib Version 2.0 are: (1) molecular wavefunctions obtained via the Los Alamos (Hay and Wadt) LAN ECP plus DZ description of the inner-shell electrons (for the Na-La and Hf-Bi elements) [1] can now be single-center-expanded; this addition required modifications of (i) the filtering code readgau, (ii) the main reading function setinp, (iii) the sphint code (including changes to the CalcMO code), (iv) the densty code, and (v) the vst code. (2) The classes of platforms supported now include two more architectures based on accelerated coprocessors: the NVIDIA G-Series GPGPU and the ClearSpeed e720 (the ClearSpeed version is experimental, an initial preliminary porting of the sphint() function not intended for production runs; see the code documentation for additional detail). (3) A single-precision representation for real numbers in the SCE mapping of the GTOs (sphint code) has been implemented in the new code. (4) The Ih symmetry point group for the molecular systems has been added to those already allowed in the SCE procedure. (5) The orientation of the molecular axis system for the Cs (planar) symmetry has been changed in accord with the standard orientation adopted by the latest version of the quantum chemistry code (Gaussian 03 [2]) used to generate the input multi-centre molecular wavefunctions (z-axis perpendicular to the symmetry plane); the abelian subgroup for the Cs point group has been changed from C1 to Cs. (6) Atomic basis functions including g-type GTOs can now be single-center-expanded.
Restrictions: Depending on the molecular system under study and on the operating conditions, the program may or may not fit into the available RAM. In the latter case, a feature of the program is to memory-map a disk file in order to access the data efficiently through a disk device. The parallel GPGPU implementation limits the number of CPU threads to the number of GPU cores present.
Running time: The execution time depends strongly on the molecular target description and on the hardware/OS chosen; it is directly proportional to the (r, θ, φ) grid size and to the number of angular basis functions used. Thus, from the program printout of the main arrays' memory occupancy, the user can approximately derive the expected computer time needed for a given calculation executed in serial mode. For parallel executions the overall efficiency must also be taken into account, and this depends on the number of processors used as well as on the parallel architecture chosen, so a simple general law cannot at present be given.
References:
[1] P.J. Hay, W.R. Wadt, J. Chem. Phys. 82 (1985) 270; W.R. Wadt, P.J. Hay, J. Chem. Phys. 82 (1985) 284; P.J. Hay, W.R. Wadt, J. Chem. Phys. 82 (1985) 299.
[2] M.J. Frisch et al., Gaussian 03, revision C.02, Gaussian, Inc., Wallingford, CT, 2004.
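To make the quadrature idea above concrete, the following sketch projects a function on the unit sphere onto spherical harmonics with a Gauss-Legendre rule in the polar direction and a uniform rule in the azimuthal direction. It is a generic illustration of single-center expansion at one radial point, not SCELib's actual API; the function and grid sizes are illustrative.

```python
# Illustrative single-center expansion at fixed radius: c_lm = <Y_lm | f>.
import numpy as np
from scipy.special import sph_harm

def sce_coefficients(f, lmax, n_polar=64, n_azim=128):
    """Return c[(l, m)] ~ integral of conj(Y_lm) * f over the unit sphere."""
    x, w = np.polynomial.legendre.leggauss(n_polar)   # x = cos(polar angle)
    polar = np.arccos(x)                              # polar nodes in (0, pi)
    azim = 2.0 * np.pi * np.arange(n_azim) / n_azim   # uniform azimuthal grid
    AZ, PO = np.meshgrid(azim, polar)                 # quadrature mesh
    vals = f(PO, AZ)                                  # sample the target function
    coeffs = {}
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            # SciPy convention: sph_harm(m, l, azimuthal, polar)
            Y = sph_harm(m, l, AZ, PO)
            # Gauss-Legendre weights over cos(polar), uniform weight over azimuth
            coeffs[(l, m)] = (2.0 * np.pi / n_azim) * np.sum(w @ (np.conj(Y) * vals))
    return coeffs

# Example: a pure Y_10 input is recovered in the (1, 0) coefficient only.
c = sce_coefficients(lambda po, az: np.sqrt(3.0 / (4.0 * np.pi)) * np.cos(po), lmax=2)
```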
Image Algebra Matlab language version 2.3 for image processing and compression research
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric
2010-08-01
Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at the University of Florida for over 15 years beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision algorithms and software. The University of Florida has been associated with implementations in the languages FORTRAN, Ada, Lisp, and C++. The latter implementation involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with operands that are array-based, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation, called Image Algebra Matlab (IAM), has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, IAM offers exciting possibilities for supporting a large group of users. The control over an object's computational resources that Matlab provides to the algorithm designer means that IAM programs can employ versatile representations for the operands and operations of the algebra, supported by underlying libraries written in Matlab. In a previous publication, we showed how the functionality of iac++ could be carried forward into a Matlab implementation, and provided practical details of a prototype implementation called IAM Version 1. In this paper, we further elaborate the purpose and structure of image algebra, then present a maturing implementation of Image Algebra Matlab called IAM Version 2.3, which extends the previous implementation of IAM to include polymorphic operations over different point sets, as well as recursive convolution operations and functional composition. We also show how image algebra and IAM can be employed in image processing and compression research, as well as in algorithm development and analysis.
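The unifying idea behind the image-template product mentioned above is that both the "multiply" and the "accumulate" operations are parameters, so linear convolution and nonlinear (morphological) operations become one routine. The sketch below illustrates the concept in Python with hypothetical names; it is not IAM's actual syntax.

```python
# Generalized image-template product: combine/reduce are pluggable operations.
import numpy as np

def template_product(image, template, combine, reduce_fn, init):
    """Accumulate combine(shifted_image, template_weight) over template offsets."""
    th, tw = template.shape
    ih, iw = image.shape
    padded = np.pad(image, ((th // 2,) * 2, (tw // 2,) * 2), mode="edge")
    out = np.full((ih, iw), init, dtype=float)
    for dy in range(th):
        for dx in range(tw):
            shifted = padded[dy:dy + ih, dx:dx + iw]
            out = reduce_fn(out, combine(shifted, template[dy, dx]))
    return out

img = np.random.rand(64, 64)
box = np.full((3, 3), 1.0 / 9.0)
# (multiply, add) -> ordinary linear mean filter
blur = template_product(img, box, np.multiply, np.add, 0.0)
# (add, max) -> gray-scale dilation with a flat 3x3 structuring element
dil = template_product(img, np.zeros((3, 3)), np.add, np.maximum, -np.inf)
```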
Abenhaim, Haim A; Varin, Jocelyne; Boucher, Marc
2009-01-01
Whether or not women with a previous cesarean section should be considered for an external cephalic version remains unclear. In our study, we sought to examine the relationship between a history of previous cesarean section and outcomes of external cephalic version for pregnancies at 36 completed weeks of gestation or more. Data on obstetrical history and on external cephalic version outcomes were obtained from the C.H.U. Sainte-Justine External Cephalic Version Database. Baseline clinical characteristics were compared between women with and without a history of previous cesarean section. We used logistic regression analysis to evaluate the effect of previous cesarean section on the success of external cephalic version while adjusting for parity, maternal body mass index, gestational age, estimated fetal weight, and amniotic fluid index. Over a 15-year period, 1425 external cephalic versions were attempted, of which 36 (2.5%) were performed on women with a previous cesarean section. Although women with a history of previous cesarean section were more likely to be older and para >2 (38.93% vs. 15.0%), there was no difference in gestational age, estimated fetal weight, or amniotic fluid index. Women with a prior cesarean section had a success rate similar to that of women without [50.0% vs. 51.6%, adjusted OR: 1.31 (0.48-3.59)]. Women with a previous cesarean section who undergo an external cephalic version have success rates similar to those of women without. Concern about procedural success in women with a previous cesarean section is unwarranted and should not deter attempting an external cephalic version.
Maple procedures for the coupling of angular momenta. IX. Wigner D-functions and rotation matrices
NASA Astrophysics Data System (ADS)
Pagaran, J.; Fritzsche, S.; Gaigalas, G.
2006-04-01
The Wigner D-functions, D^j_pq(α, β, γ), are known for their frequent use in quantum mechanics. Defined as the matrix elements of the rotation operator R̂(α, β, γ) in R^3 and parametrized in terms of the three Euler angles α, β, and γ, these functions arise not only in the transformation of tensor components under the rotation of the coordinates, but also as the eigenfunctions of the spherical top. In practice, however, the use of the Wigner D-functions is not always that simple, in particular if expressions in terms of these and other functions from the theory of angular momentum need to be simplified before some computations can be carried out in detail. To facilitate the manipulation of such Racah expressions, here we present an extension to the RACAH program [S. Fritzsche, Comput. Phys. Comm. 103 (1997) 51] in which the properties and the algebraic rules of the Wigner D-functions and reduced rotation matrices are implemented. Care has been taken to combine the standard knowledge about the rotation matrices with the previously implemented rules for the Clebsch-Gordan coefficients, Wigner n-j symbols, and the spherical harmonics. Moreover, the application of the program is illustrated below by means of three examples.
Program summary
Title of program: RACAH
Catalogue identifier: ADFv_9_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADFv_9_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Catalogue identifier of previous version: ADFW, ADHW; title: RACAH
Journal reference of previous version(s): S. Fritzsche, Comput. Phys. Comm. 103 (1997) 51; S. Fritzsche, S. Varga, D. Geschke, B. Fricke, Comput. Phys. Comm. 111 (1998) 167; S. Fritzsche, T. Inghoff, M. Tomaselli, Comput. Phys. Comm. 153 (2003) 424.
Does the new version supersede the previous one?: Yes; in addition to the spherical harmonics and recoupling coefficients, the program now also supports the occurrence of the Wigner rotation matrices in the algebraic expressions to be evaluated.
Licensing provisions: None
Computer for which the program is designed and others on which it is operable: All computers with a license for the computer algebra package Maple [Maple is a registered trademark of Waterloo Maple Inc.]
Installations: University of Kassel (Germany)
Operating systems under which the program has been tested: Linux 8.2+
Program language used: MAPLE, Release 8 and 9
Memory required to execute with typical data: 10-50 MB
No. of lines in distributed program, including test data, etc.: 52 653
No. of bytes in distributed program, including test data, etc.: 1 195 346
Distribution format: tar.gzip
Nature of the physical problem: The Wigner D-functions and (reduced) rotation matrices occur very frequently in physical applications. They are known not only as the (infinite) representation of the rotation group but also to obey a number of integral and summation rules, including those for their orthogonality and completeness. Instead of computing these matrices directly, therefore, one often first wishes to find algebraic simplifications before the computations are carried out in practice.
Reasons for new version: The RACAH program has been found an efficient tool during recent years for evaluating and simplifying expressions from Racah's algebra. Apart from the Wigner n-j symbols (n = 3, 6, 9) and spherical harmonics, we have now extended the code to allow for Wigner rotation matrices.
This extension will support the study especially of those quantum processes where different axes of quantization occur in the course of the theoretical derivations.
Summary of revisions: In a revised version of the RACAH program [S. Fritzsche, Comput. Phys. Comm. 103 (1997) 51; S. Fritzsche, T. Inghoff, M. Tomaselli, Comput. Phys. Comm. 153 (2003) 424], we now also support the occurrence of the Wigner D-functions and reduced rotation matrices. Following our previous design, the (algebraic) properties of these rotation matrices as well as a number of summation and integration rules are implemented to facilitate the algebraic simplification of expressions from the theories of angular momentum and spherical tensor operators.
Restrictions on the complexity of the problem: The definition as well as the properties of the rotation matrices, as used in our implementation, are based mainly on the book of Varshalovich et al. [D.A. Varshalovich, A.N. Moskalev, V.K. Khersonskii, Quantum Theory of Angular Momentum, World Scientific, Singapore, 1988], Chapter 4. From this monograph, most of the relations involving the Wigner D-functions and rotation matrices are taken into account although, in practice, only a rather select set needed to be implemented explicitly owing to the symmetries of these functions. In the integration over the rotation matrices, products of up to three Wigner D-functions or reduced matrices (with the same angular arguments) are recognized and simplified properly; for the integration over a solid angle, however, the domain of integration must be specified for the Euler angles α and γ. This restriction arises because MAPLE does not generate a constant of integration when the limits in the integral are omitted. For any integration over the angle β, the range of integration, if omitted, is always taken from 0 to π.
Unusual features of the program: The RACAH program is designed for interactive use and allows a quick algebraic evaluation of (complex) expressions from Racah's algebra. It is based on a number of well-defined data structures that are now extended to incorporate the Wigner rotation matrices. For these matrices, the transformation properties, sum rules, recursion relations, as well as a variety of special function expansions have been added to the previous functionality of the RACAH program. Moreover, knowledge about the orthogonality as well as the completeness of the Wigner D-functions is also implemented.
Typical running time: All the examples presented in Section 4 take only a few seconds on a 1.5 GHz Pentium Pro computer.
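For reference, the factorization and orthogonality relation that underlie most of the implemented simplification rules read (in the standard conventions of Varshalovich et al. cited above):

```latex
\[
  D^{j}_{pq}(\alpha,\beta,\gamma) \;=\; e^{-ip\alpha}\, d^{j}_{pq}(\beta)\, e^{-iq\gamma},
\]
\[
  \int_{0}^{2\pi}\! d\alpha \int_{0}^{\pi}\! \sin\beta\, d\beta \int_{0}^{2\pi}\! d\gamma\;
  D^{j'\,*}_{p'q'}(\alpha,\beta,\gamma)\, D^{j}_{pq}(\alpha,\beta,\gamma)
  \;=\; \frac{8\pi^{2}}{2j+1}\,\delta_{jj'}\,\delta_{pp'}\,\delta_{qq'}.
\]
```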
NASA Astrophysics Data System (ADS)
Oh, Kwang Jin; Kang, Ji Hoon; Myung, Hun Joo
2012-02-01
We have revised the general-purpose parallel molecular dynamics simulation program mm_par using object-oriented programming. We parallelized the revised version using a hierarchical scheme in order to utilize more processors for a given system size. The benchmark results are presented here.
New version program summary
Program title: mm_par2.0
Catalogue identifier: ADXP_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXP_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 2 390 858
No. of bytes in distributed program, including test data, etc.: 25 068 310
Distribution format: tar.gz
Programming language: C++
Computer: Any system operated by Linux or Unix
Operating system: Linux
Classification: 7.7
External routines: We provide wrappers for the FFTW [1] and Intel MKL [2] FFT routines; the Numerical Recipes [3] FFT, random number generator, and eigenvalue solver routines; the SPRNG [4] random number generator; the Mersenne Twister [5] random number generator; and a space-filling curve routine.
Catalogue identifier of previous version: ADXP_v1_0
Journal reference of previous version: Comput. Phys. Comm. 174 (2006) 560
Does the new version supersede the previous version?: Yes
Nature of problem: Structural, thermodynamic, and dynamical properties of fluids and solids from microscopic to mesoscopic scales.
Solution method: Molecular dynamics simulation in the NVE, NVT, and NPT ensembles; Langevin dynamics simulation; dissipative particle dynamics simulation.
Reasons for new version: First, object-oriented programming has been used; it is known to be open for extension and closed for modification, and is also known to be better for maintenance. Second, version 1.0 was based on atom decomposition and domain decomposition schemes [6] for parallelization. Atom decomposition is not popular due to its poor scalability. Domain decomposition scales better, but it is still limited in utilizing the large number of cores on recent petascale computers by the requirement that the domain size be larger than the potential cutoff distance. To go beyond this limitation, a hierarchical parallelization scheme has been adopted in this new version and implemented using MPI [7] and OpenMP [8].
Summary of revisions: (1) Object-oriented programming has been used. (2) A hierarchical parallelization scheme has been adopted. (3) The SPME routine has been fully parallelized with a parallel 3D FFT using a volumetric decomposition scheme [9]. K.J.O. thanks Mr. Seung Min Lee for useful discussion on programming and debugging.
Running time: Running time depends on the system size and the methods used. For a test system containing a protein (PDB id: 5DHFR) with the CHARMM22 force field [10] and 7023 TIP3P [11] waters in a simulation box of dimensions 62.23 Å × 62.23 Å × 62.23 Å, the benchmark results are given in Fig. 1. Here the potential cutoff distance was set to 12 Å and the switching function was applied from 10 Å for the force calculation in real space. For the SPME [12] calculation, K1, K2, and K3 were set to 64 and the interpolation order was set to 4. The fast Fourier transform was computed with the Intel MKL library. All bonds including hydrogen atoms were constrained using the SHAKE/RATTLE algorithms [13,14]. The code was compiled using Intel compiler version 11.1 and mvapich2 version 1.5.
Fig. 2 shows performance gains from using the CUDA-enabled version [15] of mm_par for the 5DHFR simulation in water on an Intel Core2Quad 2.83 GHz and a GeForce GTX 580. Although mm_par2.0 has not yet been ported to the GPU, these performance data are useful for estimating the expected performance of mm_par2.0 on a GPU. Fig. 1: Timing results for 1000 MD steps; 1, 2, 4, and 8 denote the number of OpenMP threads. Fig. 2: Timing results for 1000 MD steps from double-precision simulation on the CPU, single-precision simulation on the GPU, and double-precision simulation on the GPU.
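The two-level scheme described above (MPI ranks own spatial domains; threads share the work within a rank) can be sketched as follows. This is a generic MPI+threads illustration in Python using mpi4py, not mm_par's actual code; all sizes and the force kernel are placeholders.

```python
# Hierarchical parallelization sketch: MPI across domains, threads within a rank.
from concurrent.futures import ThreadPoolExecutor
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
n_threads = 4                                   # intra-rank ("OpenMP") level

local_atoms = np.random.rand(1024 // size, 3)   # this rank's spatial domain

def partial_forces(chunk):
    # placeholder per-thread kernel: forces for a slice of the local atoms
    return np.zeros_like(chunk)

chunks = np.array_split(local_atoms, n_threads)
with ThreadPoolExecutor(n_threads) as pool:     # thread level
    forces = np.concatenate(list(pool.map(partial_forces, chunks)))

# MPI level: a global reduction across all domains
global_sq = comm.allreduce(np.sum(forces ** 2), op=MPI.SUM)
```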
Computer versus paper--does it make any difference in test performance?
Karay, Yassin; Schauber, Stefan K; Stosch, Christoph; Schüttpelz-Brauns, Katrin
2015-01-01
CONSTRUCT: In this study, we examine the differences in test performance between the paper-based and the computer-based version of the Berlin formative Progress Test. In this context, it is the first study that controls for students' prior performance. Computer-based tests make possible a more efficient procedure for test administration and review. Although university staff benefit greatly from computer-based tests, the question arises whether computer-based tests influence students' test performance. A total of 266 German students from the 9th and 10th semester of medicine (comparable with the 4th year of a North American medical school) participated in the study (paper = 132, computer = 134). Allocation to test format was conducted as a randomized matched-pair design in which students were first sorted according to their prior test results. The organizational procedure, the examination conditions, the room and seating arrangements, as well as the order of questions and answers, were identical in both groups. The sociodemographic variables and pretest scores of both groups were comparable. The test results from the paper and computer versions did not differ. Both groups remained within the allotted time, but students using the computer version (particularly the high performers) needed significantly less time to complete the test. In addition, we found significant differences in guessing behavior: low performers using the computer version guessed significantly more than low-performing students using the paper-pencil version. Participants in computer-based tests are not at a disadvantage in terms of their test results. The computer-based test required less processing time. The longer processing time for the paper-pencil version may be due to the time needed to write the answer down and to check that it was transferred correctly. It is still not known why students using the computer version (particularly low-performing students) guess at a higher rate. Further studies are necessary to understand this finding.
A Summary of Validation Results for LEWICE 2.0
NASA Technical Reports Server (NTRS)
Wright, William B.
1998-01-01
A research project is underway at NASA Lewis to produce a computer code which can accurately predict ice growth under any meteorological conditions for any aircraft surface. This report presents results from version 2.0 of this code, which is called LEWICE. This version differs from previous releases in its robustness and its ability to reproduce results accurately for different point spacing and time step criteria across general computing platforms. It also differs in the extensive effort undertaken to compare the results in a quantifiable manner against the database of ice shapes that have been generated in the NASA Lewis Icing Research Tunnel (IRT). The complete set of data used for this comparison is available in a recent contractor report. The result of this comparison shows that the difference between the predicted ice shape from LEWICE 2.0 and the average of the experimental data is 7.2%, while the variability of the experimental data is 2.5%.
A new version of the CADNA library for estimating round-off error propagation in Fortran programs
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie; Lamotte, Jean-Luc
2010-11-01
The CADNA library enables one to estimate, using a probabilistic approach, round-off error propagation in any simulation program. CADNA provides new numerical types, the so-called stochastic types, on which round-off errors can be estimated. Furthermore, CADNA contains the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. On 64-bit processors, depending on the rounding mode chosen, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs. Therefore the CADNA library has been improved to enable the numerical validation of programs on 64-bit processors.
New version program summary
Program title: CADNA
Catalogue identifier: AEAT_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 28 488
No. of bytes in distributed program, including test data, etc.: 463 778
Distribution format: tar.gz
Programming language: Fortran (NOTE: a C++ version of this program is available in the Library as AEGQ_v1_0)
Computer: PC running LINUX with an i686 or an ia64 processor; UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 6.5
Catalogue identifier of previous version: AEAT_v1_0
Journal reference of previous version: Comput. Phys. Commun. 178 (2008) 933
Does the new version supersede the previous version?: Yes
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic.
Reasons for new version: On 64-bit processors, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, on which the random rounding mode is based. Therefore a particular definition of mathematical functions for stochastic arguments has been included in the CADNA library to enable its use with the GNU Fortran compiler on 64-bit processors.
Summary of revisions: If CADNA is used on a 64-bit processor with the GNU Fortran compiler, mathematical functions are computed with rounding to the nearest; otherwise they are computed with the random rounding mode. It must be pointed out that knowledge of the accuracy of the stochastic argument of a mathematical function is never lost.
Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated.
Furthermore, array functions such as product or sum must not be used: only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays.
Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf which shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. The source code, which is located in the src directory, consists of one assembly language file (cadna_rounding.s) and eighteen Fortran language files. cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the Fortran compiler used. This assembly file contains routines which are frequently called in the CADNA Fortran files to change the rounding mode. The Fortran language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA-specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables, and the definition of mathematical functions which can be used with stochastic arguments. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic.
Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled, in which case the cost may be related to the number of instabilities detected.
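The principle of Discrete Stochastic Arithmetic described above can be illustrated with a toy sketch: run the same ill-conditioned computation a few times with randomly perturbed rounding, then estimate the number of reliable digits from the spread of the results. The sketch below emulates random rounding with ulp-scale noise; it illustrates the idea only and is not CADNA's implementation.

```python
# Toy Discrete-Stochastic-Arithmetic-style estimate of significant digits.
import math
import random

def r(x):
    """Emulate random rounding: perturb x by +/- one unit in the last place."""
    return x + random.choice((-1.0, 1.0)) * math.ulp(x)

def sample():
    # An expression with catastrophic cancellation, evaluated with "random rounding".
    a, b = r(1e8 + 1.0), r(1e8)
    return r(r(a - b) * r(1e8))

runs = [sample() for _ in range(3)]      # CADNA-style: a few perturbed runs
mean = sum(runs) / len(runs)
spread = max(abs(x - mean) for x in runs) or math.ulp(mean)
digits = max(0.0, math.log10(abs(mean) / spread)) if mean else 0.0
print(f"mean = {mean:.6e}, estimated exact significant digits ~ {digits:.1f}")
```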
MaMiCo: Transient multi-instance molecular-continuum flow simulation on supercomputers
NASA Astrophysics Data System (ADS)
Neumann, Philipp; Bian, Xin
2017-11-01
We present extensions of the macro-micro-coupling tool MaMiCo, which was designed to couple continuum fluid dynamics solvers with discrete particle dynamics. To enable local extraction of smooth flow field quantities, especially on rather short time scales, sampling over an ensemble of molecular dynamics simulations is introduced. We provide details on these extensions, including the transient coupling algorithm, open boundary forcing, and multi-instance sampling. Furthermore, we validate the coupling in Couette flow using different particle simulation software packages and particle models, i.e. molecular dynamics and dissipative particle dynamics. Finally, we demonstrate the parallel scalability of the molecular-continuum simulations by using up to 65 536 compute cores of the supercomputer Shaheen II located at KAUST.
Program Files doi: http://dx.doi.org/10.17632/w7rgdrhb85.1
Licensing provisions: BSD 3-clause
Programming language: C, C++
External routines/libraries: For compiling: SCons, MPI (optional)
Subprograms used: ESPResSo, LAMMPS, ls1 mardyn, waLBerla. For installation procedures of the MaMiCo interfaces, see the README files in the respective code directories located in coupling/interface/impl.
Journal reference of previous version: P. Neumann, H. Flohr, R. Arora, P. Jarmatz, N. Tchipev, H.-J. Bungartz, MaMiCo: Software design for parallel molecular-continuum flow simulations, Computer Physics Communications 200 (2016) 324-335.
Does the new version supersede the previous version?: Yes. The functionality of the previous version is completely retained in the new version.
Nature of problem: Coupled molecular-continuum simulation for multi-resolution fluid dynamics: parts of the domain are resolved by molecular dynamics or another particle-based solver, whereas large parts are covered by a mesh-based CFD solver, e.g. a lattice Boltzmann automaton.
Solution method: We couple existing MD and CFD solvers via MaMiCo (macro-micro coupling tool). Data exchange and coupling algorithmics are abstracted and incorporated in MaMiCo. Once an algorithm is set up in MaMiCo, it can be used and extended, even if other solvers are used (as soon as the respective interfaces are implemented/available).
Reasons for the new version: We have incorporated a new algorithm to simulate transient molecular-continuum systems and to automatically sample data over multiple MD runs that can be executed simultaneously (on, e.g., a compute cluster). MaMiCo has further been extended by an interface to incorporate boundary forcing to account for open molecular dynamics boundaries. Besides support for coupling with various MD and CFD frameworks, the new version contains a test case that allows molecular-continuum Couette flow simulations to be run out-of-the-box. No external tools or simulation codes are required anymore; however, the user is free to switch from the included MD simulation package to LAMMPS. For details on how to run the transient Couette problem, see the file README in the folder coupling/tests (remark on MaMiCo V1.1).
Summary of revisions: Open boundary forcing; multi-instance MD sampling; support for transient molecular-continuum systems.
Restrictions: Currently, only single-centered systems are supported. For access to the LAMMPS-based implementation of DPD boundary forcing, please contact Xin Bian, xin.bian@tum.de.
Additional comments: Please see the file license_mamico.txt for further details regarding distribution and advertising of this software.
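The statistical point behind the multi-instance sampling above is that averaging the same coupling cell over an ensemble of independently seeded MD instances reduces thermal noise by roughly the square root of the ensemble size, even over short time windows. A minimal sketch (assumed, simplified interface; not MaMiCo code):

```python
# Ensemble sampling sketch: mean flow buried in per-molecule thermal noise.
import numpy as np

def cell_velocity(instance_seed, n_molecules=200):
    """One MD instance's estimate of the mean velocity in a coupling cell."""
    rng = np.random.default_rng(instance_seed)
    return (0.5 + rng.normal(0.0, 1.0, n_molecules)).mean()  # true mean is 0.5

ensemble = [cell_velocity(seed) for seed in range(64)]        # 64 MD instances
print(np.mean(ensemble), "+/-", np.std(ensemble) / np.sqrt(len(ensemble)))
```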
RAID v2.0: an updated resource of RNA-associated interactions across organisms.
Yi, Ying; Zhao, Yue; Li, Chunhua; Zhang, Lin; Huang, Huiying; Li, Yana; Liu, Lanlan; Hou, Ping; Cui, Tianyu; Tan, Puwen; Hu, Yongfei; Zhang, Ting; Huang, Yan; Li, Xiaobo; Yu, Jia; Wang, Dong
2017-01-04
With the development of biotechnologies and computational prediction algorithms, the number of experimentally determined and computationally predicted RNA-associated interactions has grown rapidly in recent years. However, diverse RNA-associated interactions are scattered over a wide variety of resources and organisms, and a fully comprehensive view of diverse RNA-associated interactions is still not available for any species. Hence, we have updated the RAID database to version 2.0 (RAID v2.0, www.rna-society.org/raid/) by integrating experimentally determined and computationally predicted interactions from manual literature curation and other database resources under one common framework. The new developments in RAID v2.0 include (i) an over 850-fold increase in RNA-associated interactions compared with the previous version; (ii) numerous resources integrated, with experimental or computational prediction evidence for each RNA-associated interaction; (iii) a reliability assessment for each RNA-associated interaction based on an integrative confidence score; and (iv) an increase of species coverage to 60. Consequently, RAID v2.0 includes more than 5.27 million RNA-associated interactions, comprising more than 4 million RNA-RNA interactions and more than 1.2 million RNA-protein interactions, referring to nearly 130 000 RNA/protein symbols across 60 species. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Modelling total solar irradiance since 1878 from simulated magnetograms
NASA Astrophysics Data System (ADS)
Dasi-Espuig, M.; Jiang, J.; Krivova, N. A.; Solanki, S. K.
2014-10-01
Aims: We present a new model of total solar irradiance (TSI) based on magnetograms simulated with a surface flux transport model (SFTM) and the Spectral And Total Irradiance REconstructions (SATIRE) model. Our model provides daily maps of the distribution of the photospheric field and the TSI starting from 1878. Methods: The modelling is done in two main steps. We first calculate the magnetic flux on the solar surface emerging in active and ephemeral regions. The evolution of the magnetic flux in active regions (sunspots and faculae) is computed using a surface flux transport model fed with the observed record of sunspot group areas and positions. The magnetic flux in ephemeral regions is treated separately using the concept of overlapping cycles. We then use a version of the SATIRE model to compute the TSI. The area coverage and the distribution of the different magnetic features as a function of time, which are required by SATIRE, are extracted from the simulated magnetograms and the modelled ephemeral region magnetic flux. Previously computed intensity spectra of the various types of magnetic features are employed. Results: Our model reproduces the PMOD composite of TSI measurements from 1978 onwards at daily and rotational timescales more accurately than the previous version of the SATIRE model covering this period. The simulated magnetograms provide a more realistic representation of the evolution of the magnetic field on the photosphere and also allow us to make use of information on the spatial distribution of the magnetic fields before the times when observed magnetograms were available. We find that the secular increase in TSI since 1878 is fairly stable to modifications of the treatment of the ephemeral region magnetic flux.
FPPAC94: A two-dimensional multispecies nonlinear Fokker-Planck package for UNIX systems
NASA Astrophysics Data System (ADS)
Mirin, A. A.; McCoy, M. G.; Tomaschke, G. P.; Killeen, J.
1994-07-01
FPPAC94 solves the complete nonlinear multispecies Fokker-Planck collision operator for a plasma in two-dimensional velocity space. The operator is expressed in terms of spherical coordinates (speed and pitch angle) under the assumption of azimuthal symmetry. Provision is made for additional physics contributions (e.g. rf heating, electric field acceleration). The charged species, referred to as general species, are assumed to be in the presence of an arbitrary number of fixed Maxwellian species. The electrons may be treated either as one of these Maxwellian species or as a general species. Coulomb interactions among all charged species are considered. This program is a new version of FPPAC, which was last published in Computer Physics Communications in 1988. The new version is identical in scope to the previous version, but it is written in standard Fortran 77 and is able to execute on a variety of Unix systems. The code has been tested on the Cray C90, HP-755, and Sun Sparc-1; the answers agree on all platforms where the code has been tested. The test problems are the same as those provided in 1988. This version also corrects a bug in the 1988 version.
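For orientation, the multispecies collision operator being solved has the standard Rosenbluth-potential form, shown here schematically in Cartesian velocity components (FPPAC94 works with its expansion in speed and pitch angle):

```latex
\[
  \left(\frac{\partial f_a}{\partial t}\right)_{\!c}
  = \sum_b \Gamma_{ab}\left[
    -\frac{\partial}{\partial v_i}\!\left(f_a\,\frac{\partial h_b}{\partial v_i}\right)
    + \frac{1}{2}\,\frac{\partial^2}{\partial v_i\,\partial v_j}\!
      \left(f_a\,\frac{\partial^2 g_b}{\partial v_i\,\partial v_j}\right)\right],
\]
\[
  h_b(\mathbf{v}) = \frac{m_a+m_b}{m_b}\int
    \frac{f_b(\mathbf{v}')}{|\mathbf{v}-\mathbf{v}'|}\,d^3v',
  \qquad
  g_b(\mathbf{v}) = \int f_b(\mathbf{v}')\,|\mathbf{v}-\mathbf{v}'|\,d^3v'.
\]
```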
GENXICC2.1: An improved version of GENXICC for hadronic production of doubly heavy baryons
NASA Astrophysics Data System (ADS)
Wang, Xian-You; Wu, Xing-Gang
2013-03-01
We present an improved version of GENXICC, a generator for the hadronic production of the doubly heavy baryons Ξcc, Ξbc and Ξbb introduced by C.H. Chang, J.X. Wang and X.G. Wu [Comput. Phys. Commun. 177 (2007) 467; Comput. Phys. Commun. 181 (2010) 1144]. In comparison with the previous GENXICC versions, we update the program in order to generate unweighted baryon events more efficiently under various simulation environments; the distributions are now generated according to a probability proportional to the integrand. One Les Houches Event (LHE) common block has been added to produce a standard LHE data file that contains useful information on the doubly heavy baryon and its accompanying partons. Such LHE data can be conveniently imported into PYTHIA for further hadronization and decay simulation; in particular, the color-flow problem can be solved with PYTHIA8.0.
New version program summary
Title of program: GENXICC2.1
Program obtained from: CPC Program Library
Reference to original program: GENXICC
Reference in CPC: Comput. Phys. Commun. 177 (2007) 467; Comput. Phys. Commun. 181 (2010) 1144
Does the new version supersede the old program?: No
Computer: Any Linux-based PC with FORTRAN 77 or FORTRAN 90 and a GNU C compiler
Operating systems: LINUX
Programming language used: FORTRAN 77/90
Memory required to execute with typical data: About 2.0 MB
No. of bytes in distributed program: About 2 MB, including PYTHIA6.4
Distribution format: .tar.gz
Nature of physical problem: Hadronic production of the doubly heavy baryons Ξcc, Ξbc and Ξbb.
Method of solution: The upgraded version, with a proper interface to PYTHIA, can generate full production and decay events, either weighted or unweighted, conveniently and effectively. In particular, the unweighted events are generated using an improved hit-and-miss approach.
Reasons for new version: Responding to feedback from users in the CMS and LHCb groups at the Large Hadron Collider, and based on the recent improvements of PYTHIA on the color-flow problem, we improve the efficiency of generating unweighted events and also improve the color-flow part for further hadronization. In particular, an interface has been added to export the production events in a form suitable for PYTHIA8.0 simulation, in which the color flow can be correctly set.
Typical running time: It depends on which option is chosen to match PYTHIA when generating the full events and also on which mechanism is chosen to generate the events. Typically, for the dominant gluon-gluon fusion mechanism generating mixed events via the intermediate diquarks in the (cc)[³S₁] color-antitriplet and (cc)[¹S₀] color-sextet states, setting IDWTUP=3 and unwght=.true., it takes 30 min to generate 10⁵ unweighted events on a 2.27 GHz Intel Xeon E5520 processor machine; setting IDWTUP=3 and unwght=.false., or IDWTUP=1 and IGENERATE=0, it needs only 2 min to generate the 10⁵ baryon events (the fastest way, for theoretical purposes only). As a comparison, for previous GENXICC versions, setting IDWTUP=1 and IGENERATE=1, it takes about 22 hours to generate 1000 unweighted events.
Keywords: Event generator; Doubly heavy baryons; Hadronic production.
Summary of the changes (improvements): (1) The scheme for generating unweighted events has been improved. (2) One Les Houches Event (LHE) common block has been added to record the standard LHE data so that it can serve as correct input for PYTHIA8.0 in later simulation. (3) We present the code for connecting GENXICC to PYTHIA8.0, where three color flows have to be correctly set for later simulation. More specifically, we present the changes together with their detailed explanations in the following:
ERIC Educational Resources Information Center
Degiorgio, Lisa
2015-01-01
Equivalency of test versions is often assumed by counselors and evaluators. This study examined two versions, paper-pencil and computer-based, of the Driver Risk Inventory, a DUI/DWI (driving under the influence/driving while intoxicated) risk assessment. An overview of computer-based testing and standards for equivalency is also provided. Results…
Improvements to the adaptive maneuvering logic program
NASA Technical Reports Server (NTRS)
Burgin, George H.
1986-01-01
The Adaptive Maneuvering Logic (AML) computer program simulates close-in, one-on-one air-to-air combat between two fighter aircraft. Three important improvements are described. First, the previously available versions of AML were examined for their suitability as a baseline program. The selected program was then revised to eliminate some programming bugs which had been uncovered over the years. A listing of this baseline program is included. Second, the equations governing the motion of the aircraft were completely revised. This resulted in a model with substantially higher fidelity than the original equations of motion provided. It also completely eliminated the over-the-top problem, which occurred in the older versions when the AML-driven aircraft attempted a vertical or near-vertical loop. Third, the requirements for a versatile, generic, yet realistic aircraft model were studied and implemented in the program. The report contains detailed tables which configure the generic aircraft as either a modern high-performance aircraft, an older high-performance aircraft, or a previous-generation jet fighter.
FLY MPI-2: a parallel tree code for LSS
NASA Astrophysics Data System (ADS)
Becciani, U.; Comparato, M.; Antonuccio-Delogu, V.
2006-04-01
New version program summary
Program title: FLY 3.1
Catalogue identifier: ADSC_v2_0
Licensing provisions: yes
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSC_v2_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
No. of lines in distributed program, including test data, etc.: 158 172
No. of bytes in distributed program, including test data, etc.: 4 719 953
Distribution format: tar.gz
Programming language: Fortran 90, C
Computer: Beowulf cluster, PC, MPP systems
Operating system: Linux, Aix
RAM: 100M words
Catalogue identifier of previous version: ADSC_v1_0
Journal reference of previous version: Comput. Phys. Comm. 155 (2003) 159
Does the new version supersede the previous version?: Yes
Nature of problem: FLY is a parallel collisionless N-body code for the calculation of the gravitational force.
Solution method: FLY is based on the hierarchical oct-tree domain decomposition introduced by Barnes and Hut (1986).
Reasons for the new version: The new version of FLY is implemented using the MPI-2 standard; the distributed version 3.1 was developed using the MPICH2 library on a PC Linux cluster. FLY's performance now places it among the most powerful parallel codes for tree N-body simulations. Another important new feature is the availability of an interface with hydrodynamical Paramesh-based codes. Simulations must follow a box large enough to accurately represent the power spectrum of fluctuations on very large scales, so that we may hope to compare them meaningfully with real data. The number of particles then sets the mass resolution of the simulation, which we would like to make as fine as possible. Building an interface between two codes that have different and complementary cosmological tasks allows us to execute complex cosmological simulations with FLY, specialized for DM evolution, and a code specialized for hydrodynamical components that uses a Paramesh block structure.
Summary of revisions: The parallel communication scheme was totally changed. The new version adopts the MPICH2 library; FLY can now be executed on all Unix systems having an MPI-2 standard library. The main data structure is declared in a module procedure of FLY (the fly_h.F90 routine). FLY creates an MPI window object for one-sided communication for each of the shared arrays, with a call like the following: CALL MPI_WIN_CREATE(POS, SIZE, REAL8, MPI_INFO_NULL, MPI_COMM_WORLD, WIN_POS, IERR). The following main window objects are created: win_pos, win_vel, win_acc (particle positions, velocities and accelerations); win_pos_cell, win_mass_cell, win_quad, win_subp, win_grouping (cell positions, masses, quadrupole momenta, tree structure and grouping cells). Other windows are created for dynamic load balancing and global counters.
Restrictions: The program uses a leapfrog integrator scheme, which could be changed by the user.
Unusual features: FLY uses the MPI-2 standard: the MPICH2 library was adopted on Linux systems. To run this version of FLY, the working directory must be shared among all the processors that execute FLY.
Additional comments: Full documentation for the program is included in the distribution in the form of a README file, a User Guide and a Reference manuscript.
Running time: Performance tests were run on the IBM Linux Cluster 1350 at Cineca: 512 nodes with 2 processors per node and 2 GB RAM per processor. Processor type: Intel Xeon Pentium IV 3.0 GHz with 512 KB cache (128 nodes have Nocona processors).
Internal network: Myricom LAN Card, "C" version and "D" version. Operating system: Linux SuSE SLES 8. The code was compiled with the mpif90 compiler version 8.1 and basic optimization options, in order to obtain performance figures that can usefully be compared with those of other generic clusters.
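The one-sided (MPI-2 RMA) access pattern behind the MPI_WIN_CREATE call quoted above can be sketched as follows, here in Python with mpi4py rather than Fortran: each rank exposes its particle array in a window, and any rank can fetch a remote slice without the owner's participation. Illustrative only, not FLY's source.

```python
# One-sided communication sketch: expose an array, read a neighbour's copy.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

pos = np.full(8, float(rank))             # this rank's "particle positions"
win = MPI.Win.Create(pos, comm=comm)      # analogue of CALL MPI_WIN_CREATE(POS, ...)

target = (rank + 1) % comm.Get_size()
remote = np.empty(8, dtype=pos.dtype)
win.Lock(target, MPI.LOCK_SHARED)
win.Get(remote, target)                   # fetch the neighbour's positions
win.Unlock(target)                        # completes the transfer
win.Free()
print(rank, "read from", target, remote[0])
```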
A new method for the prediction of combustion instability
NASA Astrophysics Data System (ADS)
Flanagan, Steven Meville
This dissertation presents a new approach to the prediction of combustion instability in solid rocket motors. Previous attempts at developing computational tools to solve this problem have been largely unsuccessful, showing very poor agreement with experimental results and having little or no predictive capability. This is due primarily to deficiencies in the linear stability theory upon which these efforts have been based. Recent advances in linear instability theory by Flandro have demonstrated the importance of including unsteady rotational effects, previously considered negligible. Previous versions of the theory also neglected corrections to the unsteady flow field of first order in the mean flow Mach number. This research explores the stability implications of extending the solution to include these corrections. The corrected linear stability theory, based upon a rotational unsteady flow field extended to first order in the mean flow Mach number, has been implemented in two computer programs developed for the Macintosh platform. A quasi-one-dimensional version of the program is based upon an approximate solution to the cavity acoustics problem. The three-dimensional program applies Green's Function Discretization (GFD), a recently developed numerical method for finding fully three-dimensional solutions for this class of problems, to the solution for the acoustic mode shapes and frequency. The analysis of complex motor geometries, previously a tedious and time-consuming task, has also been greatly simplified through the development of a drawing package designed specifically to facilitate the specification of typical motor geometries. The combination of the drawing package, improved acoustic solutions, and new analysis results in a tool capable of producing more accurate and meaningful predictions than have been possible in the past.
The LARSYS Educational Package: Instructor's Notes for Use with the Data 100
NASA Technical Reports Server (NTRS)
Lindenlaub, J. C.; Russell, J. D.
1977-01-01
The LARSYS Educational Package is a set of instructional materials developed to train people to analyze remotely sensed multispectral data using LARSYS, a computer software system. The materials included in this volume have been designed to assist LARSYS instructors as they guide students through the LARSYS Educational Package. All of the materials have been updated from the previous version to reflect the use of a Data 100 Remote Terminal.
Modeling of boron species in the Falcon 17 and ISP-34 integral tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lazaridis, M.; Capitao, J.A.; Drossinos, Y.
1996-09-01
The RAFT computer code for aerosol formation and transport was modified to include boron species in its chemical database. The modification was necessary to calculate fission product transport and deposition in the FAL-17 and ISP-34 Falcon tests, where boric acid was injected. The experimental results suggest that the transport of cesium is modified in the presence of boron. The results obtained with the modified RAFT code are presented; they show good agreement with experimental results for cesium and partial agreement for boron deposition in the Falcon silica tube. The new version of the RAFT code predicts the same behavior for iodine deposition as the previous version, where boron species were not included.
Improvements, testing and development of the ADM-τ sub-grid surface tension model for two-phase LES
NASA Astrophysics Data System (ADS)
Aniszewski, Wojciech
2016-12-01
In this paper, a specific subgrid term occurring in Large Eddy Simulation (LES) of two-phase flows is investigated. This and other subgrid terms are presented; we subsequently elaborate on the existing models for them and re-formulate the ADM-τ model for sub-grid surface tension previously published by these authors. This paper presents a substantial conceptual simplification over the original model version, accompanied by a decrease in its computational cost. At the same time, it addresses the issues the original model version faced, e.g. it introduces non-isotropic applicability criteria based on the resolved interface's principal curvature radii. Additionally, this paper introduces more thorough testing of the ADM-τ model, in both simple and complex flows.
NASA Astrophysics Data System (ADS)
Allanach, B. C.; Athron, P.; Tunstall, Lewis C.; Voigt, A.; Williams, A. G.
2014-09-01
We describe an extension to the SOFTSUSY program that provides for the calculation of the sparticle spectrum in the Next-to-Minimal Supersymmetric Standard Model (NMSSM), where a chiral superfield that is a singlet of the Standard Model gauge group is added to the Minimal Supersymmetric Standard Model (MSSM) fields. Often, a Z3 symmetry is imposed upon the model. SOFTSUSY can calculate the spectrum in this case as well as in the case where general Z3-violating terms are added to the soft supersymmetry breaking terms and the superpotential. The user provides a theoretical boundary condition for the couplings and mass terms of the singlet. Radiative electroweak symmetry breaking data along with electroweak and CKM matrix data are used as weak-scale boundary conditions. The renormalisation group equations are solved numerically between the weak scale and a high energy scale using a nested iterative algorithm. This paper serves as a manual to the NMSSM mode of the program, detailing the approximations and conventions used. Catalogue identifier: ADPM_v4_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADPM_v4_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 154886 No. of bytes in distributed program, including test data, etc.: 1870890 Distribution format: tar.gz Programming language: C++, Fortran. Computer: Personal computer. Operating system: Tested on Linux 3.x. Word size: 64 bits Classification: 11.1, 11.6. Does the new version supersede the previous version?: Yes Catalogue identifier of previous version: ADPM_v3_0 Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 785 Nature of problem: Calculating the supersymmetric particle spectrum and mixing parameters in the next-to-minimal supersymmetric standard model. The solution to the renormalisation group equations must be consistent with boundary conditions on supersymmetry breaking parameters, as well as with the weak-scale boundary conditions on gauge couplings, Yukawa couplings and the Higgs potential parameters. Solution method: Nested iterative algorithm and numerical minimisation of the Higgs potential. Reasons for new version: Major extension to include the next-to-minimal supersymmetric standard model. Summary of revisions: Added additional supersymmetric and supersymmetry breaking parameters associated with the additional gauge singlet. Electroweak symmetry breaking conditions are significantly changed in the next-to-minimal mode, and some sparticle mixing changes. An interface to NMSSMTools has also been included. Some of the object structure has also changed, and the command line interface has been made more user friendly. Restrictions: SOFTSUSY will provide a solution only in the perturbative regime and it assumes that all couplings of the model are real (i.e. CP-conserving). If the parameter point under investigation is non-physical for some reason (for example because the electroweak potential does not have an acceptable minimum), SOFTSUSY returns an error message. Running time: A few seconds per parameter point.
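The nested-iteration idea can be pictured with a toy gauge-sector example: run couplings between the weak scale and a high scale, and iterate on the high-scale boundary until it is self-consistent. The sketch below uses the standard one-loop MSSM running of the gauge couplings and iterates on the scale where g1 = g2 is imposed; it is a cartoon of the algorithm's structure, not SOFTSUSY's implementation, and the boundary values are illustrative.

    import numpy as np
    from scipy.integrate import solve_ivp

    B = np.array([33/5, 1.0, -3.0])            # one-loop MSSM beta coefficients (GUT-normalised g1)

    def rge(t, g):                             # t = ln(mu); dg_i/dt = b_i g_i^3 / (16 pi^2)
        return B * g**3 / (16 * np.pi**2)

    def run(g0, t0, t1):                       # run couplings from scale t0 to t1
        return solve_ivp(rge, (t0, t1), g0, rtol=1e-9).y[:, -1]

    g_mz = np.array([0.46, 0.63, 1.22])        # illustrative weak-scale boundary values
    t_mz, t_x = np.log(91.19), np.log(2.0e16)  # weak scale and initial high-scale guess
    for _ in range(20):                        # nested iteration on the high-scale boundary
        g_x = run(g_mz, t_mz, t_x)
        dg = rge(t_x, g_x)
        step = (g_x[0] - g_x[1]) / (dg[0] - dg[1])   # Newton update of the g1 = g2 scale
        t_x -= step
        if abs(step) < 1e-12:
            break
    print(np.exp(t_x))                         # the toy 'unification' scale in GeV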
User guide for MODPATH version 6 - A particle-tracking model for MODFLOW
Pollock, David W.
2012-01-01
MODPATH is a particle-tracking post-processing model that computes three-dimensional flow paths using output from groundwater flow simulations based on MODFLOW, the U.S. Geological Survey (USGS) finite-difference groundwater flow model. This report documents MODPATH version 6. Previous versions were documented in USGS Open-File Reports 89-381 and 94-464. The program uses a semianalytical particle-tracking scheme that allows an analytical expression of a particle's flow path to be obtained within each finite-difference grid cell. A particle's path is computed by tracking the particle from one cell to the next until it reaches a boundary, an internal sink/source, or satisfies another termination criterion. Data input to MODPATH consists of a combination of MODFLOW input data files, MODFLOW head and flow output files, and other input files specific to MODPATH. Output from MODPATH consists of several output files, including a number of particle coordinate output files intended to serve as input data for other programs that process, analyze, and display the results in various ways. MODPATH is written in FORTRAN and can be compiled by any FORTRAN compiler that fully supports FORTRAN-2003 or by most commercially available FORTRAN-95 compilers that support the major FORTRAN-2003 language extensions.
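Pollock's semianalytical step is compact enough to sketch per coordinate direction: with face velocities interpolated linearly across a cell, the time to reach a face has a closed form, and in 2-D or 3-D the particle exits through the face with the smallest positive exit time. The code below is illustrative, not MODPATH's own; the names and the 1-D reduction are assumptions.

    import math

    def exit_time_1d(xp, x0, dx, v0, v1):
        # travel time from xp to a face of cell [x0, x0+dx] with face velocities v0, v1
        A = (v1 - v0) / dx                # constant velocity gradient inside the cell
        vp = v0 + A * (xp - x0)           # interpolated velocity at the particle
        if vp == 0.0:
            return math.inf               # stagnation point: no exit in this direction
        if abs(A) < 1e-14:                # uniform velocity: straight-line travel time
            face_x = x0 + dx if vp > 0 else x0
            return (face_x - xp) / vp
        vf = v1 if vp > 0 else v0         # face the particle moves toward
        if vf * vp <= 0:
            return math.inf               # velocity reverses before the face (internal sink)
        return math.log(vf / vp) / A      # Pollock's closed-form exit time

Within the cell the path follows x(t) = x0 + (vp*exp(A t) - v0)/A, which is the analytical expression of the particle's flow path that the report refers to.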
What's new in the Atmospheric Model Evaluation Tool (AMET) version 1.3
A new version of the Atmospheric Model Evaluation Tool (AMET) has been released. The new version, version 1.3 (AMETv1.3), contains a number of updates and changes from the previous version of AMET (v1.2), released in 2012. First, the Perl scripts used in the previous ve...
Parametric Simulations of the Great Dark Spots of Neptune
NASA Astrophysics Data System (ADS)
Deng, Xiaolong; Le Beau, R.
2006-09-01
Observations by Voyager II and the Hubble Space Telescope of the Great Dark Spots (GDS) of Neptune suggest that large vortices with lifespans of years are not uncommon occurrences in the atmosphere of Neptune. The variability of these features over time, in particular the complex motions of GDS-89, makes them challenging candidates to simulate in atmospheric models. Previously, using the Explicit Planetary Isentropic-Coordinate (EPIC) General Circulation Model, LeBeau and Dowling (1998) simulated GDS-like vortex features. Qualitatively, the drift, oscillation, and tail-like features of GDS-89 were recreated, although precise numerical matches were only achieved for the meridional drift rate. In 2001, Stratman et al. applied EPIC to simulate the formation of bright companion clouds to the Great Dark Spots. In 2006, Dowling et al. presented a new version of EPIC, which includes a hybrid vertical coordinate, cloud physics, advanced chemistry, and new turbulence models. With the new version of EPIC, more observational results, and more powerful computers, it is time to revisit CFD simulations of Neptune's atmosphere and do more detailed work on GDS-like vortices. In this presentation, we apply the new version of EPIC to simulate GDS-89. We test the influences of different parameters in the EPIC model: potential vorticity gradient, wind profile, initial latitude, vortex shape, and vertical structure. The observed motions, especially the latitudinal drift and oscillations in orientation angle and aspect ratio, are used as diagnostics of these unobserved atmospheric conditions. Increased computing power allows for more refined and longer simulations and greater coverage of the parameter space than previous efforts. Improved quantitative results have been achieved, including vortices with near eight-day oscillations and variations in shape comparable to GDS-89. This research has been supported by Kentucky NASA EPSCoR.
MinFinder v2.0: An improved version of MinFinder
NASA Astrophysics Data System (ADS)
Tsoulos, Ioannis G.; Lagaris, Isaac E.
2008-10-01
A new version of the "MinFinder" program is presented that offers an augmented linking procedure for Fortran-77 subprograms, two additional stopping rules and a new start-point rejection mechanism that saves a significant portion of gradient and function evaluations. The method is applied to a set of standard test functions and the results are reported. New version program summary. Program title: MinFinder v2.0 Catalogue identifier: ADWU_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADWU_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC Licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 14 150 No. of bytes in distributed program, including test data, etc.: 218 144 Distribution format: tar.gz Programming language used: GNU C++, GNU FORTRAN, GNU C Computer: The program is designed to be portable in all systems running the GNU C++ compiler Operating system: Linux, Solaris, FreeBSD RAM: 200 000 bytes Classification: 4.9 Catalogue identifier of previous version: ADWU_v1_0 Journal reference of previous version: Computer Physics Communications 174 (2006) 166-179 Does the new version supersede the previous version?: Yes Nature of problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances where a local optimum does not correspond to the desired physical solution and hence the search for a better solution is required. Local optimization techniques can be trapped in any local minimum. Global optimization is then the appropriate tool. For example, when solving a non-linear system of equations via optimization, one may encounter many local minima that do not correspond to solutions, i.e. they are far from zero. Solution method: Using a uniform pdf, points are sampled from a rectangular domain. A clustering technique, based on a typical distance and a gradient criterion, is used to decide from which points a local search should be started. Further searching is terminated when all the local minima inside the search domain are thought to be found. This is accomplished via three stopping rules: the "double-box" stopping rule, the "observables" stopping rule and the "expected minimizers" stopping rule. Reasons for the new version: The link procedure for source code in Fortran 77 is enhanced, two additional stopping rules are implemented, and a new criterion for accepting start points, which economizes on function and gradient calls, is introduced. Summary of revisions: Addition of command line parameters to the utility program make_program. Augmentation of the link process for Fortran 77 subprograms, by linking the final executable with the g2c library. Addition of two probabilistic stopping rules. Introduction of a rejection mechanism in the Checking step of the original method, which reduces the number of gradient evaluations. Additional comments: A technical report describing the revisions, experiments and test runs is packaged with the source code. Running time: Depending on the objective function.
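The flavor of the method, multistart sampling with a rejection test so that local searches are not wasted in already-explored basins, can be sketched briefly. This is a toy stand-in: the real MinFinder uses a clustering criterion with a typical distance plus a gradient test, and the three stopping rules described above.

    import numpy as np
    from scipy.optimize import minimize

    def multistart(f, bounds, n_samples=300, r_typ=0.2, seed=1):
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        minima = []
        for _ in range(n_samples):
            x0 = rng.uniform(lo, hi)                    # sample from a uniform pdf
            # rejection mechanism: skip start points near an already-found minimum
            if any(np.linalg.norm(x0 - xm) < r_typ for xm, _ in minima):
                continue
            res = minimize(f, x0, method="BFGS")        # local search
            if res.success and all(np.linalg.norm(res.x - xm) > 1e-5 for xm, _ in minima):
                minima.append((res.x, res.fun))
        return minima

    # the six-hump camel function has six local minima on [-2,2] x [-1,1]
    camel = lambda x: (4 - 2.1*x[0]**2 + x[0]**4/3)*x[0]**2 + x[0]*x[1] + (4*x[1]**2 - 4)*x[1]**2
    print(len(multistart(camel, [(-2, 2), (-1, 1)])))   # expect 6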
ROOT — A C++ framework for petabyte data storage, statistical analysis and visualization
NASA Astrophysics Data System (ADS)
Antcheva, I.; Ballintijn, M.; Bellenot, B.; Biskup, M.; Brun, R.; Buncic, N.; Canal, Ph.; Casadei, D.; Couet, O.; Fine, V.; Franco, L.; Ganis, G.; Gheata, A.; Maline, D. Gonzalez; Goto, M.; Iwaszkiewicz, J.; Kreshuk, A.; Segura, D. Marcos; Maunder, R.; Moneta, L.; Naumann, A.; Offermann, E.; Onuchin, V.; Panacek, S.; Rademakers, F.; Russo, P.; Tadel, M.
2011-06-01
A new stable version ("production version") v5.28.00 of ROOT [1] has been published [2]. It features several major improvements in many areas, most notably data storage performance as well as statistics and graphics features. Some of these improvements were already anticipated in the original publication, Antcheva et al. (2009) [3]. This version will be maintained for at least 6 months; new minor revisions ("patch releases") will be published [4] to solve problems reported with this version. New version program summary. Program title: ROOT Catalogue identifier: AEFA_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFA_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Lesser Public License v.2.1 No. of lines in distributed program, including test data, etc.: 2 934 693 No. of bytes in distributed program, including test data, etc.: 1009 Distribution format: tar.gz Programming language: C++ Computer: Intel i386, Intel x86-64, Motorola PPC, Sun Sparc, HP PA-RISC Operating system: GNU/Linux, Windows XP/Vista/7, Mac OS X, FreeBSD, OpenBSD, Solaris, HP-UX, AIX Has the code been vectorized or parallelized?: Yes RAM: > 55 Mbytes Classification: 4, 9, 11.9, 14 Catalogue identifier of previous version: AEFA_v1_0 Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 2499 Does the new version supersede the previous version?: Yes Nature of problem: Storage, analysis and visualization of scientific data Solution method: Object store, wide range of analysis algorithms and visualization methods Reasons for new version: Added features and corrections of deficiencies Summary of revisions: The release notes at http://root.cern.ch/root/v528/Version528.news.html give a module-oriented overview of the changes in v5.28.00. Highlights include: File format: reading of TTrees has been improved dramatically with respect to CPU time (30%) and notably with respect to disk space. Histograms: a new TEfficiency class has been provided to handle the calculation of efficiencies and their uncertainties, TH2Poly for polygon-shaped bins (e.g. maps), TKDE for kernel density estimation, and TSVDUnfold for singular value decomposition. Graphics: kerning is now supported in TLatex, PostScript and PDF; a table of contents can be added to PDF files. A new font provides italic symbols. A TPad containing GL can be stored in a binary (i.e. non-vector) image file; support for full-scene anti-aliasing has been added. Usability enhancements to EVE. Math: new interfaces for generating random numbers according to a given distribution, goodness-of-fit tests of unbinned data, binning multidimensional data, and several advanced statistical functions were added. RooFit: introduction of HistFactory; major additions to RooStats. TMVA: updated to version 4.1.0, adding e.g. support for simultaneous classification of multiple output classes for several multivariate methods. PROOF: many new features, adding to PROOF's usability, plus improvements and fixes. PyROOT: support for Python 3 has been added. Tutorials: several new tutorials were provided for the above new features (notably RooStats). A detailed list of all the changes is available at http://root.cern.ch/root/htmldoc/examples/V5. Additional comments: For an up-to-date author list see: http://root.cern.ch/drupal/content/root-development-team and http://root.cern.ch/drupal/content/former-root-developers.
The distribution file for this program is over 30 Mbytes and therefore is not delivered directly when download or e-mail is requested. Instead, an html file giving details of how the program can be obtained is sent. Running time: Depending on the data size and complexity of analysis algorithms. References: http://root.cern.ch. http://root.cern.ch/drupal/content/production-version-528. I. Antcheva, M. Ballintijn, B. Bellenot, M. Biskup, R. Brun, N. Buncic, Ph. Canal, D. Casadei, O. Couet, V. Fine, L. Franco, G. Ganis, A. Gheata, D. Gonzalez Maline, M. Goto, J. Iwaszkiewicz, A. Kreshuk, D. Marcos Segura, R. Maunder, L. Moneta, A. Naumann, E. Offermann, V. Onuchin, S. Panacek, F. Rademakers, P. Russo, M. Tadel, ROOT — A C++ framework for petabyte data storage, statistical analysis and visualization, Comput. Phys. Commun. 180 (2009) 2499. http://root.cern.ch/drupal/content/root-version-v5-28-00-patch-release-notes.
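To give a flavor of the release in use, the sketch below fills a histogram and exercises the new TEfficiency class through PyROOT. It assumes a working ROOT installation (v5.28 or later) with Python bindings; it is a generic illustration, not code from the release notes.

    # minimal PyROOT sketch (assumes ROOT v5.28+ with Python bindings installed)
    import ROOT

    h = ROOT.TH1F("h", "gaussian toy;x;entries", 100, -4, 4)
    rnd = ROOT.TRandom3(42)
    for _ in range(10000):
        h.Fill(rnd.Gaus(0, 1))

    # TEfficiency is one of the additions highlighted above
    eff = ROOT.TEfficiency("eff", "toy efficiency;x;#epsilon", 20, -4, 4)
    for _ in range(10000):
        x = rnd.Gaus(0, 1)
        eff.Fill(bool(rnd.Uniform() < 0.8), x)   # Fill(passed, value)
    print(h.GetMean(), h.GetRMS())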
Computing Operating Characteristics Of Bearing/Shaft Systems
NASA Technical Reports Server (NTRS)
Moore, James D.
1996-01-01
SHABERTH computer program predicts operating characteristics of bearings in multibearing load-support system. Lubricated and nonlubricated bearings modeled. Calculates loads, torques, temperatures, and fatigue lives of ball and/or roller bearings on single shaft. Provides for analysis of reaction of system to termination of supply of lubricant to bearings and other lubricated mechanical elements. Valuable in design and analysis of shaft/bearing systems. Two versions of SHABERTH available. Cray version (LEW-14860), "Computing Thermal Performances Of Shafts and Bearings". IBM PC version (MFS-28818), written for IBM PC-series and compatible computers running MS-DOS.
Computer Documentation: Effects on Students' Computing Behaviors, Attitudes, and Use of Computers.
ERIC Educational Resources Information Center
Duin, Ann Hill
This study investigated the effects of a minimal manual version versus a cards version of documentation on students' computing behaviors while learning to use a telecommunication system (Appleshare), students' attitudes toward the documentation, and students' later use of the system. Guidelines from the work of Carroll and colleagues at the IBM…
Revision history aware repositories of computational models of biological systems.
Miller, Andrew K; Yu, Tommy; Britten, Randall; Cooling, Mike T; Lawson, James; Cowan, Dougal; Garny, Alan; Halstead, Matt D B; Hunter, Peter J; Nickerson, David P; Nunns, Geo; Wimalaratne, Sarala M; Nielsen, Poul M F
2011-01-14
Building repositories of computational models of biological systems ensures that published models are available for both education and further research, and can provide a source of smaller, previously verified models to integrate into a larger model. One problem with earlier repositories has been the limitations in facilities to record the revision history of models. Often, these facilities are limited to a linear series of versions which were deposited in the repository. This is problematic for several reasons. Firstly, there are many instances in the history of biological systems modelling where an 'ancestral' model is modified by different groups to create many different models. With a linear series of versions, if the changes made to one model are merged into another model, the merge appears as a single item in the history. This hides useful revision history information, and also makes further merges much more difficult, as there is no record of which changes have or have not already been merged. In addition, a long series of individual changes made outside of the repository are also all merged into a single revision when they are put back into the repository, making it difficult to separate out individual changes. Furthermore, many earlier repositories only retain the revision history of individual files, rather than of a group of files. This is an important limitation to overcome, because some types of models, such as CellML 1.1 models, can be developed as a collection of modules, each in a separate file. The need for revision history is widely recognised for computer software, and a lot of work has gone into developing version control systems and distributed version control systems (DVCSs) for tracking the revision history. However, to date, there has been no published research on how DVCSs can be applied to repositories of computational models of biological systems. We have extended the Physiome Model Repository software to be fully revision history aware, by building it on top of Mercurial, an existing DVCS. We have demonstrated the utility of this approach, when used in conjunction with the model composition facilities in CellML, to build and understand more complex models. We have also demonstrated the ability of the repository software to present version history to casual users over the web, and to highlight specific versions which are likely to be useful to users. Providing facilities for maintaining and using revision history information is an important part of building a useful repository of computational models, as this information is useful both for understanding the source of and justification for parts of a model, and to facilitate automated processes such as merges. The availability of fully revision history aware repositories, and associated tools, will therefore be of significant benefit to the community.
NASA Astrophysics Data System (ADS)
Sandner, Raimar; Vukics, András
2014-09-01
The v2 Milestone 10 release of C++QED is primarily a feature release, which also corrects some problems of the previous release, especially as regards the build system. The adoption of C++11 features has led to many simplifications in the codebase. A full doxygen-based API manual [1] is now provided together with updated user guides. A largely automated, versatile new testsuite directed both towards computational and physics features allows for quickly spotting arising errors. The states of trajectories are now savable and recoverable with full binary precision, allowing for trajectory continuation regardless of evolution method (single/ensemble Monte Carlo wave-function or Master equation trajectory). As the main new feature, the framework now presents Python bindings to the highest-level programming interface, so that actual simulations for given composite quantum systems can now be performed from Python. Catalogue identifier: AELU_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELU_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: yes No. of lines in distributed program, including test data, etc.: 492422 No. of bytes in distributed program, including test data, etc.: 8070987 Distribution format: tar.gz Programming language: C++/Python. Computer: i386-i686, x86 64. Operating system: In principle cross-platform, as yet tested only on UNIX-like systems (including Mac OS X). RAM: The framework itself takes about 60MB, which is fully shared. The additional memory taken by the program which defines the actual physical system (script) is typically less than 1MB. The memory storing the actual data scales with the system dimension for state-vector manipulations, and the square of the dimension for density-operator manipulations. This might easily be GBs, and often the memory of the machine limits the size of the simulated system. Classification: 4.3, 4.13, 6.2. External routines: Boost C++ libraries, GNU Scientific Library, Blitz++, FLENS, NumPy, SciPy Catalogue identifier of previous version: AELU_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 1381 Does the new version supersede the previous version?: Yes Nature of problem: Definition of (open) composite quantum systems out of elementary building blocks [2,3]. Manipulation of such systems, with emphasis on dynamical simulations such as Master-equation evolution [4] and Monte Carlo wave-function simulation [5]. Solution method: Master equation, Monte Carlo wave-function method Reasons for new version: The new version is mainly a feature release, but it does correct some problems of the previous version, especially as regards the build system. Summary of revisions: We give an example for a typical Python script implementing the ring-cavity system presented in Sec. 3.3 of Ref. [2]: Restrictions: Total dimensionality of the system. Master equation-few thousands. Monte Carlo wave-function trajectory-several millions. Unusual features: Because of the heavy use of compile-time algorithms, compilation of programs written in the framework may take a long time and much memory (up to several GBs). Additional comments: The framework is not a program, but provides and implements an application-programming interface for developing simulations in the indicated problem domain. 
We use several C++11 features which limits the range of supported compilers (g++ 4.7, clang++ 3.1) Documentation, http://cppqed.sourceforge.net/ Running time: Depending on the magnitude of the problem, can vary from a few seconds to weeks. References: [1] Entry point: http://cppqed.sf.net [2] A. Vukics, C++QEDv2: The multi-array concept and compile-time algorithms in the definition of composite quantum systems, Comp. Phys. Comm. 183(2012)1381. [3] A. Vukics, H. Ritsch, C++QED: an object-oriented framework for wave-function simulations of cavity QED systems, Eur. Phys. J. D 44 (2007) 585. [4] H. J. Carmichael, An Open Systems Approach to Quantum Optics, Springer, 1993. [5] J. Dalibard, Y. Castin, K. Molmer, Wave-function approach to dissipative processes in quantum optics, Phys. Rev. Lett. 68 (1992) 580.
ERIC Educational Resources Information Center
Schoen, Robert C.; LaVenia, Mark; Bauduin, Charity; Farina, Kristy
2016-01-01
The subject of this report is a pair of written, group-administered tests designed to measure the performance of grade 1 and grade 2 students at the beginning of the school year in the domain of number and operations. These tests build on previous versions field-tested in fall 2013 (Schoen, LaVenia, Bauduin, & Farina, 2016). Because the tests…
Replaying the game: hypnagogic images in normals and amnesics.
Stickgold, R; Malia, A; Maguire, D; Roddenberry, D; O'Connor, M
2000-10-13
Participants playing the computer game Tetris reported intrusive, stereotypical, visual images of the game at sleep onset. Three amnesic patients with extensive bilateral medial temporal lobe damage produced similar hypnagogic reports despite being unable to recall playing the game, suggesting that such imagery may arise without important contribution from the declarative memory system. In addition, control participants reported images from previously played versions of the game, demonstrating that remote memories can influence the images from recent waking experience.
Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC, Version 2.0: User's Manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1987-01-01
The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and the NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through October 16, 1987. The technical manual describes the NASARC concept and the algorithms which are used to implement it. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions have been incorporated in the Version 2.0 software over prior versions. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit into the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time reducing computer run time.
Remodeling Cildb, a popular database for cilia and links for ciliopathies
2014-01-01
Background: New generation technologies in cell and molecular biology generate large amounts of data that are hard to exploit for individual proteins. This is particularly true for ciliary and centrosomal research. Cildb is a multi-species knowledgebase gathering high-throughput studies, which allows advanced searches to identify proteins involved in centrosome, basal body or cilia biogenesis, composition and function. Combined with the localization of genetic diseases on human chromosomes given by OMIM links, candidate ciliopathy proteins can be compiled through Cildb searches. Methods: Orthology between recent versions of the whole proteomes was computed using Inparanoid, and ciliary high-throughput studies were remapped onto these recent versions. Results: Due to constant evolution of the ciliary and centrosomal field, Cildb has recently been upgraded twice, with new species whole proteomes and new ciliary studies, and the latter version displays a novel BioMart interface, much more intuitive than the previous ones. Conclusions: This already popular database is now designed for easier use and is up to date with regard to high-throughput ciliary studies. PMID:25422781
Aerodynamic analysis of three advanced configurations using the TranAir full-potential code
NASA Technical Reports Server (NTRS)
Madson, M. D.; Carmichael, R. L.; Mendoza, J. P.
1989-01-01
Computational results are presented for three advanced configurations: the F-16A with wing tip missiles and under wing fuel tanks, the Oblique Wing Research Aircraft, and an Advanced Turboprop research model. These results were generated by the latest version of the TranAir full potential code, which solves for transonic flow over complex configurations. TranAir embeds a surface paneled geometry definition in a uniform rectangular flow field grid, thus avoiding the use of surface conforming grids, and decoupling the grid generation process from the definition of the configuration. The new version of the code locally refines the uniform grid near the surface of the geometry, based on local panel size and/or user input. This method distributes the flow field grid points much more efficiently than the previous version of the code, which solved for a grid that was uniform everywhere in the flow field. TranAir results are presented for the three configurations and are compared with wind tunnel data.
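The local-refinement strategy can be pictured with a small quadtree sketch: Cartesian cells are split recursively only where they lie near the embedded surface, down to a target size, while the rest of the field grid stays coarse. This is illustrative only; the near-surface test and the target size below are assumptions, not TranAir's actual refinement criteria.

    def refine(cell, near_surface, target_size):
        # cell = (x, y, size); split cells near the surface until they reach target_size
        x, y, s = cell
        if not near_surface(cell) or s <= target_size:
            return [cell]                              # keep as a leaf of the quadtree
        h = s / 2
        leaves = []
        for dx in (0, h):
            for dy in (0, h):
                leaves += refine((x + dx, y + dy, h), near_surface, target_size)
        return leaves

    def near_circle(cell):                             # toy 'surface': a unit circle
        x, y, s = cell
        cx, cy = x + s / 2, y + s / 2
        return abs((cx**2 + cy**2) ** 0.5 - 1.0) < s   # cell centre near the circle

    grid = refine((-2.0, -2.0, 4.0), near_circle, 0.05)
    print(len(grid))                                   # fine cells cluster at the surface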
Mars Global Reference Atmospheric Model 2001 Version (Mars-GRAM 2001): Users Guide
NASA Technical Reports Server (NTRS)
Justus, C. G.; Johnson, D. L.
2001-01-01
This document presents the Mars Global Reference Atmospheric Model 2001 Version (Mars-GRAM 2001) and its new features. As with the previous version (Mars-GRAM 2000), all parameterizations for temperature, pressure, density, and winds versus height, latitude, longitude, time of day, and season (Ls) use input data tables from the NASA Ames Mars General Circulation Model (MGCM) for the surface through 80-km altitude and the University of Arizona Mars Thermospheric General Circulation Model (MTGCM) for 80 to 170 km. Mars-GRAM 2001 is based on topography from the Mars Orbiter Laser Altimeter (MOLA) and includes new MGCM data at the topographic surface. A new auxiliary program allows Mars-GRAM output to be used to compute shortwave (solar) and longwave (thermal) radiation at the surface and top of atmosphere. This memorandum includes instructions on obtaining Mars-GRAM source code and data files and for running the program. It also provides sample input and output and an example for incorporating Mars-GRAM as an atmospheric subroutine in a trajectory code.
TaylUR 3, a multivariate arbitrary-order automatic differentiation package for Fortran 95
NASA Astrophysics Data System (ADS)
von Hippel, G. M.
2010-03-01
This new version of TaylUR is based on a completely new core, which now is able to compute the numerical values of all of a complex-valued function's partial derivatives up to an arbitrary order, including mixed partial derivatives. New version program summary. Program title: TaylUR Catalogue identifier: ADXR_v3_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADXR_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPLv2 No. of lines in distributed program, including test data, etc.: 6750 No. of bytes in distributed program, including test data, etc.: 19 162 Distribution format: tar.gz Programming language: Fortran 95 Computer: Any computer with a conforming Fortran 95 compiler Operating system: Any system with a conforming Fortran 95 compiler Classification: 4.12, 4.14 Catalogue identifier of previous version: ADXR_v2_0 Journal reference of previous version: Comput. Phys. Comm. 176 (2007) 710 Does the new version supersede the previous version?: Yes Nature of problem: Problems that require potentially high orders of partial derivatives with respect to several variables or derivatives of complex-valued functions, such as e.g. momentum or mass expansions of Feynman diagrams in perturbative QFT, and which previous versions of TaylUR [1,2] cannot handle due to their lack of support for mixed partial derivatives. Solution method: Arithmetic operators and Fortran intrinsics are overloaded to act correctly on objects of a defined type taylor, which encodes a function along with its first few partial derivatives with respect to the user-defined independent variables. Derivatives of products and composite functions are computed using multivariate forms [3] of Leibniz's rule, $D^{\nu}(fg)=\sum_{\mu\le\nu}\frac{\nu!}{\mu!\,(\nu-\mu)!}D^{\mu}f\,D^{\nu-\mu}g$, where $\nu=(\nu_{1},\ldots,\nu_{d})$, $|\nu|=\sum_{j=1}^{d}\nu_{j}$, $\nu!=\prod_{j=1}^{d}\nu_{j}!$, $D^{\nu}f=\partial^{|\nu|}f/(\partial x_{1}^{\nu_{1}}\cdots\partial x_{d}^{\nu_{d}})$, and $\mu<\nu$ iff either $|\mu|<|\nu|$, or $|\mu|=|\nu|$ and $\mu_{1}=\nu_{1},\ldots,\mu_{k}=\nu_{k},\mu_{k+1}<\nu_{k+1}$ for some $k\in\{0,\ldots,d-1\}$, and of Faà di Bruno's formula, $D^{\nu}(f\circ g)=\sum_{p=1}^{|\nu|}(f^{(p)}\circ g)\sum_{s=1}^{|\nu|}\sum_{(k_{1},\ldots,k_{s};\lambda_{1},\ldots,\lambda_{s})}\frac{\nu!}{\prod_{j=1}^{s}k_{j}!\,(\lambda_{j}!)^{k_{j}}}\prod_{j=1}^{s}(D^{\lambda_{j}}g)^{k_{j}}$, where the sum is over $\{(k_{1},\ldots,k_{s};\lambda_{1},\ldots,\lambda_{s}):k_{i}>0,\ 0<\lambda_{1}<\cdots<\lambda_{s},\ \sum_{i=1}^{s}k_{i}=p,\ \sum_{i=1}^{s}k_{i}\lambda_{i}=\nu\}$. An indexed storage system is used to store the higher-order derivative tensors in a one-dimensional array. The relevant indices $(k_{1},\ldots,k_{s};\lambda_{1},\ldots,\lambda_{s})$ and the weights occurring in the sums in Leibniz's and Faà di Bruno's formulae are precomputed at startup and stored in static arrays for later use. Reasons for new version: The earlier version lacked support for mixed partial derivatives, but a number of projects of interest required them. Summary of revisions: The internal representation of a taylor object has changed to a one-dimensional array which contains the partial derivatives in ascending order, and in lexicographic order of the corresponding multiindex within the same order. The necessary mappings between multiindices and indices into the taylor objects' internal array are computed at startup. To support the change to a genuinely multivariate taylor type, the DERIVATIVE function is now implemented via an interface that accepts both the older format derivative(f,mu,n) $=\partial^{n}_{\mu}f$ and a new format derivative(f,mu(:)) $=D^{\mu}f$ that allows access to mixed partial derivatives. Another related extension to the functionality of the module is the HESSIAN function that returns the Hessian matrix of second derivatives of its argument. Since the calculation of all mixed partial derivatives can be very costly, and in many cases only some subset is actually needed, a masking facility has been added.
Calling the subroutine DEACTIVATE_DERIVATIVE with a multiindex as an argument will deactivate the calculation of the partial derivative belonging to that multiindex, and of all partial derivatives it can feed into. Similarly, calling the subroutine ACTIVATE_DERIVATIVE will activate the calculation of the partial derivative belonging to its argument, and of all partial derivatives that can feed into it. Moreover, it is possible to turn off the computation of mixed derivatives altogether by setting Diagonal_taylors to .TRUE.. It should be noted that any change of Diagonal_taylors or Taylor_order invalidates all existing taylor objects. To aid the better integration of TaylUR into the HPSrc library [4], routines SET_DERIVATIVE and SET_ALL_DERIVATIVES are provided as a means of manually constructing a taylor object with given derivatives. Restrictions: Memory and CPU time constraints may restrict the number of variables and Taylor expansion order that can be achieved. Loss of numerical accuracy due to cancellation may become an issue at very high orders. Unusual features: These are the same as in previous versions, but are enumerated again here for clarity. The complex conjugation operation assumes all independent variables to be real. The functions REAL and AIMAG do not convert to real type, but return a result of type taylor (with the real/imaginary part of each derivative taken) instead. The user-defined functions VALUE, REALVALUE and IMAGVALUE, which return the value of a taylor object as a complex number, and the real and imaginary part of this value, respectively, as a real number are also provided. Fortran 95 intrinsics that are defined only for arguments of real type ( ACOS, AINT, ANINT, ASIN, ATAN, ATAN2, CEILING, DIM, FLOOR, INT, LOG10, MAX, MAXLOC, MAXVAL, MIN, MINLOC, MINVAL, MOD, MODULO, NINT, SIGN) will silently take the real part of taylor-valued arguments unless the module variable Real_args_warn is set to .TRUE., in which case they will return a quiet NaN value (if supported by the compiler) when called with a taylor argument whose imaginary part exceeds the module variable Real_args_tol. In those cases where the derivative of a function becomes undefined at certain points (as for ABS, AINT, ANINT, MAX, MIN, MOD, and MODULO), while the value is well defined, the derivative fields will be filled with quiet NaN values (if supported by the compiler). Additional comments: This version of TaylUR is released under the second version of the GNU General Public License (GPLv2). Therefore anyone is free to use or modify the code for their own calculations. As part of the licensing, it is requested that any publications including results from the use of TaylUR or any modification derived from it cite Refs. [1,2] as well as this paper. Finally, users are also requested to communicate to the author details of such publications, as well as of any bugs found or of required or useful modifications made or desired by them. Running time: The running time of TaylUR operations grows rapidly with both the number of variables and the Taylor expansion order. Judicious use of the masking facility to drop unneeded higher derivatives can lead to significant accelerations, as can activation of the Diagonal_taylors variable whenever mixed partial derivatives are not needed. Acknowledgments: The author thanks Alistair Hart for helpful comments and suggestions. This work is supported by the Deutsche Forschungsgemeinschaft in the SFB/TR 09. References:G.M. 
von Hippel, TaylUR, an arbitrary-order diagonal automatic differentiation package for Fortran 95, Comput. Phys. Comm. 174 (2006) 569. G.M. von Hippel, New version announcement for TaylUR, an arbitrary-order diagonal automatic differentiation package for Fortran 95, Comput. Phys. Comm. 176 (2007) 710. G.M. Constantine, T.H. Savits, A multivariate Faa di Bruno formula with applications, Trans. Amer. Math. Soc. 348 (2) (1996) 503. A. Hart, G.M. von Hippel, R.R. Horgan, E.H. Müller, Automated generation of lattice QCD Feynman rules, Comput. Phys. Comm. 180 (2009) 2698, doi:10.1016/j.cpc.2009.04.021, arXiv:0904.0375.
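The core mechanism, arithmetic on truncated arrays of derivatives with the Leibniz rule supplying products, is easy to show in miniature. Below is a univariate Python analogue; TaylUR itself is Fortran 95, multivariate and complex-valued, so this only conveys the idea.

    import numpy as np
    from math import comb

    class Taylor:
        # d[k] holds the k-th derivative at the expansion point, truncated at order N
        N = 4
        def __init__(self, d):
            self.d = np.asarray(d, dtype=float)
        @classmethod
        def variable(cls, x0):                     # seed an independent variable
            d = np.zeros(cls.N + 1); d[0], d[1] = x0, 1.0
            return cls(d)
        def __add__(self, other):
            return Taylor(self.d + other.d)
        def __mul__(self, other):                  # Leibniz: (fg)^(n) = sum C(n,k) f^(k) g^(n-k)
            d = [sum(comb(n, k) * self.d[k] * other.d[n - k] for k in range(n + 1))
                 for n in range(self.N + 1)]
            return Taylor(d)

    x = Taylor.variable(2.0)
    print((x * x * x).d)    # derivatives of x^3 at x0 = 2: [8, 12, 12, 6, 0]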
Trace contaminant control simulation computer program, version 8.1
NASA Technical Reports Server (NTRS)
Perry, J. L.
1994-01-01
The Trace Contaminant Control Simulation computer program is a tool for assessing the performance of various process technologies for removing trace chemical contamination from a spacecraft cabin atmosphere. Included in the simulation are chemical and physical adsorption by activated charcoal, chemical adsorption by lithium hydroxide, absorption by humidity condensate, and low- and high-temperature catalytic oxidation. Means are provided for simulating regenerable as well as nonregenerable systems. The program provides an overall mass balance of chemical contaminants in a spacecraft cabin given specified generation rates. Removal rates are based on device flow rates specified by the user and calculated removal efficiencies based on cabin concentration and removal technology experimental data. Versions 1.0 through 8.0 are documented in NASA TM-108409. TM-108409 also contains a source file listing for version 8.0. Changes to version 8.0 are documented in this technical memorandum and a source file listing for the modified version, version 8.1, is provided. Detailed descriptions for the computer program subprograms are extracted from TM-108409 and modified as necessary to reflect version 8.1. Version 8.1 supersedes version 8.0. Information on a separate user's guide is available from the author.
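The bookkeeping the program performs reduces, for a single contaminant and a single removal device, to the cabin mass balance V dC/dt = G - eta*Q*C, whose steady state is C = G/(eta*Q). A sketch with illustrative numbers (not values from the TM):

    # single-contaminant cabin mass balance, V dC/dt = G - eta*Q*C
    V, G, Q, eta = 100.0, 0.5e-3, 15.0, 0.95   # m^3, g/h, m^3/h, single-pass efficiency
    dt, C = 0.1, 0.0                           # time step in h, concentration in g/m^3
    for _ in range(int(200.0 / dt)):           # march to (near) steady state
        C += dt * (G - eta * Q * C) / V
    print(C, G / (eta * Q))                    # numerical vs analytic steady state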
Aircraft noise prediction program propeller analysis system IBM-PC version user's manual version 2.0
NASA Technical Reports Server (NTRS)
Nolan, Sandra K.
1988-01-01
The IBM-PC version of the Aircraft Noise Prediction Program (ANOPP) Propeller Analysis System (PAS) is a set of computational programs for predicting the aerodynamics, performance, and noise of propellers. The ANOPP-PAS is a subset of a larger version of ANOPP which can be executed on CDC or VAX computers. This manual provides a description of the IBM-PC version of the ANOPP-PAS and its prediction capabilities, and instructions on how to use the system on an IBM-XT or IBM-AT personal computer. Sections within the manual document installation, system design, ANOPP-PAS usage, data entry preprocessors, and ANOPP-PAS functional modules and procedures. Appendices to the manual include a glossary of ANOPP terms and information on error diagnostics and recovery techniques.
VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system. Version 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shapiro, A.; Huria, H.C.; Cho, K.W.
1991-12-01
VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules, are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation which is distributed with the code, to assist in developing the input for the Input Processor.
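The numerical heart of such a code is the fission-source (power) iteration on the multigroup diffusion equations. A one-group, one-dimensional sketch with illustrative constants shows the structure; VENTURE itself is multidimensional and multigroup.

    import numpy as np

    D, siga, nusigf = 1.0, 0.02, 0.025     # illustrative one-group constants (cm, 1/cm, 1/cm)
    L, n = 100.0, 200                      # slab width and number of mesh cells
    h = L / n
    # finite-difference diffusion operator with zero-flux boundaries
    A = (np.diag(np.full(n, 2*D/h**2 + siga))
         + np.diag(np.full(n - 1, -D/h**2), 1)
         + np.diag(np.full(n - 1, -D/h**2), -1))

    phi, k = np.ones(n), 1.0
    for _ in range(500):                   # power iteration on the fission source
        phi_new = np.linalg.solve(A, nusigf * phi / k)
        k_new = k * (nusigf * phi_new).sum() / (nusigf * phi).sum()
        phi = phi_new / phi_new.max()
        if abs(k_new - k) < 1e-9:
            k = k_new
            break
        k = k_new
    print(k)   # close to the analytic k = nusigf / (siga + D*(pi/L)**2)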
Kim, Stella H; Strutt, Adriana M; Olabarrieta-Landa, Laiene; Lequerica, Anthony H; Rivera, Diego; De Los Reyes Aragon, Carlos Jose; Utria, Oscar; Arango-Lasprilla, Juan Carlos
2018-02-23
The Boston Naming Test (BNT) is a widely used measure of confrontation naming ability that has been criticized for its questionable construct validity for non-English speakers. This study investigated item difficulty and construct validity of the Spanish version of the BNT to assess cultural and linguistic impact on performance. Subjects were 1298 healthy Spanish-speaking adults from Colombia. They were administered the 60- and 15-item Spanish versions of the BNT. A Rasch analysis was computed to assess dimensionality, item hierarchy, targeting, reliability, and item fit. Both versions of the BNT satisfied requirements for unidimensionality. Although internal consistency was excellent for the 60-item BNT, order of difficulty did not increase consistently with item number, and there were a number of items that did not fit the Rasch model. For the 15-item BNT, a total of 5 items changed position on the item hierarchy, with 7 poor-fitting items. Internal consistency was acceptable. Construct validity of the BNT remains a concern when it is administered to non-English-speaking populations. Similar to previous findings, the order of item presentation did not correspond with increasing item difficulty, and both versions were inadequate at assessing high naming ability.
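For readers unfamiliar with the technique: a Rasch analysis models the probability of a correct response as sigmoid(theta_person - b_item) and asks how well the observed item responses fit that one-parameter model. Below is a bare-bones joint maximum-likelihood sketch, not the software used in the study; the simulated data are purely illustrative.

    import numpy as np
    from scipy.special import expit

    def rasch_jml(X, sweeps=50):
        # joint ML for P(x=1) = sigmoid(theta_i - b_j); X is a persons-by-items 0/1 matrix
        n, m = X.shape
        theta, b = np.zeros(n), np.zeros(m)
        for _ in range(sweeps):
            p = expit(theta[:, None] - b[None, :])
            info = np.maximum(p * (1 - p), 1e-9)
            theta += (X - p).sum(1) / info.sum(1)   # Newton step for abilities
            theta = np.clip(theta, -6, 6)           # guard against perfect scores
            p = expit(theta[:, None] - b[None, :])
            info = np.maximum(p * (1 - p), 1e-9)
            b -= (X - p).sum(0) / info.sum(0)       # Newton step for difficulties
            b -= b.mean()                           # anchor the scale
        return theta, b

    rng = np.random.default_rng(0)
    true_b, true_t = np.linspace(-2, 2, 15), rng.normal(0, 1, 500)
    X = (rng.uniform(size=(500, 15)) < expit(true_t[:, None] - true_b[None, :])).astype(int)
    theta_hat, b_hat = rasch_jml(X)
    print(np.corrcoef(true_b, b_hat)[0, 1])         # recovered difficulties track the truth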
Multi-Strain Deterministic Chaos in Dengue Epidemiology, A Challenge for Computational Mathematics
NASA Astrophysics Data System (ADS)
Aguiar, Maíra; Kooi, Bob W.; Stollenwerk, Nico
2009-09-01
Recently, we have analysed epidemiological models of competing strains of pathogens and hence differences in transmission for primary versus secondary infection due to interaction of the strains with previously acquired immunity, as has been described for dengue fever and is known as antibody-dependent enhancement (ADE). These models show a rich variety of dynamics through bifurcations up to deterministic chaos. Including temporary cross-immunity even enlarges the parameter range of such chaotic attractors, and also gives rise to various coexisting attractors, which are difficult to identify by standard numerical bifurcation programs using continuation methods. A combination of techniques, including classical bifurcation plots and Lyapunov exponent spectra, has to be applied to gain further insight into such dynamical structures. In particular, Lyapunov spectra, which quantify the predictability horizon in the epidemiological system, are computationally very demanding. We show ways to speed up computations of such Lyapunov spectra by a factor of more than ten by parallelizing previously used sequential C programs. Such fast computations of Lyapunov spectra will be of particular use in future investigations of seasonally forced versions of the present models, as they are needed for data analysis.
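The parallelization is natural because each parameter value is an independent computation. Here is a sketch of the pattern with a toy system: Python's multiprocessing stands in for the parallelized C programs, and the logistic map replaces the multi-strain model.

    import numpy as np
    from multiprocessing import Pool

    def lyapunov_logistic(r, n_burn=1000, n=100000):
        # largest Lyapunov exponent of x -> r x (1-x) as the time average of ln|f'(x)|
        x = 0.5
        for _ in range(n_burn):
            x = r * x * (1 - x)
        s = 0.0
        for _ in range(n):
            x = r * x * (1 - x)
            s += np.log(abs(r * (1 - 2 * x)))
        return s / n

    if __name__ == "__main__":
        rs = np.linspace(3.5, 4.0, 64)
        with Pool() as pool:                       # embarrassingly parallel over parameters
            lams = pool.map(lyapunov_logistic, rs)
        print(max(lams))                           # positive exponents flag chaos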
Dynamic Analysis of Spur Gear Transmissions (DANST). PC Version 3.00 User Manual
NASA Technical Reports Server (NTRS)
Oswald, Fred B.; Lin, Hsiang Hsi; Delgado, Irebert R.
1996-01-01
DANST is a FORTRAN computer program for static and dynamic analysis of spur gear systems. The program can be used for parametric studies to predict the static transmission error, dynamic load, tooth bending stress and other properties of spur gears as they are influenced by operating speed, torque, stiffness, damping, inertia, and tooth profile. DANST performs geometric modeling and dynamic analysis for low- or high-contact-ratio spur gears. DANST can simulate gear systems with contact ratios ranging from one to three. It was designed to be easy to use and it is extensively documented in several previous reports and by comments in the source code. This report describes installing and using a new PC version of DANST, covers input data requirements and presents examples.
DFMSPH14: A C-code for the double folding interaction potential of two spherical nuclei
NASA Astrophysics Data System (ADS)
Gontchar, I. I.; Chushnyakova, M. V.
2016-09-01
This is a new version of the DFMSPH code designed to obtain the nucleus-nucleus potential by using the double folding model (DFM) and in particular to find the Coulomb barrier. The new version uses the charge, proton, and neutron density distributions provided by the user. Also we added an option for fitting the DFM potential by the Gross-Kalinowski profile. The main functionalities of the original code (e.g. the nucleus-nucleus potential as a function of the distance between the centers of mass of colliding nuclei, the Coulomb barrier characteristics, etc.) have not been modified. Catalog identifier: AEFH_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFH_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 7211 No. of bytes in distributed program, including test data, etc.: 114404 Distribution format: tar.gz Programming language: C Computer: PC and Mac Operating system: Windows XP and higher, MacOS, Unix/Linux Memory required to execute with typical data: below 10 Mbyte Classification: 17.9 Catalog identifier of previous version: AEFH_v1_0 Journal reference of previous version: Comp. Phys. Comm. 181 (2010) 168 Does the new version supersede the previous version?: Yes Nature of physical problem: The code calculates in a semimicroscopic way the bare interaction potential between two colliding spherical nuclei as a function of the center of mass distance. The height and the position of the Coulomb barrier are found. The calculated potential is approximated by an analytical profile (Woods-Saxon or Gross-Kalinowski) near the barrier. Dependence of the barrier parameters upon the characteristics of the effective NN forces (like, e.g. the range of the exchange part of the nuclear term) can be investigated. Method of solution: The nucleus-nucleus potential is calculated using the double folding model with the Coulomb and the effective M3Y NN interactions. For the direct parts of the Coulomb and the nuclear terms, the Fourier transform method is used. In order to calculate the exchange parts, the density matrix expansion method is applied. Typical running time: less than 1 minute. Reason for new version: Many users asked us how to implement their own density distributions in the DFMSPH. Now this option has been added. Also we found that the calculated Double-Folding Potential (DFP) is approximated more accurately by the Gross-Kalinowski (GK) profile. This option has also been added.
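The Fourier-transform step is easy to show in stripped-down form: for spherical densities, V(R) = (1/2 pi^2) int k^2 rho1~(k) rho2~(k) v~(k) j0(kR) dk, with rho~(k) = 4 pi int r^2 rho(r) j0(kr) dr. The sketch below folds two Fermi densities with the two-Yukawa direct part of the M3Y-Reid force only; the density parameters are illustrative and the exchange part is omitted.

    import numpy as np

    def rho_fermi(r, A, c, a):                     # two-parameter Fermi density, normalised to A
        rho = 1.0 / (1.0 + np.exp((r - c) / a))
        return rho * A / np.trapz(4 * np.pi * r**2 * rho, r)

    def ft_radial(r, f, k):                        # f~(k) = 4 pi int r^2 f(r) j0(kr) dr
        kr = np.outer(k, r)
        j0 = np.where(kr > 1e-12, np.sin(kr) / np.maximum(kr, 1e-12), 1.0)
        return 4 * np.pi * np.trapz(r**2 * f * j0, r, axis=1)

    r = np.linspace(1e-4, 15, 1500)                # fm
    k = np.linspace(1e-4, 3, 600)                  # fm^-1
    rho1 = rho_fermi(r, 16, 2.6, 0.45)             # 16O-like density (illustrative)
    rho2 = rho_fermi(r, 208, 6.65, 0.55)           # 208Pb-like density (illustrative)
    # direct M3Y-Reid term as two Yukawas, Fourier-transformed analytically (MeV)
    vk = 7999 * 4*np.pi / (4.0 * (k**2 + 16.0)) - 2134 * 4*np.pi / (2.5 * (k**2 + 6.25))
    f1, f2 = ft_radial(r, rho1, k), ft_radial(r, rho2, k)
    R = np.linspace(6, 14, 81)
    j0R = np.sin(np.outer(R, k)) / np.outer(R, k)
    V = np.trapz(k**2 * f1 * f2 * vk * j0R, k, axis=1) / (2 * np.pi**2)
    print(V[0], V[-1])                             # nuclear attraction fading with distance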
NASA Technical Reports Server (NTRS)
Burgin, G. H.; Fogel, L. J.; Phelps, J. P.
1975-01-01
A technique for computer simulation of air combat is described. Volume 1 describes the computer program and its development in general terms. Two versions of the program exist. Both incorporate a logic for selecting and executing air combat maneuvers with performance models of specific fighter aircraft. In the batch processing version, the flight paths of two aircraft engaged in interactive aerial combat and controlled by the same logic are computed. The real-time version permits human pilots to fly air-to-air combat against the adaptive maneuvering logic (AML) in the Langley Differential Maneuvering Simulator (DMS). Volume 2 consists of a detailed description of the computer programs.
COSMIC monthly progress report
NASA Technical Reports Server (NTRS)
1993-01-01
Activities of the Computer Software Management and Information Center (COSMIC) are summarized for the month of August, 1993. Tables showing the current inventory of programs available from COSMIC are presented and program processing and evaluation activities are discussed. Ten articles were prepared for publication in the NASA Tech Brief Journal. These articles (included in this report) describe the following software items: (1) MOM3D - A Method of Moments Code for Electromagnetic Scattering (UNIX Version); (2) EM-Animate - Computer Program for Displaying and Animating the Steady-State Time-Harmonic Electromagnetic Near Field and Surface-Current Solutions; (3) MOM3D - A Method of Moments Code for Electromagnetic Scattering (IBM PC Version); (4) M414 - MIL-STD-414 Variable Sampling Procedures Computer Program; (5) MEDOF - Minimum Euclidean Distance Optimal Filter; (6) CLIPS 6.0 - C Language Integrated Production System, Version 6.0 (Macintosh Version); (7) CLIPS 6.0 - C Language Integrated Production System, Version 6.0 (IBM PC Version); (8) CLIPS 6.0 - C Language Integrated Production System, Version 6.0 (UNIX Version); (9) CLIPS 6.0 - C Language Integrated Production System, Version 6.0 (DEC VAX VMS Version); and (10) TFSSRA - Thick Frequency Selective Surface with Rectangular Apertures. Activities in the areas of marketing, customer service, benefits identification, maintenance and support, and dissemination are also described along with a budget summary.
AUTO_DERIV: Tool for automatic differentiation of a Fortran code
NASA Astrophysics Data System (ADS)
Stamatiadis, S.; Farantos, S. C.
2010-10-01
AUTO_DERIV is a module comprising a set of Fortran 95 procedures which can be used to calculate the first and second partial derivatives (mixed or not) of any continuous function with many independent variables. The mathematical function should be expressed as one or more Fortran 77/90/95 procedures. A new type of variables is defined and the overloading mechanism of functions and operators provided by the Fortran 95 language is extensively used to define the differentiation rules. Proper (standard-complying) handling of floating-point exceptions is provided by using the IEEE_EXCEPTIONS intrinsic module (Technical Report 15580, incorporated in Fortran 2003). New version program summary. Program title: AUTO_DERIV Catalogue identifier: ADLS_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADLS_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2963 No. of bytes in distributed program, including test data, etc.: 10 314 Distribution format: tar.gz Programming language: Fortran 95 + (optionally) TR-15580 (floating-point exception handling) Computer: all platforms with a Fortran 95 compiler Operating system: Linux, Windows, MacOS Classification: 4.12, 6.2 Catalogue identifier of previous version: ADLS_v1_0 Journal reference of previous version: Comput. Phys. Comm. 127 (2000) 343 Does the new version supersede the previous version?: Yes Nature of problem: The need to calculate accurate derivatives of a multivariate function frequently arises in computational physics and chemistry. The most versatile approach to evaluate them by a computer, automatically and to machine precision, is via user-defined types and operator overloading. AUTO_DERIV is a Fortran 95 implementation of them, designed to evaluate the first and second derivatives of a function of many variables. Solution method: The mathematical rules for differentiation of sums, products, quotients, and elementary functions, in conjunction with the chain rule for compound functions, are applied. The function should be expressed as one or more Fortran 77/90/95 procedures. A new type of variables is defined and the overloading mechanism of functions and operators provided by the Fortran 95 language is extensively used to implement the differentiation rules. Reasons for new version: The new version supports Fortran 95, handles properly the floating-point exceptions, and is faster due to internal reorganization. All discovered bugs are fixed. Summary of revisions: The code was rewritten extensively to benefit from features introduced in Fortran 95. Additionally, there was a major internal reorganization of the code, resulting in faster execution. The user interface described in the original paper was not changed. The values that the user must or should specify before compilation (essentially, the number of independent variables) were moved into the ad_types module. There were many minor bug fixes. One important bug was found and fixed: the code did not handle correctly the overloading of ** in a**λ when a=0. The case of division by zero and the discontinuity of the function at the requested point are indicated by standard IEEE exceptions (IEEE_DIVIDE_BY_ZERO and IEEE_INVALID, respectively).
If the compiler does not support IEEE exceptions, a module with the appropriate name is provided, imitating the behavior of the 'standard' module in the sense that it raises the corresponding exceptions. It is up to the compiler (through certain flags probably) to detect them. Restrictions: None imposed by the program. There are certain limitations that may appear mostly due to the specific implementation chosen in the user code. They can always be overcome by recoding parts of the routines developed by the user or by modifying AUTO_DERIV according to specific instructions given in [1]. The common restrictions of available memory and the capabilities of the compiler are the same as the original version. Additional comments: The program has been tested using the following compilers: Intel ifort, GNU gfortran, NAGWare f95, g95. Running time: The typical running time for the program depends on the compiler and the complexity of the differentiated function. A rough estimate is that AUTO_DERIV is ten times slower than the evaluation of the analytical ('by hand') function value and derivatives (if they are available). References:S. Stamatiadis, R. Prosmiti, S.C. Farantos, AUTO_DERIV: tool for automatic differentiation of a Fortran code, Comput. Phys. Comm. 127 (2000) 343.
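The overloading mechanism is easy to demonstrate in miniature: carry (value, first derivative, second derivative) through every operation, applying the product and chain rules inside the overloaded operators and intrinsics. Below is a univariate Python analogue of what the Fortran module does with its derived type; it is illustrative, not the module's API.

    from math import sin, cos

    class D2:
        # value, first and second derivative with respect to one variable
        def __init__(self, v, d1=0.0, d2=0.0):
            self.v, self.d1, self.d2 = v, d1, d2
        def __mul__(self, o):
            return D2(self.v * o.v,
                      self.d1 * o.v + self.v * o.d1,
                      self.d2 * o.v + 2 * self.d1 * o.d1 + self.v * o.d2)

    def dsin(x):
        # chain rule: (sin f)' = cos(f) f', (sin f)'' = cos(f) f'' - sin(f) (f')^2
        return D2(sin(x.v), cos(x.v) * x.d1, cos(x.v) * x.d2 - sin(x.v) * x.d1**2)

    x = D2(1.0, d1=1.0)            # independent variable seeded with dx/dx = 1
    y = dsin(x * x)                # f(x) = sin(x^2)
    print(y.v, y.d1, y.d2)         # f(1), 2 cos 1, 2 cos 1 - 4 sin 1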
Zheng, Chunmiao; Hill, Mary Catherine; Hsieh, Paul A.
2001-01-01
MODFLOW-2000, the newest version of MODFLOW, is a computer program that numerically solves the three-dimensional ground-water flow equation for a porous medium using a finite-difference method. MT3DMS, the successor to MT3D, is a computer program for modeling multi-species solute transport in three-dimensional ground-water systems using multiple solution techniques, including the finite-difference method, the method of characteristics (MOC), and the total-variation-diminishing (TVD) method. This report documents a new version of the Link-MT3DMS Package, which enables MODFLOW-2000 to produce the information needed by MT3DMS, and also discusses new visualization software for MT3DMS. Unlike the Link-MT3D Packages that coordinated previous versions of MODFLOW and MT3D, the new Link-MT3DMS Package requires an input file that, among other things, provides enhanced support for additional MODFLOW sink/source packages and allows list-directed (free) format for the flow-transport link file produced by the flow model. The report contains four parts: (a) documentation of the Link-MT3DMS Package Version 6 for MODFLOW-2000; (b) discussion of several issues related to simulation setup and input data preparation for running MT3DMS with MODFLOW-2000; (c) description of two test example problems, with comparison to results obtained using another MODFLOW-based transport program; and (d) overview of post-simulation visualization and animation using the U.S. Geological Survey's Model Viewer.
Tool for Analysis and Reduction of Scientific Data
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
The Automated Scheduling and Planning Environment (ASPEN) computer program has been updated to version 3.0. ASPEN as a whole (up to version 2.0) has been summarized, and selected aspects of ASPEN have been discussed in several previous NASA Tech Briefs articles. Restated briefly, ASPEN is a modular, reconfigurable, application software framework for solving batch problems that involve reasoning about time, activities, states, and resources. Applications of ASPEN can include planning spacecraft missions, scheduling of personnel, and managing supply chains, inventories, and production lines. ASPEN 3.0 can be customized for a wide range of applications and for a variety of computing environments that include various central processing units and random-access memories. Domain-specific reasoning modules (e.g., modules for determining orbits for spacecraft) can easily be plugged into ASPEN 3.0. Improvements over other, similar software that have been incorporated into ASPEN 3.0 include a provision for more expressive time-line values, new parsing capabilities afforded by an ASPEN language based on the Extensible Markup Language, improved search capabilities, and improved interfaces to other, utility-type software (notably including MATLAB).
Implementing Journaling in a Linux Shared Disk File System
NASA Technical Reports Server (NTRS)
Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew;
2000-01-01
In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher-performance computer systems may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk, cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing eight 4-disk enclosures were conducted; these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.
METLIN-PC: An applications-program package for problems of mathematical programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pshenichnyi, B.N.; Sobolenko, L.A.; Sosnovskii, A.A.
1994-05-01
The METLIN-PC applications-program package (APP) was developed at the V.M. Glushkov Institute of Cybernetics of the Academy of Sciences of Ukraine on IBM PC XT and AT computers. The present version of the package was written in Turbo Pascal and Fortran-77. METLIN-PC is chiefly designed for the solution of smooth problems of mathematical programming and is a further development of the METLIN prototype, which was created earlier on a BESM-6 computer. The principal property of the previous package is retained: the applications modules employ a single approach based on the linearization method of B.N. Pshenichnyi. Hence the name "METLIN".
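Pshenichnyi's linearization method belongs to the family of sequential-linear-programming schemes, whose basic step is easy to sketch. Below is a minimal Python illustration, assuming a toy convex problem and a fixed trust region; the problem data, the step logic, and the use of scipy.optimize.linprog are our stand-ins, not METLIN-PC code (the real method adds a merit function and step-size control).

```python
# One linearization (SLP) iteration: minimize f subject to g(x) <= 0 by
# repeatedly solving the linearized problem inside a box trust region.
import numpy as np
from scipy.optimize import linprog

def f(x):  return (x[0] - 2.0)**2 + (x[1] - 1.0)**2
def df(x): return np.array([2 * (x[0] - 2.0), 2 * (x[1] - 1.0)])
def g(x):  return np.array([x[0]**2 + x[1]**2 - 2.0])   # constraint g <= 0
def dg(x): return np.array([[2 * x[0], 2 * x[1]]])

x, delta = np.array([0.0, 0.0]), 0.25          # start point, trust region
for _ in range(50):
    # LP in the step d:  min df.d  s.t.  g + dg.d <= 0,  |d_i| <= delta
    res = linprog(df(x), A_ub=dg(x), b_ub=-g(x),
                  bounds=[(-delta, delta)] * 2, method="highs")
    if not res.success or np.linalg.norm(res.x) < 1e-8:
        break
    x = x + res.x
print(x, f(x), g(x))    # approaches the constrained minimum on the disk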
NASA Astrophysics Data System (ADS)
Nikitin, A. V.; Rey, M.; Champion, J. P.; Tyuterev, Vl. G.
2012-07-01
The MIRS software for the modeling of ro-vibrational spectra of polyatomic molecules was considerably extended and improved. The original version [Nikitin AV, Champion JP, Tyuterev VlG. The MIRS computer package for modeling the rovibrational spectra of polyatomic molecules. J Quant Spectrosc Radiat Transf 2003;82:239-49.] was especially designed for separate or simultaneous treatments of complex band systems of polyatomic molecules. It was set up in the framework of effective polyad models by using algorithms based on advanced group-theory algebra to take full account of symmetry properties. It has been successfully used for predictions and data fitting (positions and intensities) of numerous spectra of symmetric and spherical top molecules within the vibration extrapolation scheme. The new version offers more advanced possibilities for spectra calculations and modeling by removing several previous limitations, particularly on the size of polyads and the number of tensors involved. It allows dealing with overlapping polyads and includes more efficient and faster algorithms for the calculation of coefficients related to molecular symmetry properties (6C, 9C and 12C symbols for the C3v, Td, and Oh point groups), as well as for better convergence of least-squares-fit iterations. The new version is not limited to polyad effective models. It also allows direct predictions using full ab initio ro-vibrational normal-mode Hamiltonians converted into the irreducible tensor form. Illustrative examples on CH3D, CH4, CH3Cl, CH3F and PH3 are reported, reflecting the present status of available data. It is written in C++ for a standard PC operating under Windows. The full package, including on-line documentation and recent data, is freely available at http://www.iao.ru/mirs/mirs.htm or http://xeon.univ-reims.fr/Mirs/ or http://icb.u-bourgogne.fr/OMR/SMA/SHTDS/MIRS.html and as supplementary data from the online version of the article.
Assimilation of SBUV Version 8 Radiances into the GEOS Ozone DAS
NASA Technical Reports Server (NTRS)
Mueller, Martin D.; Stajner, Ivanka; Bhartia, Pawan K.
2004-01-01
In operational weather forecasting, the assimilation of brightness temperatures from satellite sounders, instead of the assimilation of 1D retrievals, has become increasingly common practice over the last two decades. Compared to these systems, assimilation of trace gases is still at a relatively early stage of development, and efforts to assimilate radiances directly instead of retrieved products began only a few years ago, partly because it requires much more computing power due to the employment of a radiative transport forward model (FM). This paper focuses on a method to assimilate SBUV/2 radiances (albedos) into the Global Earth Observation System Ozone Data Assimilation Scheme (GEOS O3 DAS). While SBUV-type instruments cannot compete with newer sensors in terms of spectral and horizontal resolution, they feature a continuous data record back to 1978, which makes them very valuable for trend studies. Assimilation can help spread their ground coverage over the whole globe, as has been previously demonstrated with the GEOS O3 DAS using SBUV Version 6 ozone profiles. Now the DAS has been updated to use the newly released SBUV Version 8 data. We will compare preliminary results of SBUV radiance assimilation with the assimilation of retrieved ozone profiles, discuss methods to deal with the increased computational load, and try to assess the error characteristics and future potential of the new approach.
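The analysis step shared by retrieval and radiance assimilation can be written in a few lines. The sketch below is a generic optimal-interpolation update, assuming a toy linear forward model H in place of the full radiative transport FM; every dimension and covariance here is invented for illustration and has nothing to do with the actual GEOS O3 DAS configuration.

```python
# Optimal-interpolation / 3D-Var analysis update: the observation
# operator H maps the model ozone state into radiance space, and the
# Kalman gain blends background and observations by their error stats.
import numpy as np

n, m = 5, 2                                   # state layers, observations
xb = np.array([1.0, 2.0, 3.0, 2.5, 1.5])      # background ozone profile
B = 0.1 * np.eye(n)                           # background error covariance
H = np.random.default_rng(0).random((m, n))   # toy linearized forward model
R = 0.05 * np.eye(m)                          # observation error covariance
y = H @ xb + np.array([0.2, -0.1])            # "observed" radiances

# Kalman gain K = B H^T (H B H^T + R)^-1, then analysis xa = xb + K innovation
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + K @ (y - H @ xb)
print(xa)
```

Assimilating radiances rather than retrievals simply replaces a near-trivial H with a radiative transfer model, which is where the extra computational load discussed above comes from.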
VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shapiro, A.; Huria, H.C.; Cho, K.W.
1991-12-01
VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules, are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation which is distributed with the code, to assist in developing the input for the Input Processor.
Kork, John O.
1983-01-01
Version 1.00 of the Asynchronous Communications Support supplied with the IBM Personal Computer must be modified to be used for communications with Multics. Version 2.00 can be used as supplied, but error checking and screen printing capabilities can be added by using modifications very similar to those required for Version 1.00. This paper describes and lists required programs on Multics and appropriate modifications to both Versions 1.00 and 2.00 of the programs supplied by IBM.
Raptor: An Enterprise Knowledge Discovery Engine Version 2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
2011-08-31
The Raptor Version 2.0 computer code uses a set of documents as seed documents to recommend documents of interest from a large, target set of documents. The computer code provides results that show the recommended documents with the highest similarity to the seed documents. Version 2.0 was specifically developed to work with SharePoint 2007 and MS SQL server.
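Seed-based document recommendation of the kind Raptor performs is commonly built on vector-space similarity. The sketch below is a generic stand-in, assuming TF-IDF features and cosine similarity to the centroid of the seed set; Raptor's actual ranking model, and its SharePoint/SQL integration, are not reproduced here.

```python
# Rank target documents by cosine similarity to the seed-set centroid
# in TF-IDF space; highest-scoring documents are "recommended".
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

seeds = ["reactor coolant pump vibration report",
         "pump impeller wear analysis"]
targets = ["quarterly safety summary",
           "coolant pump bearing inspection notes",
           "cafeteria menu"]

vec = TfidfVectorizer()
X = vec.fit_transform(seeds + targets)          # rows: seeds then targets
seed_centroid = np.asarray(X[:len(seeds)].mean(axis=0))

scores = cosine_similarity(seed_centroid, X[len(seeds):])[0]
for doc, s in sorted(zip(targets, scores), key=lambda t: -t[1]):
    print(f"{s:.3f}  {doc}")
```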
Code C# for chaos analysis of relativistic many-body systems with reactions
NASA Astrophysics Data System (ADS)
Grossu, I. V.; Besliu, C.; Jipa, Al.; Stan, E.; Esanu, T.; Felea, D.; Bordeianu, C. C.
2012-04-01
In this work we present a reaction module for the "Chaos Many-Body Engine" (Grossu et al., 2010 [1]). Following our goal of creating a customizable, object-oriented code library, the list of all possible reactions, including the corresponding properties (particle types, probability, cross section, particle lifetime, etc.), can be supplied as a parameter using a specific XML input file. Inspired by the Poincaré section, we also propose the "Clusterization Map" as a new, intuitive analysis method for many-body systems. As an example, we implemented a numerical toy model for nuclear relativistic collisions at 4.5 A GeV/c (the SKM200 Collaboration). An encouraging agreement with experimental data was obtained for momentum, energy, rapidity, and angular π distributions. Catalogue identifier: AEGH_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGH_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 184 628 No. of bytes in distributed program, including test data, etc.: 7 905 425 Distribution format: tar.gz Programming language: Visual C#.NET 2005 Computer: PC Operating system: .Net Framework 2.0 running on MS Windows Has the code been vectorized or parallelized?: Each many-body system is simulated on a separate execution thread; one processor is used for each many-body system. RAM: 128 Megabytes Classification: 6.2, 6.5 Catalogue identifier of previous version: AEGH_v1_0 Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 1464 External routines: .Net Framework 2.0 Library Does the new version supersede the previous version?: Yes Nature of problem: Chaos analysis of three-dimensional, relativistic many-body systems with reactions. Solution method: Second-order Runge-Kutta algorithm for simulating relativistic many-body systems with reactions. Object-oriented solution, easy to reuse, extend and customize, in any development environment which accepts .Net assemblies or COM components. Treatment of two-particle reactions and decays. For each particle, calculation of the time measured in the particle's reference frame, according to the instantaneous velocity. Possibility to dynamically add particle properties (spin, isospin, etc.) and reactions/decays, using a specific XML input file. Basic support for Monte Carlo simulations. Implementation of: Lyapunov exponent, "fragmentation level", "average system radius", "virial coefficient", "clusterization map", and an energy-conservation precision test. As an example of use, we implemented a toy model for nuclear relativistic collisions at 4.5 A GeV/c. Reasons for new version: Following our goal of applying chaos theory to nuclear relativistic collisions at 4.5 A GeV/c, we developed a reaction module integrated with the Chaos Many-Body Engine. In the previous version, inheriting the Particle class was the only way of implementing additional particle properties (spin, isospin, and so on). In the new version, particle properties can be dynamically added using a dictionary object. The application was improved to calculate the time measured in the own reference frame of each particle. The module treats two-particle reactions (a+b→c+d), decays (a→c+d), stimulated decays, and more complicated schemes implemented as various combinations of the previous reactions.
Following our goal of creating a flexible application, the reaction list, including the corresponding properties (cross sections, particle lifetimes, etc.), can be supplied as a parameter using a specific XML configuration file. The simulation output files were modified for systems with reactions, while also ensuring backward compatibility. We propose the "Clusterization Map" as a new investigation method for many-body systems. The multi-dimensional Lyapunov exponent was adapted for use with systems of variable structure. Basic support for Monte Carlo simulations was also added. Additional comments: A Windows Forms application for testing the engine is included; deployment is a simple copy/paste. Running time: Quadratic complexity.
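The two mechanisms the abstract describes, an XML-supplied reaction list and an acceptance/rejection decision per collision, can be illustrated compactly. The following Python sketch assumes an invented XML schema and a toy probability model; neither matches the engine's actual C# input format, and the particle names and cross sections are placeholders.

```python
# Parse a reaction list from XML, then decide by acceptance/rejection
# whether a colliding pair undergoes one of the listed reactions.
import random
import xml.etree.ElementTree as ET

REACTIONS_XML = """
<reactions>
  <reaction in="p,n" out="d,gamma" sigma="0.05"/>
  <reaction in="pi+,n" out="p,gamma" sigma="0.02"/>
</reactions>
"""

reactions = [
    {"in": r.get("in").split(","), "out": r.get("out").split(","),
     "sigma": float(r.get("sigma"))}
    for r in ET.fromstring(REACTIONS_XML)
]

def try_reaction(pair, flux_factor, rng=random.random):
    """Return the products if a reaction of this colliding pair fires."""
    for r in reactions:
        if sorted(pair) == sorted(r["in"]):
            # Toy normalization: probability ~ cross section * flux.
            if rng() < r["sigma"] * flux_factor:
                return r["out"]
    return None

print(try_reaction(["n", "p"], flux_factor=5.0))
```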
Berlin, Konstantin; Longhini, Andrew; Dayie, T Kwaku; Fushman, David
2013-12-01
To facilitate rigorous analysis of molecular motions in proteins, DNA, and RNA, we present a new version of ROTDIF, a program for determining the overall rotational diffusion tensor from single- or multiple-field nuclear magnetic resonance relaxation data. We introduce four major features that expand the program's versatility and usability. The first feature is the ability to analyze, separately or together, (13)C and/or (15)N relaxation data collected at a single or multiple fields. A significant improvement in the accuracy compared to direct analysis of R2/R1 ratios, especially critical for analysis of (13)C relaxation data, is achieved by subtracting high-frequency contributions to relaxation rates. The second new feature is an improved method for computing the rotational diffusion tensor in the presence of biased errors, such as large conformational exchange contributions, that significantly enhances the accuracy of the computation. The third new feature is the integration of the domain alignment and docking module for relaxation-based structure determination of multi-domain systems. Finally, to improve accessibility to all the program features, we introduced a graphical user interface that simplifies and speeds up the analysis of the data. Written in Java, the new ROTDIF can run on virtually any computer platform. In addition, the new ROTDIF achieves an order of magnitude speedup over the previous version by implementing a more efficient deterministic minimization algorithm. We not only demonstrate the improvement in accuracy and speed of the new algorithm for synthetic and experimental (13)C and (15)N relaxation data for several proteins and nucleic acids, but also show that careful analysis required especially for characterizing RNA dynamics allowed us to uncover subtle conformational changes in RNA as a function of temperature that were opaque to previous analysis.
Finding higher symmetries of differential equations using the MAPLE package DESOLVII
NASA Astrophysics Data System (ADS)
Vu, K. T.; Jefferson, G. F.; Carminati, J.
2012-04-01
We present and describe, with illustrative examples, the MAPLE computer algebra package DESOLVII, which is a major upgrade of DESOLV. DESOLVII now includes new routines allowing the determination of higher symmetries (contact and Lie-Bäcklund) for systems of both ordinary and partial differential equations. Catalogue identifier: ADYZ_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADYZ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 10 858 No. of bytes in distributed program, including test data, etc.: 112 515 Distribution format: tar.gz Programming language: MAPLE internal language Computer: PCs and workstations Operating system: Linux, Windows XP and Windows 7 RAM: Depends on the type of problem and the complexity of the system (small ≈ MB, large ≈ GB) Classification: 4.3, 5 Catalogue identifier of previous version: ADYZ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 176 (2007) 682 Does the new version supersede the previous version?: Yes Nature of problem: There are a number of approaches one may use to find solutions to systems of differential equations. These include numerical, perturbative, and algebraic methods. Unfortunately, approximate or numerical solution methods may be inappropriate in many cases or even impossible due to the nature of the system and hence exact methods are important. In their own right, exact solutions are valuable not only as a yardstick for approximate/numerical solutions but also as a means of elucidating the physical meaning of fundamental quantities in systems. One particular method of finding special exact solutions is afforded by the work of Sophus Lie and the use of continuous transformation groups. The power of Lie's group theoretic method lies in its ability to unify a number of ad hoc integration methods through the use of symmetries, that is, continuous groups of transformations which leave the differential system “unchanged”. These symmetry groups may then be used to find special solutions. Solutions found in this manner are called similarity or invariant solutions. The method of finding symmetry transformations initially requires the generation of a large overdetermined system of linear, homogeneous, coupled PDEs. The integration of this system is usually reasonably straightforward requiring the (often elementary) integration of equations by splitting the system according to dependency on different orders and degrees of the dependent variable/s. Unfortunately, in the case of contact and Lie-Bäcklund symmetries, the integration of the determining system becomes increasingly more difficult as the order of the symmetry is increased. This is because the symmetry generating functions become dependent on higher orders of the derivatives of the dependent variables and this diminishes the overall resulting “separable” differential conditions derived from the main determining system. Furthermore, typical determining systems consist of tens to hundreds of equations and this, combined with standard mechanical solution methods, makes the process well suited to automation using computer algebra systems. The new MAPLE package DESOLVII, which is a major upgrade of DESOLV, now includes routines allowing the determination of higher symmetries (contact and Lie-Bäcklund) for systems of both ordinary and partial differential equations. 
In addition, significant improvements have been implemented to the algorithm for PDE solution. Finally, we have made some improvements in the overall automated process so as to improve user friendliness by reducing user intervention where possible. Solution method: See “Nature of problem” above. Reasons for new version: New and improved functionality. New functionality - can now compute generalised symmetries. Much improved efficiency (speed and memory use) of existing routines. Restrictions: Sufficient memory may be required for complex systems. Running time: Depends on the type of problem and the complexity of the system (small ≈ seconds, large ≈ hours).
Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC (version 4.0) technical manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1988-01-01
The information contained in the NASARC (Version 4.0) Technical Manual and NASARC (Version 4.0) User's Manual relates to the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbits. Array dimensions within the software were structured to fit within the currently available 12 megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.
Efficient self-consistency for magnetic tight binding
NASA Astrophysics Data System (ADS)
Soin, Preetma; Horsfield, A. P.; Nguyen-Manh, D.
2011-06-01
Tight binding can be extended to magnetic systems by including an exchange interaction on an atomic site that favours net spin polarisation. We have used a published model, extended to include long-ranged Coulomb interactions, to study defects in iron. We found that achieving self-consistency using conventional techniques was either unstable or very slow. By formulating the problem of achieving charge and spin self-consistency as a search for stationary points of a Harris-Foulkes functional, extended to include spin, we have derived a much more efficient scheme based on a Newton-Raphson procedure. We demonstrate the capabilities of our method by looking at vacancies and self-interstitials in iron. Self-consistency can indeed be achieved in a more efficient and stable manner, but care needs to be taken to manage this. The algorithm is implemented in the code PLATO. Program summary. Program title: PLATO Catalogue identifier: AEFC_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFC_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 228 747 No. of bytes in distributed program, including test data, etc.: 1 880 369 Distribution format: tar.gz Programming language: C and PERL Computer: Apple Macintosh, PC, Unix machines Operating system: Unix, Linux, Mac OS X, Windows XP Has the code been vectorised or parallelised?: Yes. Up to 256 processors tested RAM: Up to 2 Gbytes per processor Classification: 7.3 External routines: LAPACK, BLAS and optionally ScaLAPACK, BLACS, PBLAS, FFTW Catalogue identifier of previous version: AEFC_v1_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2616 Does the new version supersede the previous version?: Yes Nature of problem: Achieving charge and spin self-consistency in magnetic tight binding can be very difficult. Our existing schemes failed altogether, or were very slow. Solution method: A new scheme for achieving self-consistency in orthogonal tight binding has been introduced that explicitly evaluates the first and second derivatives of the energy with respect to input charge and spin, and then uses these to search for stationary values of the energy. Reasons for new version: Bug fixes and new functionality. Summary of revisions: New charge and spin mixing scheme for orthogonal tight binding. Numerous small bug fixes. Restrictions: The new mixing scheme scales poorly with system size. In particular, the memory usage scales as the fourth power of the number of atoms. It is restricted to systems of about 200 atoms or fewer. Running time: Test cases will run in a few minutes; large calculations may run for several days.
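The stationary-point search the abstract describes reduces, in outline, to Newton-Raphson iteration on the gradient of an energy functional of the input charges and spins. The Python sketch below shows that outline on a toy quadratic-plus-quartic functional; the kernel A, the functional, and all numbers are invented stand-ins, not the Harris-Foulkes functional implemented in PLATO.

```python
# Self-consistency as a stationary-point search: iterate
# q <- q - H(q)^-1 grad(q) until the gradient of the energy vanishes.
import numpy as np

A = np.array([[2.0, 0.3], [0.3, 1.5]])    # stand-in Coulomb/exchange kernel
b = np.array([1.0, -0.5])

def grad(q):                              # dE/dq of the toy functional
    return A @ q - b + 0.1 * q**3

def hess(q):                              # d2E/dq2
    return A + np.diag(0.3 * q**2)

q = np.zeros(2)                           # initial charge/spin guess
for it in range(50):
    step = np.linalg.solve(hess(q), -grad(q))   # Newton-Raphson step
    q += step
    if np.linalg.norm(step) < 1e-10:
        break
print(it, q, grad(q))                     # gradient ~ 0 at convergence
```

The quoted memory restriction is visible even in this outline: the Hessian couples every charge/spin degree of freedom to every other, so its storage grows much faster than the system size.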
STAR adaptation of QR algorithm. [program for solving over-determined systems of linear equations
NASA Technical Reports Server (NTRS)
Shah, S. N.
1981-01-01
The QR algorithm used on a serial computer and executed on the Control Data Corporation 6000 computer was adapted to execute efficiently on the Control Data STAR-100 computer. How the scalar program was adapted for the STAR-100, and why these adaptations yielded an efficient STAR program, is described. Program listings of the old scalar version and the vectorized SL/1 version are presented in the appendices. Execution times for the two versions, applied to the same system of linear equations, are compared.
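The computation being vectorized, QR solution of an over-determined linear system, reads as follows in modern terms. This NumPy/SciPy sketch is ours, standing in for the Fortran and SL/1 listings in the report's appendices.

```python
# Least-squares solution of an over-determined system A x ~= b:
# factor A = Q R, then solve the triangular system R x = Q^T b.
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(1)
A = rng.random((8, 3))                    # 8 equations, 3 unknowns
b = rng.random(8)

Q, R = np.linalg.qr(A)                    # reduced QR: Q is 8x3, R is 3x3
x = solve_triangular(R, Q.T @ b)          # upper-triangular back-substitution
print(x, np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))
```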
Modified computation of the nozzle damping coefficient in solid rocket motors
NASA Astrophysics Data System (ADS)
Liu, Peijin; Wang, Muxin; Yang, Wenjing; Gupta, Vikrant; Guan, Yu; Li, Larry K. B.
2018-02-01
In solid rocket motors, the bulk advection of acoustic energy out of the nozzle constitutes a significant source of damping and can thus influence the thermoacoustic stability of the system. In this paper, we propose and test a modified version of a historically accepted method of calculating the nozzle damping coefficient. Building on previous work, we separate the nozzle from the combustor, but compute the acoustic admittance at the nozzle entry using the linearized Euler equations (LEEs) rather than with short nozzle theory. We compute the combustor's acoustic modes also with the LEEs, taking the nozzle admittance as the boundary condition at the combustor exit while accounting for the mean flow field in the combustor using an analytical solution to Taylor-Culick flow. We then compute the nozzle damping coefficient via a balance of the unsteady energy flux through the nozzle. Compared with established methods, the proposed method offers competitive accuracy at reduced computational costs, helping to improve predictions of thermoacoustic instability in solid rocket motors.
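The energy-flux balance that defines the damping coefficient can be illustrated in one dimension. The sketch below is a heavily simplified stand-in, assuming a closed-closed chamber mode shape and a toy real-valued nozzle admittance in place of an LEE solution, and no mean flow; only the generic definition alpha = (outgoing flux)/(2 x stored acoustic energy) is taken from the description above.

```python
# Toy 1D energy-balance estimate of a nozzle damping coefficient.
import numpy as np

L, rho, c = 1.0, 1.2, 340.0                  # chamber length, density, sound speed
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]
p = np.cos(np.pi * x / L)                    # longitudinal pressure mode |p'|
u = np.sin(np.pi * x / L) / (rho * c)        # corresponding velocity mode

# Time-averaged stored acoustic energy per unit area (<cos^2> gives the 1/2).
E = 0.5 * np.sum(p**2 / (2 * rho * c**2) + rho * u**2 / 2) * dx

# Outgoing flux at the nozzle entry x = L, where u' = Y p':
# time-averaged <p'u'> = 0.5 |p'|^2 Re(Y).
ReY = 0.01 / (rho * c)                       # toy admittance, not an LEE result
flux = 0.5 * p[-1]**2 * ReY

alpha = flux / (2 * E)                       # damping coefficient [1/s]
print(alpha)
```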
Software organization for a prolog-based prototyping system for machine vision
NASA Astrophysics Data System (ADS)
Jones, Andrew C.; Hack, Ralf; Batchelor, Bruce G.
1996-11-01
We describe PIP (Prolog image processing), a prototype system for interactive image processing using Prolog, implemented on an Apple Macintosh computer. PIP is the latest in a series of systems, developed under the collective title Prolog+, whose implementation the third author has been involved in. PIP differs from our previous systems in two particularly important respects. The first is that whereas we previously required dedicated image-processing hardware, the present system implements the image-processing routines in software. The second difference is that our present system is hierarchical in structure: the top level of the hierarchy emulates Prolog+, but there is a flexible infrastructure which supports more sophisticated image manipulation that we will be able to exploit in due course. We discuss the impact of the Apple Macintosh operating system upon the implementation of the image-processing functions, and the interface between these functions and the Prolog system. We also explain how the existing set of Prolog+ commands has been implemented. PIP is now nearing maturity, and we will make a version of it generally available in the near future. Although the present version of PIP constitutes a complete image-processing tool, there are a number of ways in which we intend to enhance future versions, with a view to added flexibility and efficiency; we discuss these ideas briefly near the end of the paper.
MPI implementation of PHOENICS: A general purpose computational fluid dynamics code
NASA Astrophysics Data System (ADS)
Simunovic, S.; Zacharia, T.; Baltas, N.; Spalding, D. B.
1995-03-01
PHOENICS is a suite of computational analysis programs that are used for simulation of fluid flow, heat transfer, and dynamical reaction processes. The parallel version of the solver EARTH for the Computational Fluid Dynamics (CFD) program PHOENICS has been implemented using Message Passing Interface (MPI) standard. Implementation of MPI version of PHOENICS makes this computational tool portable to a wide range of parallel machines and enables the use of high performance computing for large scale computational simulations. MPI libraries are available on several parallel architectures making the program usable across different architectures as well as on heterogeneous computer networks. The Intel Paragon NX and MPI versions of the program have been developed and tested on massively parallel supercomputers Intel Paragon XP/S 5, XP/S 35, and Kendall Square Research, and on the multiprocessor SGI Onyx computer at Oak Ridge National Laboratory. The preliminary testing results of the developed program have shown scalable performance for reasonably sized computational domains.
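The portability argument made above rests on the MPI message-passing pattern itself, which is the same on every conforming platform. The sketch below shows that pattern with mpi4py: a one-dimensional slab decomposition with halo exchange, which is our generic stand-in, not PHOENICS/EARTH source code (the file name and stencil are invented).

```python
# Run with e.g.:  mpiexec -n 4 python halo.py
# Each rank owns a slab of the domain and swaps one-cell halos with its
# neighbours every iteration, then relaxes its interior cells.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 16
u = np.full(n_local + 2, float(rank))        # slab plus two ghost cells
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for _ in range(10):
    # Exchange halos: send my edge cells, receive neighbours' edges.
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # Jacobi-style relaxation on interior cells (toy "solver" step).
    u[1:-1] = 0.5 * (u[:-2] + u[2:])

print(rank, u[1], u[-2])
```

Because sends to MPI.PROC_NULL are no-ops, the same loop handles boundary ranks without special cases, one reason the MPI version ports cleanly across the machines listed in the abstract.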
Ackermann, Hans D.; Pankratz, Leroy W.; Dansereau, Danny A.
1983-01-01
The computer programs published in Open-File Report 82-1065, A comprehensive system for interpreting seismic-refraction arrival-time data using interactive computer methods (Ackermann, Pankratz, and Dansereau, 1982), have been modified to run on a mini-computer. The new version uses approximately 1/10 of the memory of the initial version, is more efficient and gives the same results.
COOL: A code for Dynamic Monte Carlo Simulation of molecular dynamics
NASA Astrophysics Data System (ADS)
Barletta, Paolo
2012-02-01
Cool is a program to simulate evaporative and sympathetic cooling for a mixture of two gases co-trapped in a harmonic potential. The collisions involved are assumed to be exclusively elastic, and losses are due to evaporation from the trap. Each particle is followed individually in its trajectory, so properties such as spatial densities or energy distributions can be readily evaluated. The code can be used sequentially, by employing one output as input for another run. The code can be easily generalised to describe more complicated processes, such as the inclusion of inelastic collisions, or the presence of more than two species in the trap. New version program summary. Program title: COOL Catalogue identifier: AEHJ_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHJ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1 097 733 No. of bytes in distributed program, including test data, etc.: 18 425 722 Distribution format: tar.gz Programming language: C++ Computer: Desktop Operating system: Linux RAM: 500 Mbytes Classification: 16.7, 23 Catalogue identifier of previous version: AEHJ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 388 Does the new version supersede the previous version?: Yes Nature of problem: Simulation of the sympathetic cooling process occurring for two molecular gases co-trapped in a deep optical trap. Solution method: The Direct Simulation Monte Carlo method exploits the decoupling, over a short time period, of the inter-particle interaction from the trapping potential. The particle dynamics is thus exclusively driven by the external optical field. The rare inter-particle collisions are treated with an acceptance/rejection mechanism, that is, by comparing a random number to the collision probability defined in terms of the inter-particle cross section and centre-of-mass energy. All particles in the trap are individually simulated, so that at each time step a number of useful quantities, such as the spatial densities or the energy distributions, can be readily evaluated. Reasons for new version: A number of issues made the old version very difficult to port to different architectures, and impossible to compile on Windows. Furthermore, the test-run results could be replicated only poorly, as a consequence of the simulations being very sensitive to the machine background noise. In practice, because the particles are simulated for billions and billions of steps, a small difference in the initial conditions, due to the finite precision of double-precision reals, can have macroscopic effects on the output. This is not a problem in its own right, but a feature of such simulations. However, for the sake of completeness, we have introduced a quadruple-precision version of the code which yields the same results independently of the software used to compile it or the hardware architecture on which the code is run. Summary of revisions: A number of bugs in the dynamic memory allocation have been detected and removed, mostly in the cool.cpp file. All files have been renamed with a .cpp ending, rather than .c++, to make them compatible with Windows. The random number generator routine, which is the computational core of the algorithm, has been re-written in C++, so there is no longer any need for cross Fortran-C++ compilation.
A quadruple precision version of the code is provided alongside the original double precision one. The makefile allows the user to choose which one to compile by setting the switch PRECISION to either double or quad. The source code and header files have been organised into directories to make the code file system look neater. Restrictions: The in-trap motion of the particles is treated classically. Running time: The running time is relatively short, 1-2 hours. However it is convenient to replicate each simulation several times with different initialisations of the random sequence.
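The acceptance/rejection collision step at the heart of the Direct Simulation Monte Carlo method described above is compact enough to sketch. The Python fragment below is a toy illustration with invented constants (cross section, density, time step) and equal-mass particles; it is not COOL's C++ implementation.

```python
# DSMC-style collision test: a candidate pair collides when a uniform
# deviate falls below a probability built from cross section and
# relative speed; the elastic collision redraws the relative-velocity
# direction isotropically, conserving energy and momentum.
import numpy as np

rng = np.random.default_rng(42)
n = 1000
v = rng.normal(0.0, 30.0, size=(n, 3))       # particle velocities [m/s]

sigma = 1e-19            # elastic cross section [m^2] (toy value)
density = 1e18           # number density [1/m^3]
dt = 1e-4                # decoupling time step [s]

i, j = rng.choice(n, size=2, replace=False)  # candidate pair
g = np.linalg.norm(v[i] - v[j])              # relative speed
p_coll = sigma * g * density * dt            # collision probability

if rng.random() < p_coll:
    vcm = 0.5 * (v[i] + v[j])                # centre-of-mass velocity
    direction = rng.normal(size=3)
    direction /= np.linalg.norm(direction)   # isotropic unit vector
    v[i] = vcm + 0.5 * g * direction
    v[j] = vcm - 0.5 * g * direction
print(p_coll)
```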
Migration Stories: Upgrading a PDS Archive to PDS4
NASA Astrophysics Data System (ADS)
Kazden, D. P.; Walker, R. J.; Mafi, J. N.; King, T. A.; Joy, S. P.; Moon, I. S.
2015-12-01
Increasing bandwidth, storage capacity and computational capabilities have greatly increased our ability to access and use data. A significant challenge, however, is to make data archived under older standards useful in the new data environments. NASA's Planetary Data System (PDS) recently released version 4 of its information model (PDS4). PDS4 is an improvement and has advantages over previous versions. PDS4 adopts the XML standard for metadata and expresses structural requirements with XML Schema and content constraints with Schematron. This allows for thorough validation using off-the-shelf tools, a substantial improvement over previous PDS versions. PDS4 was designed to improve discoverability of products (resources) in a PDS archive. These additions allow for more uniform metadata harvesting from the collection level down to the product level. New tools and services are being deployed that depend on the data adhering to the PDS4 model. However, the PDS has been an operational archive since 1989 and has large holdings that are compliant with previous versions of the PDS information model. The challenge is to make the older data accessible and usable with the new PDS4-based tools. To provide uniform utility and access to the entire archive, the older data must be migrated to the PDS4 model. At the Planetary Plasma Interactions (PPI) Node of the PDS, we have been actively planning and preparing to migrate our legacy archive to the new PDS4 standards for several years. With the release of the PDS4 standards we have begun the migration of our archive. In this presentation we will discuss the preparation of the data for the migration and how we are approaching this task. The presentation will consist of a series of stories describing our experiences and the best practices we have learned.
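The "validation with off-the-shelf tools" that XML Schema enables looks like the following in practice. The tiny schema and label below are invented stand-ins, not real PDS4 artifacts (the actual schemas are published at https://pds.nasa.gov/pds4/); only the workflow, parse a label and validate it against a schema, is the point.

```python
# Structural validation of an XML label against an XML Schema with lxml.
from lxml import etree

schema_doc = etree.XML(b"""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="Product_Observational">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="logical_identifier" type="xs:string"/>
        <xs:element name="version_id" type="xs:decimal"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
""")
schema = etree.XMLSchema(schema_doc)

label = etree.XML(b"""
<Product_Observational>
  <logical_identifier>urn:nasa:pds:example:data:product-1</logical_identifier>
  <version_id>1.0</version_id>
</Product_Observational>
""")
print(schema.validate(label))      # True if the label meets the schema
```

Content rules beyond structure (value ranges, cross-field constraints) are expressed in Schematron and checked the same way with a Schematron processor, which is what makes fully mechanical archive validation possible.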
Numerical arc segmentation algorithm for a radio conference-NASARC (version 2.0) technical manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1987-01-01
The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of NASARC software development through October 16, 1987. The Technical Manual describes the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operating instructions. Significant revisions have been incorporated in the Version 2.0 software. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit within the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time effecting an overall reduction in computer run time.
micrOMEGAs 2.0: A program to calculate the relic density of dark matter in a generic model
NASA Astrophysics Data System (ADS)
Bélanger, G.; Boudjema, F.; Pukhov, A.; Semenov, A.
2007-03-01
micrOMEGAs 2.0 is a code which calculates the relic density of a stable massive particle in an arbitrary model. The underlying assumption is that there is a conservation law like R-parity in supersymmetry which guarantees the stability of the lightest odd particle. The new physics model must be incorporated in the notation of CalcHEP, a package for the automatic generation of squared matrix elements. Once this is done, all annihilation and coannihilation channels are included automatically in any model. Cross-sections at v=0, relevant for indirect detection of dark matter, are also computed automatically. The package includes three sample models: the minimal supersymmetric standard model (MSSM), the MSSM with complex phases and the NMSSM. Extension to other models, including non-supersymmetric models, is described. Program summaryTitle of program:micrOMEGAs2.0 Catalogue identifier:ADQR_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADQR_v2_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computers for which the program is designed and others on which it has been tested:PC, Alpha, Mac, Sun Operating systems under which the program has been tested:UNIX (Linux, OSF1, SunOS, Darwin, Cygwin) Programming language used:C and Fortran Memory required to execute with typical data:17 MB depending on the number of processes required No. of processors used:1 Has the code been vectorized or parallelized:no No. of lines in distributed program, including test data, etc.:91 778 No. of bytes in distributed program, including test data, etc.:1 306 726 Distribution format:tar.gz External routines/libraries used:no Catalogue identifier of previous version:ADQR_v1_3 Journal reference of previous version:Comput. Phys. Comm. 174 (2006) 577 Does the new version supersede the previous version:yes Nature of physical problem:Calculation of the relic density of the lightest stable particle in a generic new model of particle physics. Method of solution: In numerically solving the evolution equation for the density of dark matter, relativistic formulae for the thermal average are used. All tree-level processes for annihilation and coannihilation of new particles in the model are included. The cross-sections for all processes are calculated exactly with CalcHEP after definition of a model file. Higher-order QCD corrections to Higgs couplings to quark pairs are included. Reasons for the new version:There are many models of new physics that propose a candidate for dark matter besides the much studied minimal supersymmetric standard model. This new version not only incorporates extensions of the MSSM, such as the MSSM with complex phases, or the NMSSM which contains an extra singlet superfield but also gives the possibility for the user to incorporate easily a new model. For this the user only needs to redefine appropriately a new model file. Summary of revisions:Possibility to include in the package any particle physics model with a discrete symmetry that guarantees the stability of the cold dark matter candidate (LOP) and to compute the relic density of CDM. Compute automatically the cross-sections for annihilation of the LOP at small velocities into SM final states and provide the energy spectra for γ,e,p¯,ν final states. For the MSSM with input parameters defined at the GUT scale, the interface with any of the spectrum calculator codes reads an input file in the SUSY Les Houches Accord format (SLHA). 
Implementation of the MSSM with complex parameters (CPV-MSSM) with an interface to CPsuperH to calculate the spectrum. Routine to calculate the electric dipole moment of the electron in the CPV-MSSM. In the NMSSM, new interface compatible with NMHDECAY2.1. Typical running time:0.2 sec Unusual features of the program:Depending on the parameters of the model, the program generates additional new code, compiles it and loads it dynamically.
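The evolution equation the summary refers to is the Boltzmann equation for the comoving yield Y = n/s. The Python sketch below integrates its textbook form, dY/dx = -(lambda/x^2)(Y^2 - Yeq^2) with x = m/T, assuming a constant thermally averaged cross section and a Maxwell-Boltzmann equilibrium yield; all numbers are illustrative and the sketch omits everything (coannihilations, exact thermal averages, model files) that micrOMEGAs actually handles.

```python
# Freeze-out of a stable relic: integrate the yield equation in x = m/T.
import numpy as np
from scipy.integrate import solve_ivp

m, g_star, Mpl = 100.0, 90.0, 1.22e19      # WIMP mass [GeV], dof, Planck mass
sigv = 3e-9                                 # <sigma v> [GeV^-2] (~1 pb)

s_m = (2 * np.pi**2 / 45) * g_star * m**3   # entropy density at T = m
H_m = 1.66 * np.sqrt(g_star) * m**2 / Mpl   # Hubble rate at T = m
lam = s_m * sigv / H_m                      # interaction strength

def Yeq(x):
    # Non-relativistic (Maxwell-Boltzmann) equilibrium yield, g = 2 dof.
    return 0.145 * (2.0 / g_star) * x**1.5 * np.exp(-x)

def rhs(x, Y):
    return [-(lam / x**2) * (Y[0]**2 - Yeq(x)**2)]

sol = solve_ivp(rhs, (1.0, 400.0), [Yeq(1.0)], method="Radau",
                rtol=1e-8, atol=1e-30)      # stiff: implicit solver
Y_inf = sol.y[0, -1]
print(Y_inf, 2.755e8 * m * Y_inf)           # asymptotic yield and Omega h^2
```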
NCRETURN Computer Program for Evaluating Investments Revised to Provide Additional Information
Allen L. Lundgren; Dennis L. Schweitzer
1971-01-01
Reports a modified version of NCRETURN, a computer program for evaluating forestry investments. The revised version, RETURN, provides additional information about each investment, including future net worths and benefit-cost ratios, with no added input.
Improving fast generation of halo catalogues with higher order Lagrangian perturbation theory
NASA Astrophysics Data System (ADS)
Munari, Emiliano; Monaco, Pierluigi; Sefusatti, Emiliano; Castorina, Emanuele; Mohammad, Faizan G.; Anselmi, Stefano; Borgani, Stefano
2017-03-01
We present the latest version of PINOCCHIO, a code that generates catalogues of dark matter haloes in an approximate but fast way with respect to an N-body simulation. This code version implements a new on-the-fly production of halo catalogues on the past light cone with continuous time sampling, and the computation of particle and halo displacements is extended up to third-order Lagrangian perturbation theory (LPT), in contrast with previous versions that used the Zel'dovich approximation. We run PINOCCHIO on the same initial configuration as a reference N-body simulation, so that the comparison extends to the object-by-object level. We consider haloes at redshifts 0 and 1, using different LPT orders either for halo construction or to compute halo final positions. We compare the clustering properties of PINOCCHIO haloes with those from the simulation by computing the power spectrum and two-point correlation function in real and redshift space (monopole and quadrupole), the bispectrum and the phase difference of the halo distributions. We find that 2LPT and 3LPT give a noticeable improvement. 3LPT provides the best agreement with the N-body run when it is used to displace haloes, while 2LPT gives better results for constructing haloes. At the highest orders, linear bias is typically recovered at the few per cent level. In Fourier space and using 3LPT for halo displacements, the halo power spectrum is recovered to within 10 per cent up to kmax ∼ 0.5 h Mpc-1. The results presented in this paper have interesting implications for the generation of large ensembles of mock surveys for the scientific exploitation of data from big surveys.
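The first-order (Zel'dovich) displacement that earlier PINOCCHIO versions used is simple to write down: in Fourier space the displacement field is psi(k) = i k delta(k)/k^2, and particles move as x = q + D(z) psi(q). The sketch below evaluates this with FFTs on a toy density field; the grid, density and growth factor are invented, and 2LPT/3LPT add higher-order corrections that are not shown.

```python
# Zel'dovich (1LPT) displacements from a density contrast on a grid.
import numpy as np

n, box = 32, 100.0                           # grid cells, box size [Mpc/h]
rng = np.random.default_rng(0)
delta = rng.normal(size=(n, n, n))           # toy linear density contrast

k1 = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                            # avoid division by zero

dk = np.fft.fftn(delta)
dk[0, 0, 0] = 0.0                            # zero mean: no bulk displacement
psi = [np.real(np.fft.ifftn(1j * k_i / k2 * dk)) for k_i in (kx, ky, kz)]

D = 0.5                                      # growth factor at some redshift
q = np.indices((n, n, n)) * (box / n)        # Lagrangian particle positions
x = [q[i] + D * psi[i] for i in range(3)]    # displaced Eulerian positions
print(x[0].shape, psi[0].std())
```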
VACTIV: A graphical dialog based program for an automatic processing of line and band spectra
NASA Astrophysics Data System (ADS)
Zlokazov, V. B.
2013-05-01
The program VACTIV (Visual ACTIV) has been developed for the automatic analysis of spectrum-like distributions, in particular gamma-ray or alpha spectra, and is a standard graphical-dialog-based Windows XX application, driven by menu, mouse and keyboard. On the one hand, it was a conversion of an existing Fortran program, ACTIV [1], to the DELPHI language; on the other hand, it is a transformation of the sequential syntax of Fortran programming to a new object-oriented style based on the organization of event interactions. New features implemented in the algorithms of both versions are the following: as peak model, both an analytical function and a graphical curve can be used; the peak search algorithm recognizes not only Gaussian peaks but also peaks of irregular form, both narrow (2-4 channels) and broad (50-100 channels); and the regularization technique in the fitting guarantees a stable solution in the most complicated cases of strongly overlapping or weak peaks. The graphical dialog interface of VACTIV is much more convenient than the batch mode of ACTIV. [1] V.B. Zlokazov, Computer Physics Communications, 28 (1982) 27-37. NEW VERSION PROGRAM SUMMARY Program title: VACTIV Catalogue identifier: ABAC_v2_0 Licensing provisions: no Programming language: DELPHI 5-7 Pascal. Computer: IBM PC series. Operating system: Windows XX. RAM: 1 MB Keywords: Nuclear physics, spectrum decomposition, least squares analysis, graphical dialog, object-oriented programming. Classification: 17.6. Catalogue identifier of previous version: ABAC_v1_0 Journal reference of previous version: Comput. Phys. Commun. 28 (1982) 27 Does the new version supersede the previous version?: Yes. Nature of problem: VACTIV is intended for the precise analysis of arbitrary spectrum-like distributions, e.g. gamma-ray and X-ray spectra, and allows the user to carry out the full cycle of automatic processing of such spectra, i.e. calibration, automatic peak search and estimation of the parameters of interest. VACTIV can run on any standard modern laptop. Reasons for the new version: At the time of its creation (1999), VACTIV was seemingly the first attempt to apply the newest programming languages and styles to systems of spectrum analysis. Its goal was both to provide a convenient and efficient technique for data processing, and to elaborate the formalism of spectrum analysis in terms of the classes, properties, methods and events of an object-oriented programming language. Summary of revisions: Compared with ACTIV, VACTIV preserves all the mathematical algorithms, but provides the user with all the benefits of a graphical-dialog interface. It allows the user to intervene quickly in the work of the program, in particular to control the fitting process on-line: depending on the intermediate results, and using the visual form of data representation, the user can change the fitting conditions and so select the optimum strategy and achieve the optimum performance. To find the best conditions for the fitting, one can compress the spectrum, delete blunders from it, smooth it using a high-frequency spline filter and build the background using a low-frequency spline filter; not only automatic methods for blunder deletion, peak search, peak-model forming and calibration can be used, but also manual mouse clicking on the spectrum graph.
Restrictions: To enhance the reliability and portability of the program the majority of the most important arrays have a static allocation; all the arrays are allocated with a surplus, and the total pool of the program is restricted only by the size of the computer virtual memory. A spectrum has the static size of 32 K real words. The maximum size of the least-square matrix is 314 (the maximum number of fitted parameters per one analyzed spectrum interval, not for the whole spectrum), from which it follows that the maximum number of peaks in one spectrum interval is 154. The maximum total number of peaks in the spectrum is not restricted. Running time: The calculation time is negligibly small compared with the time for the dialog; using ini-files the program can be partly used in a semi-dialog mode.
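The core fitting task such programs automate, least-squares estimation of peak parameters over a background, can be shown in a few lines. The Python sketch below fits a single analytical Gaussian plus a linear background to synthetic counts; it is a generic stand-in, and VACTIV's own fitter adds the regularization and irregular peak shapes described above.

```python
# Least-squares fit of one Gaussian peak over a linear background.
import numpy as np
from scipy.optimize import curve_fit

def model(ch, area, centre, width, b0, b1):
    peak = area / (width * np.sqrt(2 * np.pi)) \
           * np.exp(-0.5 * ((ch - centre) / width)**2)
    return peak + b0 + b1 * ch              # peak plus linear background

ch = np.arange(200, dtype=float)            # channel numbers
true = (500.0, 100.0, 3.0, 5.0, 0.02)
rng = np.random.default_rng(7)
counts = rng.poisson(model(ch, *true)).astype(float)   # synthetic spectrum

p0 = (400.0, 98.0, 2.0, 4.0, 0.0)           # rough initial guess
popt, pcov = curve_fit(model, ch, counts, p0=p0,
                       sigma=np.sqrt(counts + 1.0))    # ~Poisson weights
print(popt)                                 # area, centre, width, background
```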
tweezercalib 2.1: Faster version of MatLab package for precise calibration of optical tweezers
NASA Astrophysics Data System (ADS)
Hansen, Poul Martin; Tolic-Nørrelykke, Iva Marija; Flyvbjerg, Henrik; Berg-Sørensen, Kirstine
2006-10-01
New version program summary. Title of program: tweezercalib Catalogue identifier: ADTV_v2_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTV_v2_1 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: no No. of lines in distributed program, including test data, etc.: 134 188 No. of bytes in distributed program, including test data, etc.: 1 050 368 Distribution format: tar.gz Programming language: MatLab (Mathworks Inc.), standard license Computer: General computer running MatLab (Mathworks Inc.) Operating system: Windows 2000, Windows XP, Linux RAM: Of order four times the size of the data file Classification: 3, 4.14, 18, 23 Catalogue identifier of previous version: ADTV_v2_0 Journal reference of previous version: Comput. Phys. Comm. 174 (2006) 518 Does the new version supersede the previous version?: yes Nature of problem: Calibrate optical tweezers with precision by fitting theory to the experimental power spectrum of the position of a bead doing Brownian motion in an incompressible fluid, possibly near the microscope cover slip, while trapped in optical tweezers. Thereby determine the spring constant of the optical trap and the arbitrary-units-to-nanometers conversion factor for the detection system. The theoretical underpinnings of the procedure may be found in Ref. [3]. Solution method: Elimination of cross-talk between the quadrant photo-diode's output channels for positions (optional). Check that the distribution of recorded positions agrees with the Boltzmann distribution of a bead in a harmonic trap. Data compression and noise reduction by the blocking method applied to the power spectrum. Full accounting for hydrodynamic effects: frequency-dependent drag force and interaction with the nearby cover slip (optional). Full accounting for electronic filters (optional) and for "virtual filtering" caused by the detection system (optional). Full accounting for aliasing caused by the finite sampling rate (optional). Standard non-linear least-squares fitting with custom-written routines based on Refs. [1,2]. Statistical support for the fit is given, with several plots facilitating inspection of the consistency and quality of data and fit. Reasons for the new version: Recent progress in the field has provided a better approximation of the formula for the theoretical power spectrum, with corrections due to the frequency dependence of the motion and the distance to a nearby surface. Summary of revisions: The expression for the theoretical power spectrum accounting for corrections to Stokes law, P(f), has been updated to agree with a better approximation of the theoretical spectrum, as discussed in Ref. [4]. The units of the kinematic viscosity applied in the program are now stated in the input window. Greek letters and exponents are inserted in the input window. The graphical output has been improved: the figures now bear meaningful titles, and the four figures that test the quality of the fit are now combined in one figure with four panels. Restrictions: Data should be positions of a bead doing Brownian motion while held by optical tweezers. For high precision in the final results, data should be time series measured over a long time, with a sufficiently high experimental sampling rate: the sampling rate should be well above the characteristic frequency of the trap, the so-called corner frequency. Thus, the sampling frequency should typically be larger than 10 kHz. The Fast Fourier Transform used works optimally when the time series contain 2^n data points, and long measurement time is obtained with n > 12-15.
Finally, the optics should be set to ensure a harmonic trapping potential in the range of positions visited by the bead. The fitting procedure checks for a harmonic potential. Running time: seconds. References: [1] J. Nocedal, Y.-X. Yuan, Combining trust region and line search techniques, Technical Report OTC 98/04, Optimization Technology Center, 1998. [2] W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, Numerical Recipes. The Art of Scientific Computing, Cambridge University Press, Cambridge, 1986. [3] K. Berg-Sørensen, H. Flyvbjerg, Power spectrum analysis for optical tweezers, Rev. Sci. Instrum. 75 (2004) 594-612 (the theoretical underpinnings for the procedure). [4] S.F. Tolic-Nørrelykke, et al., Calibration of optical tweezers with positions detection in the back-focal-plane, arXiv:physics/0603037 v2, 2006.
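The calibration at the core of the package rests on one formula: the power spectrum of a harmonically trapped Brownian bead is Lorentzian, P(f) = D / (2 pi^2 (fc^2 + f^2)), so fitting a measured spectrum yields the corner frequency fc and hence the trap stiffness k = 2 pi gamma fc. The Python sketch below fits synthetic blocked spectra under that simplest model; it omits the hydrodynamic, filter and aliasing corrections that tweezercalib applies, and the drag value is a rough figure for a micron-scale bead in water.

```python
# Fit a Lorentzian to a (synthetic) blocked power spectrum of bead positions.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, D, fc):
    return D / (2 * np.pi**2 * (fc**2 + f**2))

f = np.linspace(10.0, 20000.0, 4000)          # Hz: above drift, below Nyquist
D_true, fc_true = 0.5, 600.0
rng = np.random.default_rng(3)

# Each raw spectral estimate scatters exponentially about P(f);
# block-averaging nb of them gives ~Gaussian errors of size P/sqrt(nb).
nb = 64
P = lorentzian(f, D_true, fc_true) * rng.gamma(nb, 1.0 / nb, size=f.size)

popt, _ = curve_fit(lorentzian, f, P, p0=(1.0, 300.0),
                    sigma=P / np.sqrt(nb))
D_fit, fc_fit = popt
gamma = 1.9e-8                                 # Stokes drag, ~1 um-radius bead [kg/s]
print(fc_fit, 2 * np.pi * gamma * fc_fit)      # corner frequency, stiffness k
```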
Design of a real-time wind turbine simulator using a custom parallel architecture
NASA Technical Reports Server (NTRS)
Hoffman, John A.; Gluck, R.; Sridhar, S.
1995-01-01
The design of a new parallel-processing digital simulator is described. The new simulator has been developed specifically for the analysis of wind energy systems in real time, and has been named the Wind Energy System Time-domain simulator, version 3 (WEST-3). Like previous WEST versions, WEST-3 performs many computations in parallel; the modules in WEST-3 are pure digital processors, however. These digital processors can be programmed individually and operated in concert to achieve real-time simulation of wind turbine systems. Because of this programmability, WEST-3 is very much more flexible and general than its two predecessors. The design features of WEST-3 are described to show how the system produces high-speed solutions of nonlinear time-domain equations. WEST-3 has two very fast Computational Units (CUs) that use minicomputer technology plus special architectural features that make them many times faster than a microcomputer. These CUs are needed to perform, in real time, the complex computations associated with the wind turbine rotor system. The parallel architecture of the CU allows several tasks to be done in each cycle, including an I/O operation and the combination of a multiply, add, and store. The WEST-3 simulator can be expanded at any time for additional computational power. This is possible because the CUs are interfaced to each other and to other portions of the simulator using special serial buses. These buses can be 'patched' together in essentially any configuration (in a manner very similar to the programming methods used in analog computation) to balance the input/output requirements. CUs can be added in any number to share a given computational load. This flexible bus feature is very different from that of many other parallel processors, which usually have a throughput limit because of a rigid bus architecture.
A study on the use of Gumbel approximation with the Bernoulli spatial scan statistic.
Read, S; Bath, P A; Willett, P; Maheswaran, R
2013-08-30
The Bernoulli version of the spatial scan statistic is a well-established method of detecting localised spatial clusters in binary labelled point data, a typical application being the epidemiological case-control study. A recent study suggests the inferential accuracy of several versions of the spatial scan statistic (principally the Poisson version) can be improved, at little computational cost, by using the Gumbel distribution, a method now available in SaTScan(TM) (www.satscan.org). We study in detail the effect of this technique when applied to the Bernoulli version and demonstrate that it is highly effective, albeit with some increase in false alarm rates at certain significance thresholds. We explain how this increase is due to the discrete nature of the Bernoulli spatial scan statistic and demonstrate that it can affect even small p-values. Despite this, we argue that the Gumbel method is actually preferable for very small p-values. Furthermore, we extend previous research by running benchmark trials on 12 000 synthetic datasets, thus demonstrating that the overall detection capability of the Bernoulli version (i.e. the ratio of power to false alarm rate) is not noticeably affected by the use of the Gumbel method. We also provide an example application of the Gumbel method using data on hospital admissions for chronic obstructive pulmonary disease. Copyright © 2013 John Wiley & Sons, Ltd.
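The Gumbel technique amounts to fitting an extreme-value distribution to a modest number of Monte Carlo replicates of the maximum log-likelihood-ratio statistic and reading the p-value off the fitted tail rather than off the empirical rank. A minimal Python sketch of the idea follows; the test statistic here is a stand-in, since the actual Bernoulli scan statistic requires the case-control point data.

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(1)

def max_scan_statistic(data, w=25):
    """Stand-in for the scan statistic: maximum over sliding windows
    of the standardized window sum (a maximum of many local statistics)."""
    sums = np.convolve(data, np.ones(w), mode="valid") / np.sqrt(w)
    return sums.max()

observed = max_scan_statistic(rng.standard_normal(1000))

# Monte Carlo replicates of the statistic under the null hypothesis
reps = np.array([max_scan_statistic(rng.standard_normal(1000)) for _ in range(999)])

# Classical rank-based Monte Carlo p-value
p_mc = (1 + np.sum(reps >= observed)) / (1 + len(reps))

# Gumbel-based p-value: fit the null distribution of the maximum, use its tail
loc, scale = gumbel_r.fit(reps)
p_gumbel = gumbel_r.sf(observed, loc=loc, scale=scale)

print(f"Monte Carlo p = {p_mc:.4f}, Gumbel p = {p_gumbel:.4g}")
```

The practical advantage noted in the abstract is visible here: the rank-based estimate cannot resolve p-values below 1/(R+1) for R replicates, whereas the fitted Gumbel tail can.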
The use of self checks and voting in software error detection - An empirical study
NASA Technical Reports Server (NTRS)
Leveson, Nancy G.; Cha, Stephen S.; Knight, John C.; Shimeall, Timothy J.
1990-01-01
The results of an empirical study of software error detection using self checks and N-version voting are presented. Working independently, each of 24 programmers first prepared a set of self checks using just the requirements specification of an aerospace application, and then each added self checks to an existing implementation of that specification. The modified programs were executed to measure the error-detection performance of the checks and to compare this with error detection using simple voting among multiple versions. The analysis of the checks revealed that there are great differences in the ability of individual programmers to design effective checks. It was found that some checks that might have been effective failed to detect an error because they were badly placed, and there were numerous instances of checks signaling nonexistent errors. In general, specification-based checks alone were not as effective as specification-based checks combined with code-based checks. Self checks made it possible to identify faults that had not been detected previously by voting among 28 versions of the program over a million randomly generated inputs. This appeared to result from the fact that the self checks could examine the internal state of the executing program, whereas voting examines only the final results of computations. If internal states had to be identical in N-version voting systems, then there would be no reason to write multiple versions.
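As a concrete illustration of the two error-detection mechanisms compared in the study, the hypothetical Python sketch below (not from the experiment; all names are invented) votes on the final outputs of several independently written versions and also runs a specification-based self check, which can inspect state that voting on final results never sees.

```python
from collections import Counter

def version_a(xs): return sorted(xs)
def version_b(xs): return sorted(xs, reverse=False)
def version_c(xs): return list(xs)            # faulty version: forgets to sort

def vote(outputs):
    """N-version voting on final results: the majority output wins."""
    tally = Counter(tuple(o) for o in outputs)
    winner, count = tally.most_common(1)[0]
    return list(winner), count > len(outputs) // 2

def self_check(xs, result):
    """Specification-based self check: the result must be a permutation
    of the input and must be monotonically non-decreasing."""
    return (sorted(xs) == sorted(result)
            and all(a <= b for a, b in zip(result, result[1:])))

data = [3, 1, 2]
outs = [v(data) for v in (version_a, version_b, version_c)]
result, majority = vote(outs)
print("voted result:", result, "majority reached:", majority)
for name, out in zip("abc", outs):
    print(f"version {name}: self check {'passed' if self_check(data, out) else 'FAILED'}")
```

The self check flags version c on its own, independently of how the other versions behave, which mirrors the study's observation that checks on internal or per-version state can catch faults that correlated versions would let slip through a vote.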
micrOMEGAs 2.0.7: a program to calculate the relic density of dark matter in a generic model
NASA Astrophysics Data System (ADS)
Bélanger, G.; Boudjema, F.; Pukhov, A.; Semenov, A.
2007-12-01
micrOMEGAs 2.0.7 is a code which calculates the relic density of a stable massive particle in an arbitrary model. The underlying assumption is that there is a conservation law, like R-parity in supersymmetry, which guarantees the stability of the lightest odd particle. The new physics model must be incorporated in the notation of CalcHEP, a package for the automatic generation of squared matrix elements. Once this is done, all annihilation and coannihilation channels are included automatically in any model. Cross-sections at v=0, relevant for indirect detection of dark matter, are also computed automatically. The package includes three sample models: the minimal supersymmetric standard model (MSSM), the MSSM with complex phases and the NMSSM. Extension to other models, including non-supersymmetric models, is described. Program summary Title of program: micrOMEGAs2.0.7 Catalogue identifier: ADQR_v2_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADQR_v2_1.html Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 216 529 No. of bytes in distributed program, including test data, etc.: 1 848 816 Distribution format: tar.gz Programming language used: C and Fortran Computer: PC, Alpha, Mac, Sun Operating system: UNIX (Linux, OSF1, SunOS, Darwin, Cygwin) RAM: 17 MB depending on the number of processes required Classification: 1.9, 11.6 Catalogue identifier of previous version: ADQR_v2_0 Journal reference of previous version: Comput. Phys. Comm. 176 (2007) 367 Does the new version supersede the previous version?: Yes Nature of problem: Calculation of the relic density of the lightest stable particle in a generic new model of particle physics. Solution method: In numerically solving the evolution equation for the density of dark matter, relativistic formulae for the thermal average are used. All tree-level processes for annihilation and coannihilation of new particles in the model are included. The cross-sections for all processes are calculated exactly with CalcHEP after definition of a model file. Higher-order QCD corrections to Higgs couplings to quark pairs are included. Reasons for new version: The main changes in this new version consist, on the one hand, of improvements to the user interface and the treatment of error codes when using spectrum calculators in the MSSM and, on the other hand, of a completely revised code for the calculation of the relic density in the NMSSM, based on the code NMSSMTools 1.0.2 for the computation of the spectrum. Summary of revisions: The version of CalcHEP was updated to CalcHEP 2.4. The procedure for shared library generation has been improved; the libraries are now recalculated each time the model is modified. The default value for the top quark mass has been set to 171.4 GeV. Changes specific to the MSSM model: The deltaMb correction is now included in the b,t,H vertex and is always included for the other Higgs vertices. In case of a fatal error in an RGE program, micrOMEGAs now continues operation while issuing a warning that the given point is not valid; this is important when running scans over parameter space. However, this means that the standard ^C command that could be used to cancel a job now only cancels the RGE program. To cancel a job, use "kill -9 -N", where N is the micrOMEGAs process id; all child processes launched by micrOMEGAs will be killed at once.
Following the last SLHA2 release, we use the key=26 item of the EXTPAR block for the pole mass of the CP-odd Higgs, so that micrOMEGAs can now use SoftSUSY for spectrum calculation with EWSB input. The Isajet interface was corrected too, so the user has to recompile the isajet_slha executable. For SuSpect we still support an old "wrong" interface where key=24 is used for the mass of the CP-odd Higgs. In the non-universal SUGRA model, we set the value of m0 (M1/2, A0) to the value of the largest subset of equal parameters among the scalar masses (gaugino masses, trilinear couplings). In the previous version these parameters were set arbitrarily to be equal to MH2, MG2 and At, respectively. The spectrum calculators need input values for m0, M1/2 and A0 for initialisation purposes. We have removed bugs in the micrOMEGAs-Isajet interface in the case of non-universal SUGRA. $(FFLAGS) is added to the compilation instruction of suspect.exe; it was omitted in version 2.0. The treatment of errors in reading the Les Houches accord file is improved: if the SPINFO block is absent in the SLHA output, it is now considered a fatal error. Instructions for calculation of the Δρ, (g−2)μ, Br(b→sγ) and Br(Bs→μμ) constraints are included in the EWSB sample main programs omg.c/omg.cpp/omg.F. We have corrected the name of the library for neutralino-neutralino annihilation in our sample files MSSM/cs_br.*. Changes specific to the NMSSM model: The NMSSM part has been completely revised; it is now based on NMSSMTools_1.0.2. The deltaMb corrections in the NMSSM are included in the Higgs potential. CP violation model: We have included in our package the MSSM with CP violation. Our implementation was described in Phys. Rev. D 73 (2006) 115007. It is based on the CPSUPERH package published in Comput. Phys. Comm. 156 (2004) 283. Unusual features: Depending on the parameters of the model, the program generates additional new code, compiles it and loads it dynamically. Running time: 0.2 seconds
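For orientation, the evolution equation solved in such relic-density codes has, in the standard dimensionless form, the structure dY/dx = -(lam/x^2)(Y^2 - Yeq^2) with x = m/T. The toy Python sketch below uses illustrative constants only; the real code computes the thermally averaged cross sections exactly with CalcHEP and uses relativistic thermal averages, as stated above. It integrates this stiff equation to exhibit the freeze-out behaviour.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy freeze-out equation: dY/dx = -(lam/x^2) * (Y^2 - Yeq^2), x = m/T.
lam = 1.0e7                        # illustrative constant, ~ proportional to <sigma v>

def yeq(x):
    """Non-relativistic equilibrium yield (up to model-dependent constants)."""
    return 0.145 * x**1.5 * np.exp(-x)

def rhs(x, y):
    return [-(lam / x**2) * (y[0]**2 - yeq(x)**2)]

# Start in equilibrium at x = 1 and integrate through freeze-out (stiff problem).
sol = solve_ivp(rhs, (1.0, 1000.0), [yeq(1.0)], method="LSODA",
                rtol=1e-8, atol=1e-12)
print("asymptotic yield Y(x=1000) ~ %.3e" % sol.y[0, -1])
# The relic density Omega h^2 scales linearly with this asymptotic yield.
```

The tracked yield hugs the equilibrium curve at small x, then departs from it ("freezes out") once the annihilation term becomes too slow, which is the regime where the accuracy of the thermal averaging matters most.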
CRITIC2: A program for real-space analysis of quantum chemical interactions in solids
NASA Astrophysics Data System (ADS)
Otero-de-la-Roza, A.; Johnson, Erin R.; Luaña, Víctor
2014-03-01
We present CRITIC2, a program for the analysis of quantum-mechanical atomic and molecular interactions in periodic solids. This code, a greatly improved version of the previous CRITIC program (Otero-de-la-Roza et al., 2009), can: (i) find critical points of the electron density and related scalar fields such as the electron localization function (ELF), Laplacian, … (ii) integrate atomic properties in the framework of Bader's Atoms-in-Molecules theory (QTAIM), (iii) visualize non-covalent interactions in crystals using the non-covalent interactions (NCI) index, (iv) generate relevant graphical representations including lines, planes, gradient paths, contour plots, atomic basins, … and (v) perform transformations between file formats describing scalar fields and crystal structures. CRITIC2 can interface with the output produced by a variety of electronic structure programs including WIEN2k, elk, PI, abinit, Quantum ESPRESSO, VASP, Gaussian, and, in general, any other code capable of writing the scalar field under study to a three-dimensional grid. CRITIC2 is parallelized, completely documented (including illustrative test cases) and publicly available under the GNU General Public License. Catalogue identifier: AECB_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECB_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: yes No. of lines in distributed program, including test data, etc.: 11 686 949 No. of bytes in distributed program, including test data, etc.: 337 020 731 Distribution format: tar.gz Programming language: Fortran 77 and 90. Computer: Workstations. Operating system: Unix, GNU/Linux. Has the code been vectorized or parallelized?: Shared-memory parallelization can be used for most tasks. Classification: 7.3. Catalogue identifier of previous version: AECB_v1_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 157 Nature of problem: Analysis of quantum-chemical interactions in periodic solids by means of atoms-in-molecules and related formalisms. Solution method: Critical point search using Newton's algorithm, atomic basin integration using bisection, qtree and grid-based algorithms, diverse graphical representations and computation of the non-covalent interactions index on a three-dimensional grid. Additional comments: The distribution file for this program is over 330 Mbytes and therefore is not delivered directly when download or Email is requested. Instead, an html file giving details of how the program can be obtained is sent. Running time: Variable, depending on the crystal and the source of the underlying scalar field.
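The critical-point search named in the solution method is, at its core, Newton's method applied to the gradient of the scalar field: iterate x ← x − H(x)⁻¹ ∇f(x) until the gradient vanishes, then classify the point by the signs of the Hessian eigenvalues. The Python sketch below is not CRITIC2's Fortran implementation; it uses an analytic stand-in density (two Gaussian "atoms" in 2D), whereas CRITIC2 works on fields produced by electronic-structure codes.

```python
import numpy as np

centers = np.array([[0.0, 0.0], [2.0, 0.0]])   # two Gaussian "atoms"

def rho(x):
    """Toy electron-density-like scalar field."""
    return sum(np.exp(-np.sum((x - c)**2)) for c in centers)

def grad(x, h=1e-6):
    e = np.eye(2)
    return np.array([(rho(x + h*e[i]) - rho(x - h*e[i])) / (2*h) for i in range(2)])

def hess(x, h=1e-4):
    e = np.eye(2)
    return np.array([[(grad(x + h*e[j])[i] - grad(x - h*e[j])[i]) / (2*h)
                      for j in range(2)] for i in range(2)])

def newton_cp(x0, tol=1e-8, maxit=50):
    """Newton search for a critical point (zero of the gradient)."""
    x = np.array(x0, dtype=float)
    for _ in range(maxit):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)    # Newton step toward grad = 0
    signs = np.sign(np.linalg.eigvalsh(hess(x)))  # Hessian signature classifies the CP
    return x, signs

cp, signs = newton_cp([1.1, 0.05])
print("critical point:", cp, "Hessian eigenvalue signs:", signs)
# Expect the bond midpoint (1, 0): a saddle, one positive and one negative curvature.
```

Note that Newton's method converges to whichever critical point lies in its basin, so production codes seed it from many starting points and from symmetry information, exactly as the search over a full crystal requires.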
SMMP v. 3.0—Simulating proteins and protein interactions in Python and Fortran
NASA Astrophysics Data System (ADS)
Meinke, Jan H.; Mohanty, Sandipan; Eisenmenger, Frank; Hansmann, Ulrich H. E.
2008-03-01
We describe a revised and updated version of the program package SMMP. SMMP is an open-source FORTRAN package for molecular simulation of proteins within the standard geometry model. It is designed as a simple and inexpensive tool for researchers and students to become familiar with protein simulation techniques. SMMP 3.0 sports a revised API increasing its flexibility, an implementation of the Lund force field, multi-molecule simulations, a parallel implementation of the energy function, Python bindings, and more. Program summary Title of program: SMMP Catalogue identifier: ADOJ_v3_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADOJ_v3_0.html Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html Programming language used: FORTRAN, Python No. of lines in distributed program, including test data, etc.: 52 105 No. of bytes in distributed program, including test data, etc.: 599 150 Distribution format: tar.gz Computer: Platform independent Operating system: OS independent RAM: 2 Mbytes Classification: 3 Does the new version supersede the previous version?: Yes Nature of problem: Molecular mechanics computations and Monte Carlo simulation of proteins. Solution method: Utilizes ECEPP2/3, FLEX, and Lund potentials. Includes Monte Carlo simulation algorithms for canonical, as well as for generalized, ensembles. Reasons for new version: API changes and increased functionality. Summary of revisions: Added Lund potential; parameters used in subroutines are now passed as arguments; multi-molecule simulations; parallelized energy calculation for ECEPP; Python bindings. Restrictions: The consumed CPU time increases with the size of the protein molecule. Running time: Depends on the size of the simulated molecule.
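The canonical Monte Carlo machinery in such a package is, at heart, a Metropolis loop over the internal (dihedral) degrees of freedom of the standard geometry model. A minimal Python sketch follows, with a toy torsional energy standing in for the ECEPP/FLEX/Lund force fields; all names and parameters are illustrative, not SMMP's API.

```python
import math, random

random.seed(42)

def energy(angles):
    """Toy torsional potential standing in for a protein force field."""
    return sum(1.0 + math.cos(3.0 * a) for a in angles)

def metropolis(nsteps=20000, nangles=10, beta=2.0, step=0.3):
    """Canonical-ensemble Metropolis Monte Carlo over dihedral angles."""
    angles = [0.0] * nangles
    e = energy(angles)
    accepted = 0
    for _ in range(nsteps):
        i = random.randrange(nangles)             # pick one dihedral at random
        old = angles[i]
        angles[i] += random.uniform(-step, step)  # trial move
        e_new = energy(angles)
        if e_new <= e or random.random() < math.exp(-beta * (e_new - e)):
            e, accepted = e_new, accepted + 1     # accept the move
        else:
            angles[i] = old                       # reject: restore the old angle
    return e, accepted / nsteps

e_final, acc = metropolis()
print(f"final energy {e_final:.3f}, acceptance rate {acc:.2f}")
```

Since each sweep is dominated by the energy evaluation, parallelizing the energy function, as SMMP 3.0 does for ECEPP, accelerates every Monte Carlo algorithm built on top of it.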
Learning to perceive and recognize a second language: the L2LP model revised.
van Leussen, Jan-Willem; Escudero, Paola
2015-01-01
We present a test of a revised version of the Second Language Linguistic Perception (L2LP) model, a computational model of the acquisition of second language (L2) speech perception and recognition. The model draws on phonetic, phonological, and psycholinguistic constructs to explain a number of L2 learning scenarios. However, a recent computational implementation failed to validate a theoretical proposal for a learning scenario where the L2 has less phonemic categories than the native language (L1) along a given acoustic continuum. According to the L2LP, learners faced with this learning scenario must not only shift their old L1 phoneme boundaries but also reduce the number of categories employed in perception. Our proposed revision to L2LP successfully accounts for this updating in the number of perceptual categories as a process driven by the meaning of lexical items, rather than by the learners' awareness of the number and type of phonemes that are relevant in their new language, as the previous version of L2LP assumed. Results of our simulations show that meaning-driven learning correctly predicts the developmental path of L2 phoneme perception seen in empirical studies. Additionally, and to contribute to a long-standing debate in psycholinguistics, we test two versions of the model, with the stages of phonemic perception and lexical recognition being either sequential or interactive. Both versions succeed in learning to recognize minimal pairs in the new L2, but make diverging predictions on learners' resulting phonological representations. In sum, the proposed revision to the L2LP model contributes to our understanding of L2 acquisition, with implications for speech processing in general.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorai, Prashun; Toberer, Eric S.; Stevanović, Vladan
Quasi low-dimensional structures are abundant among known thermoelectric materials, primarily because of their low lattice thermal conductivities. In this work, we have computationally assessed the potential of 427 known binary quasi-2D structures in 272 different chemistries for thermoelectric performance. To assess the thermoelectric performance, we employ an improved version of our previously developed descriptor for thermoelectric performance [Yan et al., Energy Environ. Sci., 2015, 8, 983]. The improvement is in the explicit treatment of van der Waals interactions in quasi-2D materials, which leads to significantly better predictions of their crystal structures and lattice thermal conductivities. The improved methodology correctly identifies known binary quasi-2D thermoelectric materials such as Sb2Te3, Bi2Te3, SnSe, SnS, InSe, and In2Se3. As a result, we propose candidate quasi-2D binary materials, a number of which have not been previously considered for thermoelectric applications.
Automated Infrared Inspection Of Jet Engine Turbine Blades
NASA Astrophysics Data System (ADS)
Bantel, T.; Bowman, D.; Halase, J.; Kenue, S.; Krisher, R.; Sippel, T.
1986-03-01
The detection of blocked surface cooling holes in hollow jet engine turbine blades and vanes during either manufacture or overhaul can be crucial to the integrity and longevity of the parts when in service. A fully automated infrared inspection system is being established under a tri-service Manufacturing Technology (ManTech) contract administered by the Air Force to inspect these surface cooling holes for blockages. The method consists of viewing the surface holes of the blade with a scanning infrared radiometer while heated air is flushed through the blade. As the airfoil heats up, the resultant infrared images are written directly into computer memory, where image analysis is performed. The computer then determines whether or not the holes are open from the inner plenum to the exterior surface and ultimately makes an accept/reject decision based on previously programmed criteria. A semiautomatic version has already been implemented and is more cost effective and more reliable than the previous manual inspection methods.
ERIC Educational Resources Information Center
Whiting, Hal; Kline, Theresa J. B.
2006-01-01
This study examined the equivalency of computer and conventional versions of the Test of Workplace Essential Skills (TOWES), a test of adult literacy skills in Reading Text, Document Use and Numeracy. Seventy-three college students completed the computer version, and their scores were compared with those who had taken the test in the conventional…
Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC), version 4.0: User's manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1988-01-01
The information in the NASARC (Version 4.0) Technical Manual (NASA-TM-101453) and NASARC (Version 4.0) User's Manual (NASA-TM-101454) relates to the state of Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbit. Array dimensions within the software were structured to fit within the currently available 12-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.
NASA Astrophysics Data System (ADS)
Schunck, N.; Dobaczewski, J.; McDonnell, J.; Satuła, W.; Sheikh, J. A.; Staszczak, A.; Stoitsov, M.; Toivanen, P.
2012-01-01
We describe the new version (v2.49t) of the code HFODD which solves the nuclear Skyrme-Hartree-Fock (HF) or Skyrme-Hartree-Fock-Bogolyubov (HFB) problem by using the Cartesian deformed harmonic-oscillator basis. In the new version, we have implemented the following physics features: (i) the isospin mixing and projection, (ii) the finite-temperature formalism for the HFB and HF + BCS methods, (iii) the Lipkin translational energy correction method, (iv) the calculation of the shell correction. A number of specific numerical methods have also been implemented in order to deal with large-scale multi-constraint calculations and hardware limitations: (i) the two-basis method for the HFB method, (ii) the Augmented Lagrangian Method (ALM) for multi-constraint calculations, (iii) the linear constraint method based on the approximation of the RPA matrix for multi-constraint calculations, (iv) an interface with the axial and parity-conserving Skyrme-HFB code HFBTHO, (v) the mixing of the HF or HFB matrix elements instead of the HF fields. Special care has been paid to using the code on massively parallel leadership-class computers. For this purpose, the following features are now available with this version: (i) the Message Passing Interface (MPI) framework, (ii) scalable input data routines, (iii) multi-threading via OpenMP pragmas, (iv) parallel diagonalization of the HFB matrix in the simplex-breaking case using the ScaLAPACK library. Finally, several minor errors of the previously published version were corrected. New version program summary Program title: HFODD (v2.49t) Catalogue identifier: ADFL_v3_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADFL_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence v3 No. of lines in distributed program, including test data, etc.: 190 614 No. of bytes in distributed program, including test data, etc.: 985 898 Distribution format: tar.gz Programming language: FORTRAN-90 Computer: Intel Pentium-III, Intel Xeon, AMD-Athlon, AMD-Opteron, Cray XT4, Cray XT5 Operating system: UNIX, LINUX, Windows XP Has the code been vectorized or parallelized?: Yes, parallelized using MPI RAM: 10 Mwords Word size: The code is written in single precision for use on 64-bit processors. The compiler option -r8 or +autodblpad (or equivalent) has to be used to promote all real and complex single-precision floating-point items to double precision when the code is used on a 32-bit machine. Classification: 17.22 Catalogue identifier of previous version: ADFL_v2_2 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2361 External routines: The user must have access to the NAGLIB subroutine f02axe, or the LAPACK subroutines zhpev, zhpevx, zheevr, or zheevd, which diagonalize complex hermitian matrices; the LAPACK subroutines dgetri and dgetrf, which invert arbitrary real matrices; the LAPACK subroutines dsyevd, dsytrf and dsytri, which compute eigenvalues and eigenfunctions of real symmetric matrices; the LINPACK subroutines zgedi and zgeco, which invert arbitrary complex matrices and calculate determinants; the BLAS routines dcopy, dscal, dgemm and dgemv for double-precision linear algebra and zcopy, zdscal, zgemm and zgemv for complex linear algebra; or provide another set of subroutines that can perform such tasks. The BLAS and LAPACK subroutines can be obtained from the Netlib Repository at the University of Tennessee, Knoxville: http://netlib2.cs.utk.edu/.
Does the new version supersede the previous version?: Yes Nature of problem: The nuclear mean field and an analysis of its symmetries in realistic cases are the main ingredients of a description of nuclear states. Within the Local Density Approximation, or for a zero-range velocity-dependent Skyrme interaction, the nuclear mean field is local and velocity dependent. The locality allows for an effective and fast solution of the self-consistent Hartree-Fock equations, even for heavy nuclei, and for various nucleonic (n-particle-n-hole) configurations, deformations, excitation energies, or angular momenta. Similarly, the Local Density Approximation in the particle-particle channel, which is equivalent to using a zero-range interaction, allows for a simple implementation of pairing effects within the Hartree-Fock-Bogolyubov method. Solution method: The program uses the Cartesian harmonic oscillator basis to expand single-particle or single-quasiparticle wave functions of neutrons and protons interacting by means of the Skyrme effective interaction and a zero-range pairing interaction. The expansion coefficients are determined by the iterative diagonalization of the mean-field Hamiltonians or Routhians, which depend non-linearly on the local neutron and proton densities. Suitable constraints are used to obtain states corresponding to a given configuration, deformation or angular momentum. The method of solution has been presented in: [J. Dobaczewski, J. Dudek, Comput. Phys. Commun. 102 (1997) 166]. Reasons for new version: Version 2.49t of HFODD provides a number of new options such as the isospin mixing and projection of the Skyrme functional, the finite-temperature HF and HFB formalism and optimized methods to perform multi-constrained calculations. It is also the first version of HFODD to contain threading and parallel capabilities. Summary of revisions: Isospin mixing and projection of the HF states has been implemented. The finite-temperature formalism for the HFB equations has been implemented. The Lipkin translational energy correction method has been implemented. Calculation of the shell correction has been implemented. The two-basis method for the solution of the HFB equations has been implemented. The Augmented Lagrangian Method (ALM) for calculations with multiple constraints has been implemented. The linear constraint method based on the cranking approximation of the RPA matrix has been implemented. An interface between HFODD and the axially-symmetric and parity-conserving code HFBTHO has been implemented. The mixing of the matrix elements of the HF or HFB matrix has been implemented. A parallel interface using the MPI library has been implemented. A scalable model for reading input data has been implemented. OpenMP pragmas have been implemented in three subroutines. The diagonalization of the HFB matrix in the simplex-breaking case has been parallelized using the ScaLAPACK library. Several minor errors of the previously published version were corrected. Running time: In serial mode, running 6 HFB iterations for 152Dy with conserved parity and signature symmetries in a full spherical basis of N = 14 shells takes approximately 8 min on an AMD Opteron processor at 2.6 GHz, assuming standard BLAS and LAPACK libraries. As a rule of thumb, the runtime of HFB calculations with parity and signature symmetries conserved increases roughly as a power of N, where N is the number of full HO shells.
Using custom-built optimized BLAS and LAPACK libraries (such as in the ATLAS implementation) can bring down the execution time by 60%. Using the threaded version of the code with 12 threads and threaded BLAS libraries brings an additional factor-of-2 speed-up, so that the same 6 HFB iterations then take on the order of 2 min 30 s.
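Several items in the revision list above (iterative diagonalization, mixing of HF/HFB matrix elements rather than fields) follow the generic self-consistent-field pattern sketched below in Python. This is a schematic toy model, not HFODD's Skyrme functional: the "mean field" is a fixed one-body part plus a density-dependent diagonal term, and successive Hamiltonian matrices are linearly mixed to stabilize the iteration.

```python
import numpy as np

rng = np.random.default_rng(7)
n, npart, g, alpha = 20, 6, 1.5, 0.5     # basis size, particles, coupling, mixing weight

h0 = np.diag(np.arange(n, dtype=float))  # fixed one-body part of the Hamiltonian
h0 += 0.1 * (lambda a: a + a.T)(rng.standard_normal((n, n)))   # small symmetric coupling

def build_h(density_diag):
    """Schematic density-dependent mean field (stand-in for the Skyrme functional)."""
    return h0 + g * np.diag(density_diag)

h = h0.copy()
for it in range(200):
    w, v = np.linalg.eigh(h)             # diagonalize the current mean field
    occ = v[:, :npart]                   # occupy the lowest single-particle states
    rho = (occ @ occ.T).diagonal()       # local density from the occupied states
    h_new = build_h(rho)
    if np.linalg.norm(h_new - h) < 1e-10:
        break                            # self-consistency reached
    h = alpha * h_new + (1 - alpha) * h  # mix matrix elements, not fields
print(f"converged in {it} iterations, sum of occupied energies = {w[:npart].sum():.6f}")
```

Mixing the matrix elements directly, as the revision list notes, damps the oscillations that plain iteration exhibits when the density feedback is strong; constraints and projections enter the real code as extra terms in the matrix being diagonalized.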
NDL-v2.0: A new version of the numerical differentiation library for parallel architectures
NASA Astrophysics Data System (ADS)
Hadjidoukas, P. E.; Angelikopoulos, P.; Voglis, C.; Papageorgiou, D. G.; Lagaris, I. E.
2014-07-01
We present a new version of the numerical differentiation library (NDL) used for the numerical estimation of first and second order partial derivatives of a function by finite differencing. In this version we have restructured the serial implementation of the code so as to achieve optimal task-based parallelization. The pure shared-memory parallelization of the library is based on the lightweight OpenMP tasking model, allowing for the full extraction of the available parallelism and efficient scheduling of multiple concurrent library calls. On multicore clusters, parallelism is exploited by means of TORC, an MPI-based multi-threaded tasking library. The new MPI implementation of NDL provides optimal performance in terms of function calls and, furthermore, supports asynchronous execution of multiple library calls within legacy MPI programs. In addition, a Python interface has been implemented for all cases, exporting the functionality of our library to sequential Python codes. Catalogue identifier: AEDG_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 63 036 No. of bytes in distributed program, including test data, etc.: 801 872 Distribution format: tar.gz Programming language: ANSI Fortran-77, ANSI C, Python. Computer: Distributed systems (clusters), shared memory systems. Operating system: Linux, Unix. Has the code been vectorized or parallelized?: Yes. RAM: The library uses O(N) internal storage, N being the dimension of the problem. It can use up to O(N^2) internal storage for Hessian calculations if a task throttling factor has not been set by the user. Classification: 4.9, 4.14, 6.5. Catalogue identifier of previous version: AEDG_v1_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1404 Does the new version supersede the previous version?: Yes Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, and sensitivity analysis. For a large number of scientific and engineering applications, the underlying functions correspond to simulation codes for which analytical estimation of derivatives is difficult or almost impossible. A parallel implementation that exploits systems with multiple CPUs is very important for large-scale and computationally expensive problems. Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both the OpenMP and MPI libraries. Reasons for new version: The updated version was motivated by our endeavors to extend a parallel Bayesian uncertainty quantification framework [1] by incorporating higher-order derivative information, as in most state-of-the-art stochastic simulation methods such as Stochastic Newton MCMC [2] and Riemannian Manifold Hamiltonian MC [3]. The function evaluations are simulations with significant time-to-solution, which also varies with the input parameters, as in [1,4]. The runtime of the N-body type of problem changes considerably with the introduction of a longer cut-off between the bodies.
In the first version of the library, the OpenMP-parallel subroutines spawn a new team of threads and distribute the function evaluations with a PARALLEL DO directive. This limits the functionality of the library as multiple concurrent calls require nested parallelism support from the OpenMP environment. Therefore, either their function evaluations will be serialized or processor oversubscription is likely to occur due to the increased number of OpenMP threads. In addition, the Hessian calculations include two explicit parallel regions that compute first the diagonal and then the off-diagonal elements of the array. Due to the barrier between the two regions, the parallelism of the calculations is not fully exploited. These issues have been addressed in the new version by first restructuring the serial code and then running the function evaluations in parallel using OpenMP tasks. Although the MPI-parallel implementation of the first version is capable of fully exploiting the task parallelism of the PNDL routines, it does not utilize the caching mechanism of the serial code and, therefore, performs some redundant function evaluations in the Hessian and Jacobian calculations. This can lead to: (a) higher execution times if the number of available processors is lower than the total number of tasks, and (b) significant energy consumption due to wasted processor cycles. Overcoming these drawbacks, which become critical as the time of a single function evaluation increases, was the primary goal of this new version. Due to the code restructure, the MPI-parallel implementation (and the OpenMP-parallel in accordance) avoids redundant calls, providing optimal performance in terms of the number of function evaluations. Another limitation of the library was that the library subroutines were collective and synchronous calls. In the new version, each MPI process can issue any number of subroutines for asynchronous execution. We introduce two library calls that provide global and local task synchronizations, similarly to the BARRIER and TASKWAIT directives of OpenMP. The new MPI-implementation is based on TORC, a new tasking library for multicore clusters [5-7]. TORC improves the portability of the software, as it relies exclusively on the POSIX-Threads and MPI programming interfaces. It allows MPI processes to utilize multiple worker threads, offering a hybrid programming and execution environment similar to MPI+OpenMP, in a completely transparent way. Finally, to further improve the usability of our software, a Python interface has been implemented on top of both the OpenMP and MPI versions of the library. This allows sequential Python codes to exploit shared and distributed memory systems. Summary of revisions: The revised code improves the performance of both parallel (OpenMP and MPI) implementations. The functionality and the user-interface of the MPI-parallel version have been extended to support the asynchronous execution of multiple PNDL calls, issued by one or multiple MPI processes. A new underlying tasking library increases portability and allows MPI processes to have multiple worker threads. For both implementations, an interface to the Python programming language has been added. Restrictions: The library uses only double precision arithmetic. The MPI implementation assumes the homogeneity of the execution environment provided by the operating system. 
Specifically, the processes of a single MPI application must have identical address spaces, so that a user function resides at the same virtual address in every process. In addition, address space layout randomization should not be used for the application. Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and, given the level of the desired accuracy, the proper formula is automatically employed. Running time: Running time depends on the function's complexity. The test run took 23 ms for the serial distribution, 25 ms for OpenMP with 2 threads, and 53 ms and 1.01 s for the MPI parallel distribution using 2 threads and 2 processes, respectively, with the yield-time for idle workers equal to 10 ms. References: [1] P. Angelikopoulos, C. Papadimitriou, P. Koumoutsakos, Bayesian uncertainty quantification and propagation in molecular dynamics simulations: a high performance computing framework, J. Chem. Phys. 137 (14). [2] H.P. Flath, L.C. Wilcox, V. Akcelik, J. Hill, B. van Bloemen Waanders, O. Ghattas, Fast algorithms for Bayesian uncertainty quantification in large-scale linear inverse problems based on low-rank partial Hessian approximations, SIAM J. Sci. Comput. 33 (1) (2011) 407-432. [3] M. Girolami, B. Calderhead, Riemann manifold Langevin and Hamiltonian Monte Carlo methods, J. R. Stat. Soc. Ser. B (Stat. Methodol.) 73 (2) (2011) 123-214. [4] P. Angelikopoulos, C. Papadimitriou, P. Koumoutsakos, Data driven, predictive molecular dynamics for nanoscale flow simulations under uncertainty, J. Phys. Chem. B 117 (47) (2013) 14808-14816. [5] P.E. Hadjidoukas, E. Lappas, V.V. Dimakopoulos, A runtime library for platform-independent task parallelism, in: PDP, IEEE, 2012, pp. 229-236. [6] C. Voglis, P.E. Hadjidoukas, D.G. Papageorgiou, I. Lagaris, A parallel hybrid optimization algorithm for fitting interatomic potentials, Appl. Soft Comput. 13 (12) (2013) 4481-4492. [7] P.E. Hadjidoukas, C. Voglis, V.V. Dimakopoulos, I. Lagaris, D.G. Papageorgiou, Supporting adaptive and irregular parallelism for non-linear numerical optimization, Appl. Math. Comput. 231 (2014) 544-559.
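The step-selection rule in the solution method balances truncation against round-off error; for a central difference the total error is minimized near h ≈ eps^(1/3) · max(1, |x|), with eps the machine epsilon. A small Python sketch of the idea follows; PNDL itself adds bound constraints, higher-order formulas, and parallel dispatch of the function evaluations.

```python
import numpy as np

EPS = np.finfo(float).eps

def central_diff_grad(f, x):
    """First-order partials by central differences with near-optimal steps."""
    x = np.asarray(x, dtype=float)
    g = np.empty_like(x)
    for i in range(x.size):
        h = EPS**(1.0 / 3.0) * max(1.0, abs(x[i]))  # balances truncation vs round-off
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

# Each f(x +/- e) is independent, so the 2N evaluations parallelize trivially;
# this is exactly the task parallelism NDL schedules with OpenMP tasks or TORC/MPI.
f = lambda x: np.sin(x[0]) * np.exp(x[1])
x0 = np.array([0.7, -0.2])
print("numeric :", central_diff_grad(f, x0))
print("analytic:", np.array([np.cos(x0[0]) * np.exp(x0[1]),
                             np.sin(x0[0]) * np.exp(x0[1])]))
```

The caching issue discussed above is visible even in this sketch: a Hessian built from such gradient calls repeats function evaluations at shared points unless they are memoized, which is precisely the redundancy the new version eliminates.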
A computationally tractable version of the collective model
NASA Astrophysics Data System (ADS)
Rowe, D. J.
2004-05-01
A computationally tractable version of the Bohr-Mottelson collective model is presented which makes it possible to diagonalize realistic collective models and obtain convergent results in relatively small appropriately chosen subspaces of the collective model Hilbert space. Special features of the proposed model are that it makes use of the beta wave functions given analytically by the softened-beta version of the Wilets-Jean model, proposed by Elliott et al., and a simple algorithm for computing SO(5)⊃SO(3) spherical harmonics. The latter has much in common with the methods of Chacon, Moshinsky, and Sharp but is conceptually and computationally simpler. Results are presented for collective models ranging from the spherical vibrator to the Wilets-Jean and axially symmetric rotor-vibrator models.
Star adaptation for two algorithms used on serial computers
NASA Technical Reports Server (NTRS)
Howser, L. M.; Lambiotte, J. J., Jr.
1974-01-01
Two representative algorithms used on a serial computer and presently executed on the Control Data Corporation 6000 computer were adapted to execute efficiently on the Control Data STAR-100 computer. Gaussian elimination for the solution of simultaneous linear equations and the Gauss-Legendre quadrature formula for the approximation of an integral are the two algorithms discussed. A description is given of how the programs were adapted for STAR and why these adaptations were necessary to obtain an efficient STAR program. Some points to consider when adapting an algorithm for STAR are discussed. Program listings of the 6000 version coded in 6000 FORTRAN, the adapted STAR version coded in 6000 FORTRAN, and the STAR version coded in STAR FORTRAN are presented in the appendices.
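Of the two algorithms, the quadrature case illustrates most directly what adapting a serial code for STAR meant: recasting a loop over abscissas as a single vector operation. A modern Python analogue of that transformation is sketched below, with NumPy array operations standing in for STAR vector instructions.

```python
import numpy as np

def gauss_legendre(f, a, b, n=32):
    """Integrate f on [a, b] with an n-point Gauss-Legendre rule, fully vectorized."""
    nodes, weights = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)            # map nodes to [a, b]
    return 0.5 * (b - a) * np.dot(weights, f(x))         # one vector op, no scalar loop

# Example: integral of exp(-x^2) on [0, 1]; exact value sqrt(pi)/2 * erf(1) ~ 0.7468241
print(gauss_legendre(lambda x: np.exp(-x**2), 0.0, 1.0))
```

The serial version would evaluate the integrand once per abscissa inside a DO loop; the vector version evaluates all abscissas at once, which is the restructuring the report describes for the STAR-100.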
Student Assistant for Learning from Text (SALT): a hypermedia reading aid.
MacArthur, C A; Haynes, J B
1995-03-01
Student Assistant for Learning from Text (SALT) is a software system for developing hypermedia versions of textbooks designed to help students with learning disabilities and other low-achieving students to compensate for their reading difficulties. In the present study, 10 students with learning disabilities (3 young women and 7 young men ages 15 to 17) in Grades 9 and 10 read passages from a science textbook using a basic computer version and an enhanced computer version. The basic version included the components found in the printed textbook (text, graphics, outline, and questions) and a notebook. The enhanced version added speech synthesis, an on-line glossary, links between questions and text, highlighting of main ideas, and supplementary explanations that summarized important ideas. Students received significantly higher comprehension scores using the enhanced version. Furthermore, students preferred the enhanced version and thought it helped them learn the material better.
Haley, Stephen M; Fragala-Pinkham, Maria; Ni, Pengsheng
2006-07-01
To examine the relative sensitivity to detect functional mobility changes with a full-length parent questionnaire compared with a computerized adaptive testing version of the questionnaire after a 16-week group fitness programme. Prospective, pre- and posttest study with a 16-week group fitness intervention. Three community-based fitness centres. Convenience sample of children (n = 28) with physical or developmental disabilities. A 16-week group exercise programme held twice a week in a community setting. A full-length (161 items) paper version of a mobility parent questionnaire based on the Pediatric Evaluation of Disability Inventory, but expanded to include expected skills of children up to 15 years old was compared with a 15-item computer adaptive testing version. Both measures were administered at pre- and posttest intervals. Both the full-length Pediatric Evaluation of Disability Inventory and the 15-item computer adaptive testing version detected significant changes between pre- and posttest scores, had large effect sizes, and standardized response means, with a modest decrease in the computer adaptive test as compared with the 161-item paper version. Correlations between the computer adaptive and paper formats across pre- and posttest scores ranged from r = 0.76 to 0.86. Both functional mobility test versions were able to detect positive functional changes at the end of the intervention period. Greater variability in score estimates was generated by the computerized adaptive testing version, which led to a relative reduction in sensitivity as defined by the standardized response mean. Extreme scores were generally more difficult for the computer adaptive format to estimate with as much accuracy as scores in the mid-range of the scale. However, the reduction in accuracy and sensitivity, which did not influence the group effect results in this study, is counterbalanced by the large reduction in testing burden.
NASA Technical Reports Server (NTRS)
Juang, Hann-Ming Henry; Tao, Wei-Kuo; Zeng, Xi-Ping; Shie, Chung-Lin; Simpson, Joanne; Lang, Steve
2004-01-01
The capability for massively parallel programming (MPP) using a message passing interface (MPI) has been implemented into a three-dimensional version of the Goddard Cumulus Ensemble (GCE) model. The design for the MPP with MPI uses the concept of maintaining a similar code structure between the whole domain and the portions after decomposition. Hence the model follows the same integration for single and multiple tasks (CPUs). Also, it requires minimal changes to the original code, so it is easily modified and/or managed by the model developers and users who have little knowledge of MPP. The entire model domain can be sliced into a one- or two-dimensional decomposition with a halo regime, which is overlaid on the partial domains. The halo regime requires that no data be fetched across tasks during the computational stage, but it must be updated before the next computational stage through data exchange via MPI. For reproducibility, transposing data among tasks is required for the spectral transform (Fast Fourier Transform, FFT), which is used in the anelastic version of the model for solving the pressure equation. The performance of the MPI-implemented codes (i.e., the compressible and anelastic versions) was tested on three different computing platforms. The major results are: 1) both versions have speedups of about 99% up to 256 tasks but not for 512 tasks; 2) the anelastic version has better speedup and efficiency because it requires more computation than the compressible version; 3) equal or approximately equal numbers of slices between the x- and y-directions provide the fastest integration due to fewer data exchanges; and 4) one-dimensional slices in the x-direction result in the slowest integration due to the need for more memory relocation for computation.
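The halo-update step described above is the standard pattern for domain-decomposed stencil codes. A minimal Python/mpi4py sketch of a one-dimensional decomposition follows (illustrative only; the GCE model implements this in its own code, with one- or two-dimensional slicing). Run under MPI, e.g. mpirun -n 4 python halo.py.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nloc = 8                                            # interior points per task
u = np.full(nloc + 2, float(rank))                  # one halo cell on each side

left = rank - 1 if rank > 0 else MPI.PROC_NULL      # PROC_NULL makes edge sends no-ops
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Halo exchange: send the right edge to the right neighbour while receiving the
# left halo, and vice versa, before the next computational stage begins.
comm.Sendrecv(u[nloc:nloc + 1], dest=right, recvbuf=u[0:1], source=left)
comm.Sendrecv(u[1:2], dest=left, recvbuf=u[nloc + 1:nloc + 2], source=right)

# The stencil update now needs no further communication within this stage.
# (Global edges keep their initial values here; a real model applies boundary conditions.)
u[1:nloc + 1] = 0.5 * (u[0:nloc] + u[2:nloc + 2])
print(f"task {rank}: {u[1:nloc + 1]}")
```

Result 3) above follows from this pattern: for a fixed number of tasks, squarish two-dimensional slices minimize the halo surface per task and hence the volume of data exchanged each stage.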
Simulation of modern climate with the new version of the INM RAS climate model
NASA Astrophysics Data System (ADS)
Volodin, E. M.; Mortikov, E. V.; Kostrykin, S. V.; Galin, V. Ya.; Lykosov, V. N.; Gritsun, A. S.; Diansky, N. A.; Gusev, A. V.; Yakovlev, N. G.
2017-03-01
The INMCM5.0 numerical model of the Earth's climate system is presented, an evolution from the previous version, INMCM4.0. A higher vertical resolution for the stratosphere is applied in the atmospheric block. We also raised the upper boundary of the computational domain, added an aerosol block, modified the parameterization of clouds and condensation, and increased the horizontal resolution in the ocean block. The program implementation of the model was also updated. We consider the simulation of the current climate using the new version of the model. Attention is focused on the reduction of systematic errors compared to the previous version, on the reproduction of phenomena that could not be simulated correctly in the previous version, and on the problems that remain unresolved.
Nucleation and growth in one dimension. I. The generalized Kolmogorov-Johnson-Mehl-Avrami model
NASA Astrophysics Data System (ADS)
Jun, Suckjoon; Zhang, Haiyang; Bechhoefer, John
2005-01-01
Motivated by a recent application of the Kolmogorov-Johnson-Mehl-Avrami (KJMA) model to the study of DNA replication, we consider the one-dimensional (1D) version of this model. We generalize previous work to the case where the nucleation rate is an arbitrary function I(t) and obtain analytical results for the time-dependent distributions of various quantities (such as the island distribution). We also present improved computer simulation algorithms to study the 1D KJMA model. The analytical results and simulations are in excellent agreement.
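For the special case of a constant nucleation rate I (per unit length and time) and growth speed v in both directions, the 1D KJMA result for the uncovered fraction is S(t) = exp(-I v t^2). The Monte Carlo sketch below (pure Python/NumPy, not the authors' simulation code) checks this closed form on a periodic domain.

```python
import numpy as np

rng = np.random.default_rng(3)
L, I, v, T = 200.0, 0.05, 1.0, 4.0      # domain length, nucleation rate, growth speed, final time

# Nucleation events form a Poisson process in the space-time rectangle [0, L) x [0, T).
n_events = rng.poisson(I * L * T)
xs = rng.uniform(0.0, L, n_events)      # nucleation positions
ts = rng.uniform(0.0, T, n_events)      # nucleation times

def uncovered_fraction(t, ngrid=20000):
    """Fraction of the ring not yet covered by any growing island at time t."""
    grid = np.linspace(0.0, L, ngrid, endpoint=False)
    covered = np.zeros(ngrid, dtype=bool)
    for x, t0 in zip(xs, ts):
        if t0 >= t:
            continue
        r = v * (t - t0)                # island occupies [x - r, x + r], periodic
        d = np.abs((grid - x + L / 2) % L - L / 2)
        covered |= d <= r
    return 1.0 - covered.mean()

for t in (1.0, 2.0, 3.0, 4.0):
    print(f"t={t}: simulated S={uncovered_fraction(t):.3f}, "
          f"KJMA exp(-I v t^2)={np.exp(-I * v * t**2):.3f}")
```

For a time-dependent rate I(t), as in the generalization above, the exponent becomes the integral of I(t') times the extended length 2v(t - t') of each island, which is the structure the paper exploits analytically.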
An improved multiple linear regression and data analysis computer program package
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, together with CREDUC and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum-seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double-precision arithmetic.
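The core of such a package is an ordinary least-squares solve plus the associated inference table. A compact NumPy sketch of the calculations named above (coefficients, t-statistics and their probability levels, residuals), done in double precision as the abstract emphasizes; all data here are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p))])  # intercept + 3 regressors
beta_true = np.array([2.0, 1.0, 0.0, -0.5])
y = X @ beta_true + 0.3 * rng.standard_normal(n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # regression coefficients (double precision)
resid = y - X @ beta                             # residuals, e.g. for residual plots
dof = n - X.shape[1]
s2 = resid @ resid / dof                         # residual variance estimate
cov = s2 * np.linalg.inv(X.T @ X)                # covariance matrix of the estimates
t_stats = beta / np.sqrt(np.diag(cov))           # t-statistic for each coefficient
p_vals = 2.0 * stats.t.sf(np.abs(t_stats), dof)  # two-sided probability levels

for j in range(len(beta)):
    print(f"b[{j}] = {beta[j]:+.4f}   t = {t_stats[j]:+.2f}   p = {p_vals[j]:.3g}")
```

The double-precision point is not cosmetic: forming and inverting X'X squares the condition number of the problem, so single-precision normal equations of the kind RAPIER used can lose most of their significant digits on correlated regressors.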
Statistical Analysis of CFD Solutions From the Fifth AIAA Drag Prediction Workshop
NASA Technical Reports Server (NTRS)
Morrison, Joseph H.
2013-01-01
A graphical framework is used for statistical analysis of the results from an extensive N-version test of a collection of Reynolds-averaged Navier-Stokes computational fluid dynamics codes. The solutions were obtained by code developers and users from North America, Europe, Asia, and South America using a common grid sequence and multiple turbulence models for the June 2012 fifth Drag Prediction Workshop sponsored by the AIAA Applied Aerodynamics Technical Committee. The aerodynamic configuration for this workshop was the Common Research Model subsonic transport wing-body previously used for the 4th Drag Prediction Workshop. This work continues the statistical analysis begun in the earlier workshops and compares the results from the grid convergence study of the most recent workshop with previous workshops.
Digital Camera Control for Faster Inspection
NASA Technical Reports Server (NTRS)
Brown, Katharine; Siekierski, James D.; Mangieri, Mark L.; Dekome, Kent; Cobarruvias, John; Piplani, Perry J.; Busa, Joel
2009-01-01
Digital Camera Control Software (DCCS) is a computer program for controlling a boom and a boom-mounted camera used to inspect the external surface of a space shuttle in orbit around the Earth. Running in a laptop computer in the space-shuttle crew cabin, DCCS commands integrated displays and controls. By means of a simple one-button command, a crewmember can view low-resolution images to quickly spot problem areas and can then cause a rapid transition to high-resolution images. The crewmember can command that camera settings apply to a specific small area of interest within the field of view of the camera so as to maximize image quality within that area. DCCS also provides critical high-resolution images to a ground screening team, which analyzes the images to assess damage (if any); in so doing, DCCS enables the team to clear initially suspect areas more quickly than would otherwise be possible and further saves time by minimizing the probability of re-imaging of areas already inspected. On the basis of experience with a previous version (2.0) of the software, the present version (3.0) incorporates a number of advanced imaging features that optimize crewmember capability and efficiency.
NASA Technical Reports Server (NTRS)
Tikidjian, Raffi; Mackey, Ryan
2008-01-01
The DSN Array Simulator (wherein 'DSN' signifies NASA's Deep Space Network) is an updated version of software previously denoted the DSN Receive Array Technology Assessment Simulation. This software is used for computational modeling of a proposed DSN facility comprising user-defined arrays of antennas and transmitting and receiving equipment for microwave communication with spacecraft on interplanetary missions. The simulation includes variations in the spacecraft tracked and changes in communication demand for up to several decades of future operation. Such modeling is performed to estimate facility performance, evaluate requirements that govern facility design, and evaluate proposed improvements in hardware and/or software. The updated version of this software affords enhanced capability for characterizing facility performance against user-defined mission sets. The software includes a Monte Carlo simulation component that enables rapid generation of key mission-set metrics (e.g., numbers of links, data rates, and data volumes) and statistical distributions thereof as functions of time. The updated version also offers expanded capability for mixed-asset network modeling--for example, for running scenarios that involve user-definable mixtures of antennas having different diameters (in contradistinction to a fixed number of antennas having the same fixed diameter). The improved version also affords greater simulation fidelity, sufficient for validation by comparison with actual DSN operations and analytically predictable performance metrics.
Simulation of ultra-high energy photon propagation with PRESHOWER 2.0
NASA Astrophysics Data System (ADS)
Homola, P.; Engel, R.; Pysz, A.; Wilczyński, H.
2013-05-01
In this paper we describe a new release of the PRESHOWER program, a tool for Monte Carlo simulation of the propagation of ultra-high energy photons in the magnetic field of the Earth. The PRESHOWER program is designed to calculate magnetic pair production and bremsstrahlung and should be used together with other programs to simulate extensive air showers induced by photons. The main new features of the PRESHOWER code include a much faster algorithm in the procedures simulating gamma conversion and bremsstrahlung, an update of the geomagnetic field model, and a minor correction. The new simulation procedure increases the flexibility of the code so that it can also be applied to other magnetic field configurations such as, for example, those encountered in the vicinity of the sun or neutron stars. Program summary Program title: PRESHOWER 2.0 Catalogue identifier: ADWG_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWG_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3968 No. of bytes in distributed program, including test data, etc.: 37 198 Distribution format: tar.gz Programming language: C, FORTRAN 77. Computer: Intel-Pentium based PC. Operating system: Linux or Unix. RAM: < 100 kB Classification: 1.1. Does the new version supersede the previous version?: Yes Catalogue identifier of previous version: ADWG_v1_0 Journal reference of previous version: Comput. Phys. Comm. 173 (2005) 71 Nature of problem: Simulation of a cascade of particles initiated by a UHE photon in a magnetic field. Solution method: The primary photon is tracked until its conversion into an e+ e- pair. If conversion occurs, each individual particle in the resultant preshower is checked for either bremsstrahlung radiation (electrons) or secondary gamma conversion (photons). Reasons for new version: Slow and outdated algorithm in the old version (a significant speed-up is possible); extension of the program to allow simulations also for extraterrestrial magnetic field configurations (e.g. neutron stars) and very long path lengths. Summary of revisions: A veto algorithm was introduced in the gamma conversion and bremsstrahlung tracking procedures. The length of the tracking step is now variable along the track and depends on the probability of the process expected to occur. The new algorithm significantly reduces the number of tracking steps and speeds up the execution of the program. The geomagnetic field model has been updated to IGRF-11, allowing for interpolations up to the year 2015. The Numerical Recipes procedures used to calculate modified Bessel functions have been replaced with the open-source CERN routine DBSKA. One minor bug has been fixed. Restrictions: Gamma conversion into particles other than an electron pair is not considered. The spatial structure of the cascade is neglected. Additional comments: The following routines are supplied in the package: IGRF [1,2], DBSKA [3], ran2 [4]. Running time: 100 preshower events with primary energy 10^20 eV require about 200 s of CPU time on a 2.66 GHz processor; at an energy of 10^21 eV, about 600 s.
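The veto algorithm named in the revisions is the standard thinning method for sampling the first interaction point of a process whose rate lambda(s) varies along the track: propose steps from an exponential with a majorant rate lambda_max, then accept each candidate with probability lambda(s)/lambda_max and veto the rest. A generic Python sketch follows; the rate function is illustrative, not the actual magnetic pair-production rate.

```python
import math, random

random.seed(11)

def rate(s):
    """Illustrative position-dependent interaction rate along the track [1/m]."""
    return 0.8 * math.exp(-((s - 5.0) ** 2) / 2.0)

def first_event(rate, s_max, lam_max):
    """Veto (thinning) algorithm: sample the first event of an inhomogeneous
    Poisson process with rate(s) <= lam_max on [0, s_max]; None if no event."""
    s = 0.0
    while True:
        s += random.expovariate(lam_max)        # tentative step from the majorant rate
        if s > s_max:
            return None
        if random.random() < rate(s) / lam_max: # accept: a real event occurs here
            return s                            # otherwise the candidate is vetoed

samples = [first_event(rate, 10.0, 0.8) for _ in range(5)]
print(samples)
```

Because vetoed candidates cost only a cheap rate evaluation rather than a full fixed-length tracking step, the effective step length adapts to the local interaction probability, which is the source of the reported speed-up.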
CAMAC throughput of a new RISC-based data acquisition computer at the DIII-D tokamak
NASA Astrophysics Data System (ADS)
Vanderlaan, J. F.; Cummings, J. W.
1993-10-01
The amount of experimental data acquired per plasma discharge at DIII-D has continued to grow. The largest shot size in May 1991 was 49 Mbyte; in May 1992, 66 Mbyte; and in April 1993, 80 Mbyte. The increasing load has prompted the installation of a new Motorola 88100-based MODCOMP computer to supplement the existing core of three older MODCOMP data acquisition CPUs. New Kinetic Systems CAMAC serial highway driver hardware runs on the 88100 VME bus. The new operating system is the MODCOMP REAL/IX version of AT&T System V UNIX with real-time extensions and networking capabilities; future plans call for the installation of additional computers of this type for tokamak and neutral beam control functions. Experiences with the CAMAC hardware and software are chronicled, including observations of data throughput. The Enhanced Serial Highway crate controller is advertised as twice as fast as the previous crate controller, and increased computer I/O speeds are expected to raise data rates further.
Online mentalising investigated with functional MRI.
Kircher, Tilo; Blümel, Isabelle; Marjoram, Dominic; Lataster, Tineke; Krabbendam, Lydia; Weber, Jochen; van Os, Jim; Krach, Sören
2009-05-01
For successful interpersonal communication, inferring the intentions, goals or desires of others is highly advantageous. Increasingly, humans also interact with computers or robots. In this study, we sought to determine to what degree an interactive task, which involves receiving feedback from social partners that can be used to infer intent, engaged the medial prefrontal cortex, a region previously associated with Theory of Mind processes among others. Participants were scanned using fMRI as they played an adapted version of the Prisoner's Dilemma Game with alleged human and computer partners who were outside the scanner. The medial frontal cortex was activated when either the human or the computer partner was played, while the direct contrast revealed significantly stronger signal change during the human-human interaction. The results suggest a link between activity in the medial prefrontal cortex and the partner played in a mentalising task; this signal change was also present for the computer partner. Attributing agency or a will to non-human actors might be an innate human capacity that could confer an evolutionary advantage.
NASA Astrophysics Data System (ADS)
Lee, J. S.; Carena, M.; Ellis, J.; Pilaftsis, A.; Wagner, C. E. M.
2009-02-01
We describe the Fortran code CPsuperH2.0, which contains several improvements and extensions of its predecessor CPsuperH. It implements improved calculations of the Higgs-boson pole masses, notably a full treatment of the 4×4 neutral Higgs propagator matrix including the Goldstone boson and a more complete treatment of threshold effects in self-energies and Yukawa couplings, improved treatments of two-body Higgs decays, some important three-body decays, and two-loop Higgs-mediated contributions to electric dipole moments. CPsuperH2.0 also implements an integrated treatment of several B-meson observables, including the branching ratios of B→μμ, B→ττ, B→τν, B→Xsγ and the latter's CP-violating asymmetry ACP, and the supersymmetric contributions to the Bs,d0-B̄s,d0 mass differences. These additions make CPsuperH2.0 an attractive integrated tool for analyzing supersymmetric CP and flavour physics as well as searches for new physics at high-energy colliders such as the Tevatron, LHC and linear colliders. Program summary Program title: CPsuperH2.0 Catalogue identifier: ADSR_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSR_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 13 290 No. of bytes in distributed program, including test data, etc.: 89 540 Distribution format: tar.gz Programming language: Fortran 77 Computer: PC running under Linux and computers in a Unix environment Operating system: Linux RAM: 32 Mbytes Classification: 11.1 Catalogue identifier of the previous version: ADSR_v1_0 Journal reference of the previous version: Comput. Phys. Comm. 156 (2004) 283 Does the new version supersede the previous version?: Yes Nature of problem: The calculations of the mass spectrum, decay widths and branching ratios of the neutral and charged Higgs bosons in the Minimal Supersymmetric Standard Model with explicit CP violation have been improved. The program is based on recent renormalization-group-improved diagrammatic calculations that include dominant higher-order logarithmic and threshold corrections, b-quark Yukawa-coupling resummation effects and an improved treatment of Higgs-boson pole-mass shifts. The couplings of the Higgs bosons to the Standard Model gauge bosons and fermions, to their supersymmetric partners and all the trilinear and quartic Higgs-boson self-couplings are also calculated. The new implementations include a full treatment of the 4×4 (2×2) neutral (charged) Higgs propagator matrix together with the center-of-mass dependent Higgs-boson couplings to gluons and photons, two-loop Higgs-mediated contributions to electric dipole moments, and an integrated treatment of several B-meson observables. Solution method: One-dimensional numerical integration for several Higgs-decay modes, iterative treatment of the threshold corrections and Higgs-boson pole masses, and numerical diagonalization of the neutralino mass matrix. Reasons for new version: Mainly to provide a coherent numerical framework which consistently calculates observables for both low- and high-energy experiments. Summary of revisions: Improved treatment of Higgs-boson masses and propagators. Improved treatment of Higgs-boson couplings and decays. Higgs-mediated two-loop electric dipole moments. B-meson observables. Running time: Less than 0.1 seconds. The program may be obtained from http://www.hep.man.ac.uk/u/jslee/CPsuperH.html.
Rotary engine performance computer program (RCEMAP and RCEMAPPC): User's guide
NASA Technical Reports Server (NTRS)
Bartrand, Timothy A.; Willis, Edward A.
1993-01-01
This report is a user's guide for a computer code that simulates the performance of several rotary combustion engine configurations. It is intended to assist prospective users in getting started with RCEMAP and/or RCEMAPPC. RCEMAP (Rotary Combustion Engine performance MAP generating code) is the mainframe version, while RCEMAPPC is a simplified subset designed for the personal computer, or PC, environment. Both versions are based on an open, zero-dimensional combustion system model for the prediction of instantaneous pressure, temperature, chemical composition, and other in-chamber thermodynamic properties. Both versions predict overall engine performance and thermal characteristics, including bmep, bsfc, exhaust gas temperature, average material temperatures, and turbocharger operating conditions. Required inputs include engine geometry, materials, constants for use in the combustion heat release model, and turbomachinery maps. Illustrative examples and sample input files for both versions are included.
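The abstract does not give the heat-release formulation, but a zero-dimensional (single-zone) combustion model of this kind is commonly driven by an empirical burn-fraction curve. The sketch below illustrates that idea with a classic Wiebe function; the choice of function and every parameter value (theta0, dtheta, a, m, Q_total) are illustrative assumptions, not taken from RCEMAP.

```python
import math

def wiebe_burn_fraction(theta, theta0=-10.0, dtheta=50.0, a=5.0, m=2.0):
    """Cumulative mass fraction burned at crank angle theta (degrees).

    Classic Wiebe form: x = 1 - exp(-a * ((theta - theta0)/dtheta)**(m+1)),
    returning 0 before ignition at theta0.
    """
    if theta < theta0:
        return 0.0
    x = (theta - theta0) / dtheta
    return 1.0 - math.exp(-a * x ** (m + 1))

# Instantaneous heat-release rate by finite difference (J per degree),
# for a nominal 1 kJ total heat input.
Q_total = 1000.0
for theta in range(-20, 60, 10):
    dx = wiebe_burn_fraction(theta + 0.5) - wiebe_burn_fraction(theta - 0.5)
    print(f"theta={theta:4d} deg  burned={wiebe_burn_fraction(theta):.3f}  "
          f"dQ/dtheta={Q_total * dx:8.2f}")
```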
Darkhovskii, M B; Pletnev, I V; Tchougréeff, A L
2003-11-15
A computational method targeted to Werner-type complexes is developed on the basis of the quantum mechanical effective Hamiltonian crystal field (EHCF) methodology (previously proposed for describing the electronic structure of transition metal complexes) combined with the Gillespie-Kepert version of molecular mechanics (MM). It is a special version of the hybrid quantum/MM approach. The MM part is responsible for representing the whole molecule, including ligand atoms and the metal ion coordination sphere, but leaving out the effects of the d-shell. The quantum mechanical EHCF part is limited to the metal ion d-shell. The method reproduces with reasonable accuracy the geometry and spin states of Fe(II) complexes with monodentate and polydentate aromatic ligands with nitrogen donor atoms. In this setting, a single set of MM parameters is shown to be sufficient for handling all spin states of the complexes under consideration.
Supporting ontology adaptation and versioning based on a graph of relevance
NASA Astrophysics Data System (ADS)
Sassi, Najla; Jaziri, Wassim; Alharbi, Saad
2016-11-01
Ontologies have recently become a topic of interest in computer science since they are seen as semantic support for making data models explicit and enriching them, as well as for ensuring the interoperability of data. Moreover, supporting ontology adaptation becomes essential and extremely important, mainly when using ontologies in changing environments. An important issue when dealing with ontology adaptation is the management of several versions. Ontology versioning is a complex and multifaceted problem as it should take into account change management, version storage and access, consistency issues, etc. The purpose of this paper is to propose an approach and tool for ontology adaptation and versioning. A series of techniques are proposed to 'safely' evolve a given ontology and produce a new consistent version. The ontology versions are ordered in a graph according to their relevance. The relevance is computed based on four criteria: conceptualisation, usage frequency, abstraction and completeness. The techniques to carry out the versioning process are implemented in the Consistology tool, which has been developed to assist users in expressing adaptation requirements and managing ontology versions.
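As a rough illustration of how versions might be ordered by relevance, the following sketch aggregates the four stated criteria into a weighted score; the weights and per-version scores are hypothetical, since the paper's actual scoring formula is not reproduced here.

```python
# Hypothetical criterion scores in [0, 1] for each ontology version;
# names and weights are illustrative, not taken from the paper.
WEIGHTS = {"conceptualisation": 0.4, "usage_frequency": 0.3,
           "abstraction": 0.15, "completeness": 0.15}

versions = {
    "v1": {"conceptualisation": 0.6, "usage_frequency": 0.9,
           "abstraction": 0.5, "completeness": 0.7},
    "v2": {"conceptualisation": 0.8, "usage_frequency": 0.4,
           "abstraction": 0.6, "completeness": 0.9},
}

def relevance(scores):
    """Weighted aggregate of the four relevance criteria."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# Order versions in the graph from most to least relevant.
ranked = sorted(versions, key=lambda v: relevance(versions[v]), reverse=True)
print([(v, round(relevance(versions[v]), 3)) for v in ranked])
```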
A supertree pipeline for summarizing phylogenetic and taxonomic information for millions of species
Redelings, Benjamin D.
2017-01-01
We present a new supertree method that enables rapid estimation of a summary tree on the scale of millions of leaves. This supertree method summarizes a collection of input phylogenies and an input taxonomy. We introduce formal goals and criteria for such a supertree to satisfy in order to transparently and justifiably represent the input trees. In addition to producing a supertree, our method computes annotations that describe which groupings in the input trees support and conflict with each group in the supertree. We compare our supertree construction method to a previously published supertree construction method by assessing their performance on input trees used to construct the Open Tree of Life version 4, and find that our method increases the number of displayed input splits from 35,518 to 39,639 and decreases the number of conflicting input splits from 2,760 to 1,357. The new supertree method also improves on the previous supertree construction method in that it produces no unsupported branches and avoids unnecessary polytomies. This pipeline is currently used by the Open Tree of Life project to produce all of the versions of the project’s “synthetic tree” starting at version 5. This software pipeline is called “propinquity”. It relies heavily on “otcetera”—a set of C++ tools to perform most of the steps of the pipeline. All of the components are free software and are available on GitHub. PMID:28265520
Salomon-Ferrer, Romelia; Götz, Andreas W; Poole, Duncan; Le Grand, Scott; Walker, Ross C
2013-09-10
We present an implementation of explicit solvent all atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA-enabled GPUs. First released publicly in April 2010 as part of version 11 of the AMBER MD package and further improved and optimized over the last two years, this implementation supports the three most widely used statistical mechanical ensembles (NVE, NVT, and NPT), uses particle mesh Ewald (PME) for the long-range electrostatics, and runs entirely on CUDA-enabled NVIDIA graphics processing units (GPUs), providing results that are statistically indistinguishable from the traditional CPU version of the software and with performance that exceeds that achievable by the CPU version of AMBER software running on all conventional CPU-based clusters and supercomputers. We briefly discuss three different precision models developed specifically for this work (SPDP, SPFP, and DPDP) and highlight the technical details of the approach as it extends beyond previously reported work [Götz et al., J. Chem. Theory Comput. 2012, DOI: 10.1021/ct200909j; Le Grand et al., Comp. Phys. Comm. 2013, DOI: 10.1016/j.cpc.2012.09.022]. We highlight the substantial improvements in performance that are seen over traditional CPU-only machines and provide validation of our implementation and precision models. We also provide evidence supporting our decision to deprecate the previously described fully single precision (SPSP) model from the latest release of the AMBER software package.
Simulation of the present-day climate with the climate model INMCM5
NASA Astrophysics Data System (ADS)
Volodin, E. M.; Mortikov, E. V.; Kostrykin, S. V.; Galin, V. Ya.; Lykossov, V. N.; Gritsun, A. S.; Diansky, N. A.; Gusev, A. V.; Iakovlev, N. G.
2017-12-01
In this paper we present the fifth generation of the INMCM climate model that is being developed at the Institute of Numerical Mathematics of the Russian Academy of Sciences (INMCM5). The most important changes with respect to the previous version (INMCM4) were made in the atmospheric component of the model. Its vertical resolution was increased to resolve the upper stratosphere and the lower mesosphere. A more sophisticated parameterization of condensation and cloudiness formation was introduced as well. An aerosol module was incorporated into the model. The upgraded oceanic component has a modified dynamical core optimized for better implementation on parallel computers and has two times higher resolution in both horizontal directions. Analysis of the present-day climatology of the INMCM5 (based on the data of historical run for 1979-2005) shows moderate improvements in reproduction of basic circulation characteristics with respect to the previous version. Biases in the near-surface temperature and precipitation are slightly reduced compared with INMCM4, as are biases in oceanic temperature, salinity and sea surface height. The most notable improvement over INMCM4 is the capability of the new model to reproduce the equatorial stratospheric quasi-biennial oscillation and statistics of sudden stratospheric warmings.
Application of Aeroelastic Solvers Based on Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr.; Srivastava, Rakesh
1998-01-01
A pre-release version of the Navier-Stokes solver (TURBO) was obtained from MSU. Along with Dr. Milind Bakhle of the University of Toledo, subroutines for aeroelastic analysis were developed and added to the TURBO code to develop versions 1 and 2 of the TURBO-AE code. For a specified mode shape, frequency, and inter-blade phase angle, the code calculates the work done by the fluid on the rotor for a prescribed sinusoidal motion. Positive work on the rotor indicates instability of the rotor. Version 1 of the code calculates the work for in-phase blade motions only. In version 2 of the code, the capability for analyzing all possible inter-blade phase angles was added. Version 2 of the TURBO-AE code was validated and delivered to NASA and the industry partners of the AST project. The capabilities and features of the code are summarized in Refs. [1] & [2]. To release version 2 of TURBO-AE, a workshop was organized at NASA Lewis by Dr. Srivastava and Dr. M. A. Bakhle, both of the University of Toledo, in October of 1996 for the industry partners of NASA Lewis. The workshop provided the potential users of TURBO-AE all the relevant information required in preparing the input data, executing the code, interpreting the results, and benchmarking the code on their computer systems. After the code was delivered to the industry partners, user support was also provided. A new version of the Navier-Stokes solver (TURBO) was later released by MSU. This version had significant changes and upgrades over the previous version. This new version was merged with the TURBO-AE code. Also, new boundary conditions for 3-D unsteady non-reflecting boundaries were developed by researchers from UTRC, Ref. [3]. Time was spent on understanding, familiarizing, executing, and implementing the new boundary conditions into the TURBO-AE code. Work was started on the phase-lagged (time-shifted) boundary condition version (version 4) of the code. This will allow users to calculate non-zero inter-blade phase angles using only one blade passage for analysis.
Doing It Right: 366 answers to computing questions you didn't know you had
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herring, Stuart Davis
Slides include information on: history: version control; version control: branches; version control: Git; releases; requirements; readability; readability: control flow; global variables; architecture; architecture: redundancy; processes; input/output; Unix; etcetera.
Program CALIB. [for computing noise levels for helicopter version of S-191 filter wheel spectrometer
NASA Technical Reports Server (NTRS)
Mendlowitz, M. A.
1973-01-01
The program CALIB, which was written to compute noise levels and average signal levels of aperture radiance for the helicopter version of the S-191 filter wheel spectrometer, is described. The program functions and input description are included along with a compiled program listing.
Optics Program Modified for Multithreaded Parallel Computing
NASA Technical Reports Server (NTRS)
Lou, John; Bedding, Dave; Basinger, Scott
2006-01-01
A powerful high-performance computer program for simulating and analyzing adaptive and controlled optical systems has been developed by modifying the serial version of the Modeling and Analysis for Controlled Optical Systems (MACOS) program to impart capabilities for multithreaded parallel processing on computing systems ranging from supercomputers down to Symmetric Multiprocessing (SMP) personal computers. The modifications included the incorporation of OpenMP, a portable and widely supported application programming interface that can be used to explicitly add multithreaded parallelism to an application program under a shared-memory programming model. OpenMP was applied to parallelize ray-tracing calculations, one of the major computing components in MACOS. Multithreading is also used in the diffraction propagation of light in MACOS based on pthreads [POSIX Threads (where "POSIX" signifies a portable operating system for UNIX)]. In tests of the parallelized version of MACOS, the speedup in ray-tracing calculations was found to be linear, or proportional to the number of processors, while the speedup in diffraction calculations ranged from 50 to 60 percent, depending on the type and number of processors. The parallelized version of MACOS is portable, and, to the user, its interface is basically the same as that of the original serial version of MACOS.
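The parallel structure described above — independent per-ray work under a shared-memory model — can be sketched in Python, even though MACOS itself uses OpenMP and pthreads in compiled code. Everything in this sketch (the trace_ray placeholder, the ray grid) is illustrative and is not MACOS code.

```python
from concurrent.futures import ProcessPoolExecutor

def trace_ray(ray):
    """Placeholder for one independent ray trace through an optical system."""
    x, y = ray
    # ...propagate the ray through the optical prescription here...
    return x * x + y * y  # dummy "optical path" result

# A grid of rays across the pupil; each is traced independently.
rays = [(i * 0.01, j * 0.01) for i in range(100) for j in range(100)]

if __name__ == "__main__":
    # Because rays share no state, the loop parallelizes with no coordination,
    # which is why near-linear speedup with processor count is achievable.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(trace_ray, rays, chunksize=500))
    print(len(results), max(results))
```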
The quantum computer game: citizen science
NASA Astrophysics Data System (ADS)
Damgaard, Sidse; Mølmer, Klaus; Sherson, Jacob
2013-05-01
Progress in the field of quantum computation is hampered by daunting technical challenges. Here we present an alternative approach to solving these by enlisting the aid of computer players around the world. We have previously examined a quantum computation architecture involving ultracold atoms in optical lattices and strongly focused tweezers of light. In The Quantum Computer Game (see http://www.scienceathome.org/), we have encapsulated the time-dependent Schrödinger equation for the problem in a graphical user interface allowing for easy user input. Players can then search the parameter space with real-time graphical feedback in a game context with a global high score that rewards short gate times and robustness to experimental errors. The game, which is still in a demo version, has so far been tried by several hundred players. Extensions of the approach to other models such as Gross-Pitaevskii and Bose-Hubbard are currently under development. The game has also been incorporated into science education at high-school and university level as an alternative method for teaching quantum mechanics. Initial quantitative evaluation results are very positive. AU Ideas Center for Community Driven Research, CODER.
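To suggest what such a game engine integrates behind its interface, here is a minimal split-step Fourier propagator for the 1-D time-dependent Schrödinger equation with a moving Gaussian tweezer potential; the grid, units, potential shape, and tweezer trajectory are all illustrative assumptions, not the game's actual model.

```python
import numpy as np

# Minimal 1-D split-step Fourier integrator for i dpsi/dt = (-(1/2) d2/dx2 + V) psi
# in dimensionless units (hbar = m = 1).
N, L, dt, steps = 512, 20.0, 0.001, 2000
dx = L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

def tweezer(center, depth=10.0, width=1.0):
    """Gaussian optical-tweezer potential centered at `center`."""
    return -depth * np.exp(-((x - center) / width) ** 2)

psi = np.exp(-(x + 3.0) ** 2)              # atom starts in a displaced Gaussian
psi /= np.sqrt((np.abs(psi) ** 2).sum() * dx)

for n in range(steps):
    center = -3.0 + 6.0 * n / steps        # "player-controlled" tweezer path
    half_kick = np.exp(-0.5j * dt * tweezer(center))
    psi = half_kick * psi                  # half potential kick (position space)
    psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))  # kinetic drift
    psi = half_kick * psi                  # second half kick

print("norm after transport:", (np.abs(psi) ** 2).sum() * dx)  # stays ~1 (unitary)
```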
Miao, Yipu; Merz, Kenneth M
2015-04-14
We present an efficient implementation of ab initio self-consistent field (SCF) energy and gradient calculations that run on Compute Unified Device Architecture (CUDA) enabled graphical processing units (GPUs) using recurrence relations. We first discuss the machine-generated code that calculates the electron-repulsion integrals (ERIs) for different ERI types. Next we describe the porting of the SCF gradient calculation to GPUs, which results in an acceleration of the computation of the first-order derivative of the ERIs. However, only s, p, and d ERIs and s and p derivatives could be executed simultaneously on GPUs using the current version of CUDA and generation of NVidia GPUs using a previously described algorithm [Miao and Merz J. Chem. Theory Comput. 2013, 9, 965-976]. Hence, we developed an algorithm to compute f type ERIs and d type ERI derivatives on GPUs. Our benchmarks show that GPU-enabled ERI and ERI derivative computation yielded speedups of 10-18 times relative to traditional CPU execution. An accuracy analysis using double-precision calculations demonstrates that the overall accuracy is satisfactory for most applications.
Tabletop computed lighting for practical digital photography.
Mohan, Ankit; Bailey, Reynold; Waite, Jonathan; Tumblin, Jack; Grimm, Cindy; Bodenheimer, Bobby
2007-01-01
We apply simplified image-based lighting methods to reduce the equipment, cost, time, and specialized skills required for high-quality photographic lighting of desktop-sized static objects such as museum artifacts. We place the object and a computer-steered moving-head spotlight inside a simple foam-core enclosure and use a camera to record photos as the light scans the box interior. Optimization, guided by interactive user sketching, selects a small set of these photos whose weighted sum best matches the user-defined target sketch. Unlike previous image-based relighting efforts, our method requires only a single area light source, yet it can achieve high-resolution light positioning to avoid multiple sharp shadows. A reduced version uses only a handheld light and may be suitable for battery-powered field photography equipment that fits into a backpack.
NASA Technical Reports Server (NTRS)
Stahara, S. S.
1984-01-01
An investigation was carried out to complete the preliminary development of a combined perturbation/optimization procedure and associated computational code for designing optimized blade-to-blade profiles of turbomachinery blades. The overall purpose of the procedures developed is to provide demonstration of a rapid nonlinear perturbation method for minimizing the computational requirements associated with parametric design studies of turbomachinery flows. The method combines the multiple parameter nonlinear perturbation method, successfully developed in previous phases of this study, with the NASA TSONIC blade-to-blade turbomachinery flow solver, and the COPES-CONMIN optimization procedure into a user's code for designing optimized blade-to-blade surface profiles of turbomachinery blades. Results of several design applications and a documented version of the code together with a user's manual are provided.
SITE CHARACTERIZATION LIBRARY VERSION 3.0
The Site Characterization Library is a CD that provides a centralized, field-portable source for site characterization information. Version 3 of the Site Characterization Library contains additional (from earlier versions) electronic documents and computer programs related to th...
Surrogate oracles, generalized dependency and simpler models
NASA Technical Reports Server (NTRS)
Wilson, Larry
1990-01-01
Software reliability models require the sequence of interfailure times from the debugging process as input. It was previously illustrated that using data from replicated debugging could greatly improve reliability predictions. However, inexpensive replication of the debugging process requires the existence of a cheap, fast error detector. Laboratory experiments can be designed around a gold version which is used as an oracle or around an n-version error detector. Unfortunately, software developers cannot be expected to have an oracle or to bear the expense of n versions. A generic technique is being investigated for approximating replicated data by using the partially debugged software as a difference detector. It is believed that the failure rate of each fault has significant dependence on the presence or absence of other faults. Thus, in order to discuss a failure rate for a known fault, the presence or absence of each of the other known faults needs to be specified. Also of interest are simpler models which use shorter input sequences without sacrificing accuracy; in fact, a possible gain in performance is conjectured. To investigate these propositions, NASA computers running LIC (RTI) versions are used to generate data. This data will be used to label the debugging graph associated with each version. These labeled graphs will be used to test the utility of a surrogate oracle, to analyze the dependent nature of fault failure rates and to explore the feasibility of reliability models which use the data of only the most recent failures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hindmarsh, A.C.; Sloan, L.J.; Dubois, P.F.
1978-12-01
This report supersedes the original version, dated June 1976. It describes four versions of a pair of subroutines for solving N x N systems of linear algebraic equations. In each case, the first routine, DEC, performs an LU decomposition of the matrix with partial pivoting, and the second, SOL, computes the solution vector by back-substitution. The first version is in Fortran IV, and is derived from routines DECOMP and SOLVE written by C.B. Moler. The second is a version for the CDC 7600 computer using STACKLIB. The third is a hand-coded (Compass) version for the 7600. The fourth is a vectorized version for the CDC STAR, renamed DECST and SOLST. Comparative tests on these routines are also described. The Compass version is faster than the others on the 7600 by factors of up to 5. The major revisions to the original report, and to the subroutines described, are an updated description of the availability of each version of DEC/SOL; correction of some errors in the Compass version, as altered so as to be compatible with FTN; and a new STAR version, which runs much faster than the earlier one. The standard Fortran version, the Fortran/STACKLIB version, and the object code generated from the Compass version and available in STACKLIB have not been changed.
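A minimal Python/NumPy sketch of the DEC/SOL division of labor — LU decomposition with partial pivoting followed by forward elimination and back-substitution — is given below; it mirrors the structure described in the report but is not the distributed Fortran code.

```python
import numpy as np

def dec(A):
    """LU-factor A with partial pivoting (a sketch of DEC's role).

    Returns the packed LU matrix (L below the diagonal, U on and above it)
    and the pivot index vector recording the row permutation.
    """
    A = A.astype(float).copy()
    n = A.shape[0]
    ip = np.arange(n)
    for k in range(n - 1):
        p = k + np.argmax(abs(A[k:, k]))           # pivot row
        if p != k:
            A[[k, p]] = A[[p, k]]
            ip[[k, p]] = ip[[p, k]]
        A[k + 1:, k] /= A[k, k]                    # multipliers, stored in L part
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    return A, ip

def sol(LU, ip, b):
    """Solve LUx = Pb by forward elimination and back-substitution (SOL's role)."""
    n = LU.shape[0]
    x = b[ip].astype(float)
    for k in range(1, n):                          # forward: Ly = Pb
        x[k] -= LU[k, :k] @ x[:k]
    for k in range(n - 1, -1, -1):                 # backward: Ux = y
        x[k] = (x[k] - LU[k, k + 1:] @ x[k + 1:]) / LU[k, k]
    return x

A = np.array([[2.0, 1.0, 1.0], [4.0, -6.0, 0.0], [-2.0, 7.0, 2.0]])
b = np.array([5.0, -2.0, 9.0])
LU, ip = dec(A)
print(sol(LU, ip, b), np.linalg.solve(A, b))       # the two should agree
```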
Computer Graphics Research Laboratory Quarterly Progress Report Number 49, July-September 1993
1993-11-22
Table-of-contents fragments: Texture Sampling and Strength Guided Motion (Jeffry S. Nimeroff); Radiosity (Min-Zhi Shao); Blended Shape Primitives (Douglas DeCarlo). Topics include ... placement, extensions of radiosity rendering, and a discussion of blended shape primitives and their applications in computer vision and computer ... Radiosity: an improved version of the radiosity renderer is included; this version uses a fast over-relaxation progressive refinement algorithm
The report is a reference manual for RASSMlT Version 2.1, a computer program that was developed to simulate and aid in the design of sub-slab depressurization systems used for indoor radon mitigation. The program was designed to run on DOS-compatible personal computers to ensure ...
USER'S GUIDE TO THE PERSONAL COMPUTER VERSION OF THE BIOGENIC EMISSIONS INVENTORY SYSTEM (PC-BEIS2)
The document is a user's guide for an updated Personal Computer version of the Biogenic Emissions Inventory System (PC-BEIS2), allowing users to estimate hourly emissions of biogenic volatile organic compounds (BVOCs) and soil nitrogen oxide emissions for any county in the contig...
1990-09-01
1988). Current versions of the ADATS have CATE systems installed, but the software is still under development by the radar manufacturer, Contraves Italiana, a subcontractor to Martin Marietta (USA). Contraves Italiana will deliver the final version of the software to Martin Marietta in 1991. Until then
A users' guide to the trace contaminant control simulation computer program
NASA Technical Reports Server (NTRS)
Perry, J. L.
1994-01-01
The Trace Contaminant Control Simulation computer program is a tool for assessing the performance of various trace contaminant control technologies for removing trace chemical contamination from a spacecraft cabin atmosphere. The results obtained from the program can be useful in assessing different technology combinations, system sizing, system location with respect to other life support systems, and the overall life cycle economics of a trace contaminant control system. The user's manual is extracted in its entirety from NASA TM-108409 to provide a stand-alone reference for using any version of the program. The first publication of the manual as part of TM-108409 also included a detailed listing of version 8.0 of the program. As changes to the code were necessary, it became apparent that the user's manual should be separate from the computer code documentation and be general enough to provide guidance in using any version of the program. Provided in the guide are tips for input file preparation, general program execution, and output file manipulation. Information concerning source code listings of the latest version of the computer program may be obtained by contacting the author.
Super Cooled Large Droplet Analysis of Several Geometries Using LEWICE3D Version 3
NASA Technical Reports Server (NTRS)
Bidwell, Colin S.
2011-01-01
Super Cooled Large Droplet (SLD) collection efficiency calculations were performed for several geometries using the LEWICE3D Version 3 software. The computations were performed using the NASA Glenn Research Center SLD splashing model which has been incorporated into the LEWICE3D Version 3 software. Comparisons to experiment were made where available. The geometries included two straight wings, a swept 64A008 wing tip, two high lift geometries, and the generic commercial transport DLR-F4 wing body configuration. In general the LEWICE3D Version 3 computations compared well with the 2D LEWICE 3.2.2 results and with experimental data where available.
Berlin, Konstantin; Longhini, Andrew; Dayie, T. Kwaku; Fushman, David
2013-01-01
To facilitate rigorous analysis of molecular motions in proteins, DNA, and RNA, we present a new version of ROTDIF, a program for determining the overall rotational diffusion tensor from single- or multiple-field Nuclear Magnetic Resonance (NMR) relaxation data. We introduce four major features that expand the program’s versatility and usability. The first feature is the ability to analyze, separately or together, 13C and/or 15N relaxation data collected at a single field or multiple fields. A significant improvement in accuracy compared to direct analysis of R2/R1 ratios, especially critical for analysis of 13C relaxation data, is achieved by subtracting high-frequency contributions to relaxation rates. The second new feature is an improved method for computing the rotational diffusion tensor in the presence of biased errors, such as large conformational exchange contributions, that significantly enhances the accuracy of the computation. The third new feature is the integration of the domain alignment and docking module for relaxation-based structure determination of multi-domain systems. Finally, to improve accessibility to all the program features, we introduced a graphical user interface (GUI) that simplifies and speeds up the analysis of the data. Written in Java, the new ROTDIF can run on virtually any computer platform. In addition, the new ROTDIF achieves an order of magnitude speedup over the previous version by implementing a more efficient deterministic minimization algorithm. We not only demonstrate the improvement in accuracy and speed of the new algorithm for synthetic and experimental 13C and 15N relaxation data for several proteins and nucleic acids, but also show that the careful analysis required especially for characterizing RNA dynamics allowed us to uncover subtle conformational changes in RNA as a function of temperature that were opaque to previous analysis. PMID:24170368
MPLNET V3 Cloud and Planetary Boundary Layer Detection
NASA Technical Reports Server (NTRS)
Lewis, Jasper R.; Welton, Ellsworth J.; Campbell, James R.; Haftings, Phillip C.
2016-01-01
The NASA Micropulse Lidar Network Version 3 algorithms for planetary boundary layer and cloud detection are described, and differences relative to the previous Version 2 algorithms are highlighted. A year of data from the Goddard Space Flight Center site in Greenbelt, MD, spanning diurnal and seasonal trends, is used to demonstrate the results. Both the planetary boundary layer and cloud algorithms show significant improvement over the previous version.
Development of a GPU Compatible Version of the Fast Radiation Code RRTMG
NASA Astrophysics Data System (ADS)
Iacono, M. J.; Mlawer, E. J.; Berthiaume, D.; Cady-Pereira, K. E.; Suarez, M.; Oreopoulos, L.; Lee, D.
2012-12-01
The absorption of solar radiation and emission/absorption of thermal radiation are crucial components of the physics that drive Earth's climate and weather. Therefore, accurate radiative transfer calculations are necessary for realistic climate and weather simulations. Efficient radiation codes have been developed for this purpose, but their accuracy requirements still necessitate that as much as 30% of the computational time of a GCM is spent computing radiative fluxes and heating rates. The overall computational expense constitutes a limitation on a GCM's predictive ability if it becomes an impediment to adding new physics to or increasing the spatial and/or vertical resolution of the model. The emergence of Graphics Processing Unit (GPU) technology, which will allow the parallel computation of multiple independent radiative calculations in a GCM, will lead to a fundamental change in the competition between accuracy and speed. Processing time previously consumed by radiative transfer will now be available for the modeling of other processes, such as physics parameterizations, without any sacrifice in the accuracy of the radiative transfer. Furthermore, fast radiation calculations can be performed much more frequently and will allow the modeling of radiative effects of rapid changes in the atmosphere. The fast radiation code RRTMG, developed at Atmospheric and Environmental Research (AER), is utilized operationally in many dynamical models throughout the world. We will present the results from the first stage of an effort to create a version of the RRTMG radiation code designed to run efficiently in a GPU environment. This effort will focus on the RRTMG implementation in GEOS-5. RRTMG has an internal pseudo-spectral vector of length of order 100 that, when combined with the much greater length of the global horizontal grid vector from which the radiation code is called in GEOS-5, makes RRTMG/GEOS-5 particularly suited to achieving a significant speed improvement through GPU technology. This large number of independent cases will allow us to take full advantage of the computational power of the latest GPUs, ensuring that all thread cores in the GPU remain active, a key criterion for obtaining significant speedup. The CUDA (Compute Unified Device Architecture) Fortran compiler developed by PGI and Nvidia will allow us to construct this parallel implementation on the GPU while remaining in the Fortran language. This implementation will scale very well across various CUDA-supported GPUs such as the recently released Fermi Nvidia cards. We will present the computational speed improvements of the GPU-compatible code relative to the standard CPU-based RRTMG with respect to a very large and diverse suite of atmospheric profiles. This suite will also be utilized to demonstrate the minimal impact of the code restructuring on the accuracy of radiation calculations. The GPU-compatible version of RRTMG will be directly applicable to future versions of GEOS-5, but it is also likely to provide significant associated benefits for other GCMs that employ RRTMG.
1983-09-01
GENERAL ELECTROMAGNETIC MODEL FOR THE ANALYSIS OF COMPLEX SYSTEMS (GEMACS) Computer Code Documentation (Version 3). The BDM Corporation. Final technical report, February 81 - July 83. ... the t1 and t2 directions on the source patch. METHOD: The electric field at a segment observation point due to the source patch j is given by ...
TEMPEST: A computer code for three-dimensional analysis of transient fluid dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fort, J.A.
TEMPEST (Transient Energy Momentum and Pressure Equations Solutions in Three dimensions) is a powerful tool for solving engineering problems in nuclear energy, waste processing, chemical processing, and environmental restoration because it analyzes and illustrates 3-D time-dependent computational fluid dynamics and heat transfer analysis. It is a family of codes with two primary versions, an N-Version (available to the public) and a T-Version (not currently available to the public). This handout discusses its capabilities, applications, numerical algorithms, development status, and availability and assistance.
DIALIGN P: fast pair-wise and multiple sequence alignment using parallel processors.
Schmollinger, Martin; Nieselt, Kay; Kaufmann, Michael; Morgenstern, Burkhard
2004-09-09
Parallel computing is frequently used to speed up computationally expensive tasks in Bioinformatics. Herein, a parallel version of the multi-alignment program DIALIGN is introduced. We propose two ways of dividing the program into independent sub-routines that can be run on different processors: (a) pair-wise sequence alignments that are used as a first step to multiple alignment account for most of the CPU time in DIALIGN. Since alignments of different sequence pairs are completely independent of each other, they can be distributed to multiple processors without any effect on the resulting output alignments. (b) For alignments of large genomic sequences, we use a heuristics by splitting up sequences into sub-sequences based on a previously introduced anchored alignment procedure. For our test sequences, this combined approach reduces the program running time of DIALIGN by up to 97%. By distributing sub-routines to multiple processors, the running time of DIALIGN can be crucially improved. With these improvements, it is possible to apply the program in large-scale genomics and proteomics projects that were previously beyond its scope.
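Strategy (a) above — farming out the independent pairwise alignments — can be sketched as follows; the align_pair scoring is a toy placeholder standing in for a real DIALIGN pairwise alignment.

```python
from itertools import combinations
from multiprocessing import Pool

def align_pair(pair):
    """Stand-in for one pairwise alignment; returns a toy similarity score.

    Each sequence pair is independent, so distributing pairs across
    processors changes nothing in the pairwise results that the
    multiple alignment is later assembled from.
    """
    (name_a, seq_a), (name_b, seq_b) = pair
    score = sum(a == b for a, b in zip(seq_a, seq_b))  # placeholder scoring
    return name_a, name_b, score

sequences = {"s1": "ACGTACGT", "s2": "ACGGACGA", "s3": "TCGTACGT"}

if __name__ == "__main__":
    pairs = list(combinations(sequences.items(), 2))
    with Pool() as pool:                  # one alignment task per sequence pair
        for a, b, score in pool.map(align_pair, pairs):
            print(f"{a}-{b}: {score}")
```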
CFD Based Computations of Flexible Helicopter Blades for Stability Analysis
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.
2011-01-01
As a collaborative effort among government aerospace research laboratories, an advanced version of a widely used computational fluid dynamics code, OVERFLOW, was recently released. This latest version includes additions to model flexible rotating multiple blades. In this paper, the OVERFLOW code is applied to improve the accuracy of airload computations from the linear lifting line theory that uses displacements from a beam model. Data transfers required at every revolution are managed through a Unix-based script that runs jobs on large super-cluster computers. Results are demonstrated for the 4-bladed UH-60A helicopter. Deviations of computed data from flight data are evaluated. Fourier analysis post-processing that is suitable for aeroelastic stability computations is performed.
NASA Technical Reports Server (NTRS)
Carlson, H. W.
1994-01-01
This code was developed to aid design engineers in the selection and evaluation of aerodynamically efficient wing-canard and wing-horizontal-tail configurations that may employ simple hinged-flap systems. Rapid estimates of the longitudinal aerodynamic characteristics of conceptual airplane lifting surface arrangements are provided. The method is particularly well suited to configurations which, because of high speed flight requirements, must employ thin wings with highly swept leading edges. The code is applicable to wings with either sharp or rounded leading edges. The code provides theoretical pressure distributions over the wing, the canard or horizontal tail, and the deflected flap surfaces as well as estimates of the wing lift, drag, and pitching moments which account for attainable leading edge thrust and leading edge separation vortex forces. The wing planform information is specified by a series of leading edge and trailing edge breakpoints for a right hand wing panel. Up to 21 pairs of coordinates may be used to describe both the leading edge and the trailing edge. The code has been written to accommodate 2000 right hand panel elements, but can easily be modified to accommodate a larger or smaller number of elements depending on the capacity of the target computer platform. The code provides solutions for wing surfaces composed of all possible combinations of leading edge and trailing edge flap settings provided by the original deflection multipliers and by the flap deflection multipliers. Up to 25 pairs of leading edge and trailing edge flap deflection schedules may thus be treated simultaneously. The code also provides for an improved accounting of hinge-line singularities in determination of wing forces and moments. To determine lifting surface perturbation velocity distributions, the code provides for a maximum of 70 iterations. The program is constructed so that successive runs may be made with a given code entry. To make additional runs, it is necessary only to add an identification record and the namelist data that are to be changed from the previous run. This code was originally developed in 1989 in FORTRAN V on a CDC 6000 computer system, and was later ported to an MS-DOS environment. Both versions are available from COSMIC. There are only a few differences between the PC version (LAR-14458) and CDC version (LAR-14178) of AERO2S distributed by COSMIC. The CDC version has one main source code file while the PC version has two files which are easier to edit and compile on a PC. The PC version does not require a FORTRAN compiler which supports NAMELIST because a special INPUT subroutine has been added. The CDC version includes two MODIFY decks which can be used to improve the code and prevent the possibility of some infrequently occurring errors while PC-version users will have to make these code changes manually. The PC version includes an executable which was generated with the Ryan McFarland/FORTRAN compiler and requires 253K RAM and an 80x87 math co-processor. Using this executable, the sample case requires about four hours to execute on an 8MHz AT-class microcomputer with a co-processor. The source code conforms to the FORTRAN 77 standard except that it uses variables longer than six characters. With two minor modifications, the PC version should be portable to any computer with a FORTRAN compiler and sufficient memory. The CDC version of AERO2S is available in CDC NOS Internal format on a 9-track 1600 BPI magnetic tape. 
The PC version is available on a set of two 5.25 inch 360K MS-DOS format diskettes. IBM AT is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. CDC is a registered trademark of Control Data Corporation. NOS is a trademark of Control Data Corporation.
Choice of Human-Computer Interaction Mode in Stroke Rehabilitation.
Mousavi Hondori, Hossein; Khademi, Maryam; Dodakian, Lucy; McKenzie, Alison; Lopes, Cristina V; Cramer, Steven C
2016-03-01
Advances in technology are providing new forms of human-computer interaction. The current study examined one form of human-computer interaction, augmented reality (AR), whereby subjects train in the real-world workspace with virtual objects projected by the computer. Motor performances were compared with those obtained while subjects used a traditional human-computer interaction, that is, a personal computer (PC) with a mouse. Patients used goal-directed arm movements to play AR and PC versions of the Fruit Ninja video game. The 2 versions required the same arm movements to control the game but had different cognitive demands. With AR, the game was projected onto the desktop, where subjects viewed the game plus their arm movements simultaneously, in the same visual coordinate space. In the PC version, subjects used the same arm movements but viewed the game by looking up at a computer monitor. Among 18 patients with chronic hemiparesis after stroke, the AR game was associated with 21% higher game scores (P = .0001), 19% faster reaching times (P = .0001), and 15% less movement variability (P = .0068), as compared to the PC game. Correlations between game score and arm motor status were stronger with the AR version. Motor performances during the AR game were superior to those during the PC game. This result is due in part to the greater cognitive demands imposed by the PC game, a feature problematic for some patients but clinically useful for others. Mode of human-computer interface influences rehabilitation therapy demands and can be individualized for patients.
ERIC Educational Resources Information Center
Kim, Scott Sungki
2013-01-01
The present research study investigated the effects of 8 versions of a computer-based vocabulary learning program on receptive and productive knowledge levels of college students. The participants were 106 male and 103 female Korean EFL students from Kyungsung University and Kwandong University in Korea. Students who participated in versions of…
NASA Astrophysics Data System (ADS)
Simmons, Nathan; Myers, Steve
2017-04-01
We continue to develop more advanced models of Earth's global seismic structure with specific focus on improving predictive capabilities for future seismic events. Our most recent version of the model combines high-quality P and S wave body-wave travel times and surface-wave group and phase velocities into a joint (simultaneous) inversion process to tomographically image Earth's crust and mantle. The new model adds anisotropy (known as vertical transverse isotropy), which is necessitated by the addition of surface waves to the tomographic data set. Like previous versions of the model, the new model consists of 59 surfaces and 1.6 million model nodes from the surface to the core-mantle boundary, overlaying a 1-D outer and inner core model. The model architecture is aspherical, and we directly incorporate Earth's expected hydrostatic shape (ellipticity and mantle stretching). We also explicitly honor surface undulations, including the Moho, several internal crustal units, and the upper mantle transition zone undulations, as predicted by previous studies. The explicit Earth model design allows for accurate travel time computation using our unique 3-D ray tracing algorithms, capable of 3-D ray tracing more than 20 distinct seismic phases including crustal, regional, teleseismic, and core phases. Thus, we can now incorporate certain secondary (and sometimes exotic) phases into source location determination and other analyses. New work on model uncertainty quantification assesses the error covariance of the model; when completed, this will enable calculation of path-specific estimates of uncertainty for travel times computed using our previous model (LLNL-G3D-JPS), which is available to the monitoring and broader research community, and we encourage external evaluation and validation. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Allowable residual contamination levels of radionuclides in soil from pathway analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nyquist, J.E.; Baes, C.F. III
1987-01-01
The uncertainty regarding radionuclide distributions among Remedial Action Program (RAP) sites and long-term decommissioning and closure options for these sites requires a flexible approach capable of handling different levels of contamination, dose limits, and closure scenarios. We identified a commercially available pathway analysis model, DECOM, which had been used previously in support of remedial activities involving contaminated soil at the Savannah River Plant. The DECOM computer code, which estimates concentrations of radionuclides uniformly distributed in soil that correspond to an annual effective dose equivalent, is written in BASIC and runs on an IBM PC or compatible microcomputer. We obtained the latest version of DECOM and modified it to make it more user friendly and applicable to the Oak Ridge National Laboratory (ORNL) RAP. Some modifications involved changes in default parameters or changes in models based on approaches used by the EPA in regulating remedial actions for hazardous substances. We created a version of DECOM as a LOTUS spreadsheet, using the same models as the BASIC version of DECOM. We discuss the specific modeling approaches taken, the regulatory framework that guided our efforts, the strengths and limitations of each approach, and areas for improvement. We also demonstrate how the LOTUS version of DECOM can be applied to specific problems that may be encountered during ORNL RAP activities. 18 refs., 2 figs., 3 tabs.
Heliport noise model (HNM) version 1 user's guide
DOT National Transportation Integrated Search
1988-02-01
This document contains the instructions to execute the Heliport Noise Model (HNM), Version 1. HNM Version 1 is a computer tool for determining the total impact of helicopter noise at and around heliports. The model runs on IBM PC/XT/AT personal compu...
NASA Technical Reports Server (NTRS)
Wright, Jeffrey; Thakur, Siddharth
2006-01-01
Loci-STREAM is an evolving computational fluid dynamics (CFD) software tool for simulating possibly chemically reacting, possibly unsteady flows in diverse settings, including rocket engines, turbomachines, oil refineries, etc. Loci-STREAM implements a pressure- based flow-solving algorithm that utilizes unstructured grids. (The benefit of low memory usage by pressure-based algorithms is well recognized by experts in the field.) The algorithm is robust for flows at all speeds from zero to hypersonic. The flexibility of arbitrary polyhedral grids enables accurate, efficient simulation of flows in complex geometries, including those of plume-impingement problems. The present version - Loci-STREAM version 0.9 - includes an interface with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library for access to enhanced linear-equation-solving programs therein that accelerate convergence toward a solution. The name "Loci" reflects the creation of this software within the Loci computational framework, which was developed at Mississippi State University for the primary purpose of simplifying the writing of complex multidisciplinary application programs to run in distributed-memory computing environments including clusters of personal computers. Loci has been designed to relieve application programmers of the details of programming for distributed-memory computers.
NASA Astrophysics Data System (ADS)
Richa, Tambi; Ide, Soichiro; Suzuki, Ryosuke; Ebina, Teppei; Kuroda, Yutaka
2017-02-01
Efficient and rapid prediction of domain regions from amino acid sequence information alone is often required for swift structural and functional characterization of large multi-domain proteins. Here we introduce Fast H-DROP, a thirty-times-accelerated version of our previously reported H-DROP (Helical Domain linker pRediction using OPtimal features), which is unique in specifically predicting helical domain linkers (boundaries). Fast H-DROP, analogously to H-DROP, uses optimum features selected from a set of 3000 by combining a random forest and a stepwise feature selection protocol. We reduced the computational time from 8.5 min per sequence in H-DROP to 14 s per sequence in Fast H-DROP on an 8-Xeon-processor Linux server by using SWISS-PROT instead of the GenBank non-redundant (nr) database for generating the PSSMs. The sensitivity and precision of Fast H-DROP, assessed by cross-validation, were 33.7 and 36.2%, merely 2% lower than those of H-DROP. The reduced computational time of Fast H-DROP, without affecting prediction performances, makes it more interactive and user-friendly. Fast H-DROP and H-DROP are freely available from http://domserv.lab.tuat.ac.jp/.
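A toy sketch of the feature-selection recipe named above (a random forest combined with stepwise selection) using scikit-learn follows; the synthetic data, forest size, and subset size are illustrative assumptions, and the real method selected from roughly 3000 candidate features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector

# Toy stand-in data: rows are sequence positions, columns are candidate
# features (e.g. PSSM-derived values); labels mark linker vs non-linker.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))
y = (X[:, 3] + 0.5 * X[:, 17] + rng.normal(scale=0.5, size=500) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=50, random_state=0)
# Greedy forward (stepwise) selection of a small "optimal" feature subset,
# scored by cross-validated random-forest performance.
selector = SequentialFeatureSelector(rf, n_features_to_select=5, cv=3)
selector.fit(X, y)
print("selected feature indices:", np.flatnonzero(selector.get_support()))
```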
A Spreadsheet for the Mixing of a Row of Jets with a Confined Crossflow
NASA Technical Reports Server (NTRS)
Holderman, J. D.; Smith, T. D.; Clisset, J. R.; Lear, W. E.
2005-01-01
An interactive computer code, written with a readily available software program, Microsoft Excel (Microsoft Corporation, Redmond, WA), is presented which displays 3-D oblique plots of a conserved scalar distribution downstream of jets mixing with a confined crossflow, for a single row, double rows, or opposed rows of jets with or without flow area convergence and/or a non-uniform crossflow scalar distribution. This project used a previously developed empirical model of jets mixing in a confined crossflow to create a Microsoft Excel spreadsheet that can output the profiles of a conserved scalar for jets injected into a confined crossflow given several input variables. The program uses multiple spreadsheets in a single Microsoft Excel notebook to carry out the modeling. The first sheet contains the main program, controls for the type of problem to be solved, and convergence criteria. The first sheet also provides for input of the specific geometry and flow conditions. The second sheet presents the results calculated with this routine to show the effects on the mixing of varying flow and geometric parameters. Comparisons are also made between results from the version of the empirical correlations implemented in the spreadsheet and the versions originally written in Applesoft BASIC (Apple Computer, Cupertino, CA) in the 1980s.
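The empirical model's output is a conserved-scalar field; as a hedged illustration of the kind of profile such correlations produce, the sketch below evaluates a self-similar Gaussian vertical profile, a common form in jet-in-crossflow correlations. The functional form and all parameter values here are assumptions for illustration, not the spreadsheet's actual correlations.

```python
import math

def scalar_profile(y_over_H, y_center=0.4, half_width=0.15, theta_center=0.8):
    """Self-similar Gaussian vertical profile of the conserved scalar.

    theta(y)/theta_c = exp(-ln2 * ((y - y_c)/w_half)^2), so the profile
    falls to half its centerline value one half-width from the center.
    All parameter values are illustrative placeholders.
    """
    return theta_center * math.exp(
        -math.log(2.0) * ((y_over_H - y_center) / half_width) ** 2)

for i in range(11):
    y = i / 10.0                  # nondimensional distance across the duct
    bar = "#" * int(40 * scalar_profile(y))
    print(f"y/H={y:.1f} theta={scalar_profile(y):.3f} {bar}")
```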
Computer-based learning of spelling skills in children with and without dyslexia.
Kast, Monika; Baschera, Gian-Marco; Gross, Markus; Jäncke, Lutz; Meyer, Martin
2011-12-01
Our spelling training software recodes words into multisensory representations comprising visual and auditory codes. These codes represent information about the letters and syllables of a word. An enhanced version, developed for this study, contains an additional phonological code and an improved word selection controller relying on a phoneme-based student model. We investigated the spelling behavior of children by means of learning curves based on log-file data of the previous and the enhanced software versions. First, we compared the learning progress of children with dyslexia working either with the previous software (n = 28) or the adapted version (n = 37). Second, we investigated the spelling behavior of children with dyslexia (n = 37) and matched children without dyslexia (n = 25). To gain deeper insight into which factors are relevant for acquiring spelling skills, we analyzed the influence of cognitive abilities, such as attention functions and verbal memory skills, on the learning behavior. All investigations of the learning process are based on learning curve analyses of the collected log-file data. The results showed that children with dyslexia benefit significantly from the additional phonological cue and the corresponding phoneme-based student model. Indeed, children with dyslexia improved their spelling skills to the same extent as children without dyslexia and were able to memorize phoneme-to-grapheme correspondences when given the correct support and adequate training. In addition, children with low attention functions benefit from the structured learning environment. Generally, our data showed that memory sources are supportive cognitive functions for acquiring spelling skills and for using the information cues of a multi-modal learning environment.
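Learning-curve analysis of log-file data of the kind described above is often done by fitting a decaying-exponential error curve; a small sketch with scipy follows, where the error-rate data and the three-parameter model are hypothetical stand-ins for the study's actual analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(trial, e0, rate, floor):
    """Exponential decay of error rate over training trials."""
    return floor + (e0 - floor) * np.exp(-rate * trial)

# Hypothetical per-session error rates extracted from training log files.
trials = np.arange(1, 13)
errors = np.array([0.52, 0.44, 0.40, 0.33, 0.30, 0.26,
                   0.24, 0.21, 0.20, 0.18, 0.17, 0.17])

params, _ = curve_fit(learning_curve, trials, errors, p0=(0.5, 0.2, 0.1))
e0, rate, floor = params
print(f"initial error {e0:.2f}, learning rate {rate:.2f}, asymptote {floor:.2f}")
```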
IGT-Open: An open-source, computerized version of the Iowa Gambling Task.
Dancy, Christopher L; Ritter, Frank E
2017-06-01
The Iowa Gambling Task (IGT) is commonly used to understand the processes involved in decision-making. Though the task was originally run without a computer, using a computerized version of the task has become typical. These computerized versions of the IGT are useful, because they can make the task more standardized across studies and allow for the task to be used in environments where a physical version of the task may be difficult or impossible to use (e.g., while collecting brain imaging data). Though these computerized versions of the IGT have been useful for experimentation, having multiple software implementations of the task could present reliability issues. We present an open-source software version of the Iowa Gambling Task (called IGT-Open) that allows for millisecond visual presentation accuracy and is freely available to be used and modified. This software has been used to collect data from human subjects and also has been used to run model-based simulations with computational process models developed to run in the ACT-R architecture.
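For readers unfamiliar with the task structure that IGT-Open computerizes, here is a minimal simulation sketch of an IGT trial loop; the payoff schedule below is a simplified rendering of the classic deck structure (two disadvantageous, two advantageous decks) and is not IGT-Open's actual configuration.

```python
import random

# Classic payoff structure (after Bechara et al.): decks A and B pay large
# rewards but lose on balance (-25/trial expected); C and D pay small rewards
# but win on balance (+25/trial expected). Real loss schedules vary by
# implementation; these probabilistic ones are simplified stand-ins.
DECKS = {
    "A": {"reward": 100, "loss": 250, "p_loss": 0.5},
    "B": {"reward": 100, "loss": 1250, "p_loss": 0.1},
    "C": {"reward": 50, "loss": 50, "p_loss": 0.5},
    "D": {"reward": 50, "loss": 250, "p_loss": 0.1},
}

def draw(deck_name, rng=random):
    """One card draw: the reward is always paid; a loss applies probabilistically."""
    deck = DECKS[deck_name]
    loss = deck["loss"] if rng.random() < deck["p_loss"] else 0
    return deck["reward"] - loss

total = 2000                           # standard starting loan
for trial in range(100):               # 100 trials, as in the classic task
    choice = random.choice("ABCD")     # a real subject or model chooses here
    total += draw(choice)
print("final amount:", total)
```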
BUCKY instruction manual, version 3.3
NASA Technical Reports Server (NTRS)
Smith, James P.
1994-01-01
The computer program BUCKY is a p-version finite element package for the solution of structural problems. The current version of BUCKY solves 2-D plane stress, 3-D plane stress plasticity, 3-D axisymmetric, Mindlin and Kirchhoff plate bending, and buckling problems. The p-version of the finite element method is a highly accurate variant of the traditional finite element method. Example cases are presented to show the accuracy and application of BUCKY.
Berent, Jarosław
2010-01-01
This paper presents the new DNAStat version 2.1 for processing genetic profile databases and biostatistical calculations. The popularization of DNA studies employed in the judicial system has led to the necessity of developing appropriate computer programs. Such programs must, above all, address two critical problems: broadly understood data processing and storage, and biostatistical calculations. Moreover, in cases of terrorist attacks and mass natural disasters, the ability to identify victims by searching for related individuals is very important. DNAStat version 2.1 is an adequate program for such purposes. DNAStat version 1.0 was launched in 2005. In 2006, the program was updated to versions 1.1 and 1.2. There were, however, only slight differences between those versions and the original one. DNAStat version 2.0 was launched in 2007, and the major program improvement was the introduction of group calculation options with potential application to the personal identification of victims of mass disasters and terrorism. The latest version, 2.1, adds an option of language selection (Polish or English), which will enhance the usage and application of the program in other countries.
Long wavelength propagation capacity, version 1.1 (computer diskette)
NASA Astrophysics Data System (ADS)
1994-05-01
File characteristics: software and data file (72 files); ASCII character set. Physical description: 2 computer diskettes; 3 1/2 in.; high density; 1.44 MB. System requirements: PC compatible; Digital Equipment Corp. VMS; PKZIP (included on diskette). This report describes a revision of the Naval Command, Control and Ocean Surveillance Center RDT&E Division's Long Wavelength Propagation Capability (LWPC). The first version of this capability was a collection of separate FORTRAN programs linked together in operation by a command procedure written in an operating system unique to the Digital Equipment Corporation (Ferguson & Snyder, 1989a, b). A FORTRAN computer program named Long Wavelength Propagation Model (LWPM) was developed to replace the VMS control system (Ferguson & Snyder, 1990; Ferguson, 1990); this was designated version 1 (LWPC-1). This program implemented all the features of the original VMS system and added a number of auxiliary programs that provided summaries of the files and graphical displays of the output files. This report describes a revision of the LWPC, designated version 1.1 (LWPC-1.1).
Continuum Absorption Coefficient of Atoms and Ions
NASA Technical Reports Server (NTRS)
Armaly, B. F.
1979-01-01
The rate of heat transfer to the heat shield of a Jupiter probe has been estimated to be one order of magnitude higher than any previously experienced in an outer space exploration program. More than one-third of this heat load is due to an emission of continuum radiation from atoms and ions. The existing computer code for calculating the continuum contribution to the total load utilizes a modified version of Biberman's approximate method. The continuum radiation absorption cross sections of a C - H - O - N ablation system were examined in detail. The present computer code was evaluated and updated by being compared with available exact and approximate calculations and correlations of experimental data. A detailed calculation procedure, which can be applied to other atomic species, is presented. The approximate correlations can be made to agree with the available exact and experimental data.
An efficient parallel algorithm for the calculation of canonical MP2 energies.
Baker, Jon; Pulay, Peter
2002-09-01
We present the parallel version of a previous serial algorithm for the efficient calculation of canonical MP2 energies (Pulay, P.; Saebo, S.; Wolinski, K. Chem Phys Lett 2001, 344, 543). It is based on the Saebo-Almlöf direct-integral transformation, coupled with an efficient prescreening of the AO integrals. The parallel algorithm avoids synchronization delays by spawning a second set of slaves during the bin-sort prior to the second half-transformation. Results are presented for systems with up to 2000 basis functions. MP2 energies for molecules with 400-500 basis functions can be routinely calculated to microhartree accuracy on a small number of processors (6-8) in a matter of minutes with modern PC-based parallel computers. Copyright 2002 Wiley Periodicals, Inc. J Comput Chem 23: 1150-1156, 2002
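The AO-integral prescreening mentioned above is, generically, a Schwarz-type bound: |(ij|kl)| <= sqrt((ij|ij)) * sqrt((kl|kl)), so cheap diagonal integrals rule out negligible quartets before the transformation. A minimal sketch of that screening idea (not the authors' Fortran implementation; `toy_eri` is a stand-in integral introduced only for the demo):

```python
import numpy as np

def schwarz_bounds(eri, n):
    """Q[i,j] = sqrt((ij|ij)); then |(ij|kl)| <= Q[i,j] * Q[k,l]."""
    return np.array([[abs(eri(i, j, i, j)) ** 0.5 for j in range(n)]
                     for i in range(n)])

def screened_pairs(Q, thresh=1e-10):
    """Keep only index pairs whose largest possible integral exceeds thresh."""
    qmax = Q.max()
    return [(i, j) for i in range(Q.shape[0]) for j in range(i + 1)
            if Q[i, j] * qmax >= thresh]

def toy_eri(i, j, k, l):
    # stand-in "integral" with Gaussian-like decay, used only to exercise
    # the screening logic; NOT a real electron-repulsion integral
    return np.exp(-((i - j) ** 2 + (k - l) ** 2))

Q = schwarz_bounds(toy_eri, 20)
print(len(screened_pairs(Q, thresh=1e-6)), "of", 20 * 21 // 2, "pairs kept")
```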
Differential equations as a tool for community identification.
Krawczyk, Małgorzata J
2008-06-01
We consider the task of identification of a cluster structure in random networks. The results of two methods are presented: (i) the Newman algorithm [M. E. J. Newman and M. Girvan, Phys. Rev. E 69, 026113 (2004)]; and (ii) our method based on differential equations. A series of computer experiments is performed to check whether, in applying these methods, we are able to determine the structure of the network. The trial networks consist initially of well-defined clusters and are disturbed by introducing noise into their connectivity matrices. Further, we show that an improvement of the previous version of our method is possible by an appropriate choice of the threshold parameter β. With this change, the results obtained by the two methods above are similar, and our method works better for all the computer experiments we have done.
NASA Technical Reports Server (NTRS)
Dash, S. M.; Sinha, N.; Wolf, D. E.; York, B. J.
1986-01-01
An overview of computational models developed for the complete, design-oriented analysis of a scramjet propulsion system is provided. The modular approach taken involves the use of different PNS models to analyze the individual propulsion system components. The external compression and internal inlet flowfields are analyzed by the SCRAMP and SCRINT components discussed in Part II of this paper. The combustor is analyzed by the SCORCH code, which is based upon the SPLITP PNS pressure-split methodology formulated by Dash and Sinha. The nozzle is analyzed by the SCHNOZ code, which is based upon the SCIPVIS PNS shock-capturing methodology formulated by Dash and Wolf. The current status of these models, previous developments leading to this status, and progress towards future hybrid and 3D versions are discussed in this paper.
MARPLOT 5.x versions include significant changes from both the previous 4.x versions and the 3.x versions. To ensure that your data is successfully transferred from your old MARPLOT, follow these instructions carefully.
Versions of the Waste Reduction Model (WARM)
2017-02-14
This page provides a brief chronology of changes made to EPA’s Waste Reduction Model (WARM), organized by WARM version number. The page includes brief summaries of changes and updates since the previous version.
Educational interactive multimedia software: The impact of interactivity on learning
NASA Astrophysics Data System (ADS)
Reamon, Derek Trent
This dissertation discusses the design, development, deployment and testing of two versions of educational interactive multimedia software. Both versions of the software are focused on teaching mechanical engineering undergraduates about the fundamentals of direct-current (DC) motor physics and selection. The two versions of Motor Workshop software cover the same basic materials on motors, but differ in the level of interactivity between the students and the software. Here, the level of interactivity refers to the particular role of the computer in the interaction between the user and the software. In one version, the students navigate through information that is organized by topic, reading text and viewing embedded video clips; this is referred to as "low-level interactivity" software because the computer simply presents the content. In the other version, the students are given a task to accomplish---they must design a small motor-driven 'virtual' vehicle that competes against computer-generated opponents. The interaction is guided by the software which offers advice from 'experts' and provides contextual information; we refer to this as "high-level interactivity" software because the computer is actively participating in the interaction. The software was used in two sets of experiments, where students using the low-level interactivity software served as the 'control group,' and students using the highly interactive software were the 'treatment group.' Data, including pre- and post-performance tests, questionnaire responses, learning style characterizations, activity tracking logs and videotapes were collected for analysis. Statistical and observational research methods were applied to the various data to test the hypothesis that the level of interactivity affects the learning situation, with higher levels of interactivity being more effective for learning. The results show that both the low-level and high-level interactive versions of the software were effective in promoting learning about the subject of motors. The focus of learning varied between users of the two versions, however. The low-level version was more effective for teaching concepts and terminology, while the high-level version seemed to be more effective for teaching engineering applications.
Mohammadi, Amrollah; Ahmadian, Alireza; Rabbani, Shahram; Fattahi, Ehsan; Shirani, Shapour
2017-12-01
Finite element models for estimation of intraoperative brain shift suffer from huge computational cost. In these models, image registration and finite element analysis are two time-consuming processes. The proposed method is an improved version of our previously developed Finite Element Drift (FED) registration algorithm. In this work the registration process is combined with the finite element analysis. In the Combined FED (CFED), the deformation of whole brain mesh is iteratively calculated by geometrical extension of a local load vector which is computed by FED. While the processing time of the FED-based method including registration and finite element analysis was about 70 s, the computation time of the CFED was about 3.2 s. The computational cost of CFED is almost 50% less than similar state of the art brain shift estimators based on finite element models. The proposed combination of registration and structural analysis can make the calculation of brain deformation much faster. Copyright © 2016 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, T.F.; Gerhard, M.A.; Trummer, D.J.
CASKS (Computer Analysis of Storage casKS) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for evaluating safety analysis reports on spent-fuel storage casks. The bulk of the complete program and this user's manual are based upon the SCANS (Shipping Cask ANalysis System) program previously developed at LLNL. A number of enhancements and improvements were added to the original SCANS program to meet requirements unique to storage casks. CASKS is an easy-to-use system that calculates global response of storage casks to impact loads, pressure loads and thermal conditions. This provides reviewers with a tool for an independent check on analyses submitted by licensees. CASKS is based on microcomputers compatible with the IBM-PC family of computers. The system is composed of a series of menus, input programs, cask analysis programs, and output display programs. All data is entered through fill-in-the-blank input screens that contain descriptive data requests.
Augmentation of Teaching Tools: Outsourcing the HSD Computing for SPSS Application
ERIC Educational Resources Information Center
Wang, Jianjun
2010-01-01
The widely used Tukey's HSD index is not produced in the current version of SPSS (i.e., PASW Statistics, version 18), and a computer program named "HSD Calculator" has been chosen to remedy this problem. In comparison to hand calculation, this program application does not require table checking, which eliminates potential concern on the size of a…
An Alternative to QUERY: Batch-Searching of the ERIC Information Collections.
ERIC Educational Resources Information Center
Krahmer, Edward; Horne, Kent
A manual describing the RIC computer search program for retrieval of information from ERIC, CIJE, and other collections is presented. It is pointed out that two versions of this program have been developed. The first is for an IBM 360/370 computer. This version has been operational on a production basis for nearly a year. Four installations of…
ERIC Educational Resources Information Center
Mandinach, Ellen B.
This study investigated the degree to which 48 seventh and eighth grade students of different abilities acquired strategic planning knowledge from an intellectual computer game ("Wumpus"). Relationships between ability and student performance with two versions of the game were also investigated. The two versions differed in the structure…
How Much Higher Can HTCondor Fly?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fajardo, E. M.; Dost, J. M.; Holzman, B.
The HTCondor high throughput computing system is heavily used in the high energy physics (HEP) community as the batch system for several Worldwide LHC Computing Grid (WLCG) resources. Moreover, it is the backbone of GlideinWMS, the pilot system used by the computing organization of the Compact Muon Solenoid (CMS) experiment. To prepare for LHC Run 2, we probed the scalability limits of new versions and configurations of HTCondor with a goal of reaching 200,000 simultaneous running jobs in a single internationally distributed dynamic pool. In this paper, we first describe how we created an opportunistic distributed testbed capable of exercising runs with 200,000 simultaneous jobs without impacting production. This testbed methodology is appropriate not only for scale testing HTCondor, but potentially for many other services. In addition to the test conditions and the testbed topology, we include the suggested configuration options used to obtain the scaling results, and describe some of the changes to HTCondor inspired by our testing that enabled sustained operations at scales well beyond previous limits.
Zhan, X.
2005-01-01
A parallel Fortran-MPI (Message Passing Interface) software package for numerical inversion of the Laplace transform based on a Fourier series method was developed to meet the need of solving computationally intensive problems involving oscillatory water-level responses to hydraulic tests in a groundwater environment. The software is a parallel version of ACM (Association for Computing Machinery) Transactions on Mathematical Software (TOMS) Algorithm 796. Running 38 test examples indicated that implementation of MPI techniques with a distributed memory architecture speeds up the processing and improves the efficiency. Applications to oscillatory water levels in a well during aquifer tests are presented to illustrate how this package can be applied to solve complicated environmental problems involving differential and integral equations. The package is free and is easy to use for people with little or no previous experience in using MPI but who wish to get off to a quick start in parallel computing. © 2004 Elsevier Ltd. All rights reserved.
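As background, a Fourier-series (Crump-type) inversion approximates f(t) from samples of F(s) along a vertical line in the complex plane, and the series terms are mutually independent, which is what makes the method embarrassingly parallel. A minimal mpi4py sketch under those assumptions (illustrative only, not TOMS Algorithm 796; accuracy depends on gamma, the period parameter, and the number of terms):

```python
# Minimal sketch of Fourier-series Laplace inversion distributed with mpi4py:
# each rank sums a strided slice of the series terms (runs with 1+ processes).
# Not TOMS Algorithm 796; gamma must exceed the rightmost singularity of F.
import numpy as np
from mpi4py import MPI

def invert_laplace(F, t, gamma=2.0, nterms=2000):
    comm = MPI.COMM_WORLD
    T = 2.0 * t                         # series is valid for 0 < t < 2T
    k = np.arange(1 + comm.rank, nterms + 1, comm.size)  # this rank's terms
    s = gamma + 1j * k * np.pi / T
    Fk = np.array([F(sk) for sk in s])
    local = np.sum(Fk.real * np.cos(k * np.pi * t / T)
                   - Fk.imag * np.sin(k * np.pi * t / T))
    total = comm.allreduce(local, op=MPI.SUM)
    return np.exp(gamma * t) / T * (0.5 * np.real(F(gamma + 0j)) + total)

if __name__ == "__main__":
    # known pair: F(s) = 1/(s + 1)  <->  f(t) = exp(-t)
    approx = invert_laplace(lambda s: 1.0 / (s + 1.0), t=1.0)
    if MPI.COMM_WORLD.rank == 0:
        print(approx, "vs exact", np.exp(-1.0))
```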
Improved Apparatus for Measuring Distance Between Axles
NASA Technical Reports Server (NTRS)
Willard, Douglas E.; Townsend, Ivan I., III
2003-01-01
An improved version of an optoelectronic apparatus for measuring distances of the order of tens of feet with an error no larger than a small fraction of an inch (a few millimeters) has been built. Like the previous version, the present improved version of the apparatus is designed to measure the distance of approximately 66 ft (approximately 20 m) between the axes of rotation of the front and rear tires of the space shuttle orbiter as it rests in a ground-based processing facility. Like the previous version, the present version could also be adapted for similar purposes in other settings: examples include measuring perpendicular distance from a wall in a building, placement of architectural foundations, and general alignment and measurement operations.
COMPUTATIONAL ASPECTS OF THE AIR QUALITY FORECASTING VERSION OF CMAQ (CMAQ-F)
The air quality forecast version of the Community Multiscale Air Quality (CMAQ) model (CMAQ-F) was developed from the public release version of CMAQ (available from http://www.cmascenter.org), and is running operationally at the National Weather Service's National Centers for Envir...
Suzukamo, Yoshimi; Oshika, Tetsuro; Yuzawa, Mitsuko; Tokuda, Yoshihiro; Tomidokoro, Atsuo; Oki, Kotaro; Mangione, Carol M; Green, Joseph; Fukuhara, Shunichi
2005-10-26
The importance of evaluating the outcomes of health care from the standpoint of the patient is now widely recognized. The purpose of this study is to develop and test a Japanese version of the National Eye Institute Visual Function Questionnaire (NEI VFQ-25). A Japanese version was developed with a previously standardized method. The questionnaire and optional items were completed by 245 patients with cataracts, glaucoma, or age-related macular degeneration, by 110 others before and after cataract surgery, and by a reference group (n = 31). We computed rates of missing data, measured reproducibility and internal consistency reliability, and tested for convergent and discriminant validity, concurrent validity, known-groups validity, factor structure, and responsiveness to change. Based on information from the participants, some items were changed to 2-step items (asking if an activity was done, and if it was done, then asking how difficult it was). The near-vision and distance-vision subscales each had 1 item that was endorsed by very few participants, so these items were replaced with items that were optional in the English version. For example, more than 60% of participants did not drive, so the driving question was excluded. Reliability and validity were adequate for all subscales except driving, ocular pain, color vision, and peripheral vision. With cataract surgery, most scores improved by at least 20 points. With minor modifications from the English version, the Japanese NEI VFQ-25 can give reliable, valid, responsive data on vision-related quality of life, for group-level comparisons or for tracking therapeutic outcomes.
Neural Computation and the Computational Theory of Cognition
ERIC Educational Resources Information Center
Piccinini, Gualtiero; Bahar, Sonya
2013-01-01
We begin by distinguishing computationalism from a number of other theses that are sometimes conflated with it. We also distinguish between several important kinds of computation: computation in a generic sense, digital computation, and analog computation. Then, we defend a weak version of computationalism--neural processes are computations in the…
DOVIS 2.0: an efficient and easy to use parallel virtual screening tool based on AutoDock 4.0.
Jiang, Xiaohui; Kumar, Kamal; Hu, Xin; Wallqvist, Anders; Reifman, Jaques
2008-09-08
Small-molecule docking is an important tool in studying receptor-ligand interactions and in identifying potential drug candidates. Previously, we developed a software tool (DOVIS) to perform large-scale virtual screening of small molecules in parallel on Linux clusters, using AutoDock 3.05 as the docking engine. DOVIS enables the seamless screening of millions of compounds on high-performance computing platforms. In this paper, we report significant advances in the software implementation of DOVIS 2.0, including enhanced screening capability, improved file system efficiency, and extended usability. To keep DOVIS up-to-date, we upgraded the software's docking engine to the more accurate AutoDock 4.0 code. We developed a new parallelization scheme to improve runtime efficiency and modified the AutoDock code to reduce excessive file operations during large-scale virtual screening jobs. We also implemented an algorithm to output docked ligands in an industry standard format, sd-file format, which can be easily interfaced with other modeling programs. Finally, we constructed a wrapper-script interface to enable automatic rescoring of docked ligands by arbitrarily selected third-party scoring programs. The significance of the new DOVIS 2.0 software compared with the previous version lies in its improved performance and usability. The new version makes the computation highly efficient by automating load balancing, significantly reducing excessive file operations by more than 95%, providing outputs that conform to industry standard sd-file format, and providing a general wrapper-script interface for rescoring of docked ligands. The new DOVIS 2.0 package is freely available to the public under the GNU General Public License.
Accelerating Smith-Waterman Algorithm for Biological Database Search on CUDA-Compatible GPUs
NASA Astrophysics Data System (ADS)
Munekawa, Yuma; Ino, Fumihiko; Hagihara, Kenichi
This paper presents a fast method capable of accelerating the Smith-Waterman algorithm for biological database search on a cluster of graphics processing units (GPUs). Our method is implemented using compute unified device architecture (CUDA), which is available on the nVIDIA GPU. As compared with previous methods, our method has four major contributions. (1) The method efficiently uses on-chip shared memory to reduce the data amount being transferred between off-chip video memory and processing elements in the GPU. (2) It also reduces the number of data fetches by applying a data reuse technique to query and database sequences. (3) A pipelined method is also implemented to overlap GPU execution with database access. (4) Finally, a master/worker paradigm is employed to accelerate hundreds of database searches on a cluster system. In experiments, the peak performance on a GeForce GTX 280 card reaches 8.32 giga cell updates per second (GCUPS). We also find that our method reduces the amount of data fetches to 1/140, achieving approximately three times higher performance than a previous CUDA-based method. Our 32-node cluster version is approximately 28 times faster than a single GPU version. Furthermore, the effective performance reaches 75.6 giga instructions per second (GIPS) using 32 GeForce 8800 GTX cards.
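For reference, the cell update that such GPU kernels accelerate is the Smith-Waterman recurrence H(i,j) = max(0, H(i-1,j-1)+s(a_i,b_j), H(i-1,j)-g, H(i,j-1)-g). A plain serial Python rendering with a linear gap penalty (the paper's shared-memory tiling, data reuse, and pipelining are not reproduced):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=1):
    """Best local alignment score with a linear gap penalty."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,                    # local alignment floor
                          H[i - 1][j - 1] + s,  # diagonal: (mis)match
                          H[i - 1][j] - gap,    # gap in sequence b
                          H[i][j - 1] - gap)    # gap in sequence a
            best = max(best, H[i][j])
    return best

print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))
```

Cells along one anti-diagonal depend only on earlier anti-diagonals, and that independence is the parallelism a CUDA implementation exploits.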
Plis, Sergey M; George, J S; Jun, S C; Paré-Blagoev, J; Ranken, D M; Wood, C C; Schmidt, D M
2007-01-01
We propose a new model to approximate spatiotemporal noise covariance for use in neural electromagnetic source analysis, which better captures temporal variability in background activity. As with other existing formalisms, our model employs a Kronecker product of matrices representing temporal and spatial covariance. In our model, spatial components are allowed to have differing temporal covariances. Variability is represented as a series of Kronecker products of spatial component covariances and corresponding temporal covariances. Unlike previous attempts to model covariance through a sum of Kronecker products, our model is designed to have a computationally manageable inverse. Despite increased descriptive power, inversion of the model is fast, making it useful in source analysis. We have explored two versions of the model. One is estimated based on the assumption that spatial components of background noise have uncorrelated time courses. Another version, which gives a closer approximation, is based on the assumption that the time courses are statistically independent. The accuracy of the structural approximation is compared to that of an existing model based on a single Kronecker product, using both the Frobenius norm of the difference between the spatiotemporal sample covariance and a model, and scatter plots. The performance of our model and of previous models is compared in source analysis of a large number of single-dipole problems with simulated time courses and with background from authentic magnetoencephalography data.
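One algebraic reading of why a sum of Kronecker products can remain cheap to invert: if the spatial components u_j are orthonormal and complete, C = sum_j u_j u_j^T ⊗ T_j inverts term by term, C^{-1} = sum_j u_j u_j^T ⊗ T_j^{-1}. A small numeric check of that identity (an illustration of the structure, not the authors' estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
ns, nt = 4, 3                              # spatial and temporal dimensions
U, _ = np.linalg.qr(rng.standard_normal((ns, ns)))   # orthonormal spatial set

def spd(n):
    """Random symmetric positive-definite matrix (a stand-in covariance)."""
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

Ts = [spd(nt) for _ in range(ns)]          # one temporal covariance per comp.
C = sum(np.kron(np.outer(U[:, j], U[:, j]), Ts[j]) for j in range(ns))
Cinv = sum(np.kron(np.outer(U[:, j], U[:, j]), np.linalg.inv(Ts[j]))
           for j in range(ns))
print(np.allclose(C @ Cinv, np.eye(ns * nt)))        # True: inverts term-wise
```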
GEANT4 Tuning For pCT Development
NASA Astrophysics Data System (ADS)
Yevseyeva, Olga; de Assis, Joaquim T.; Evseev, Ivan; Schelin, Hugo R.; Paschuk, Sergei A.; Milhoretto, Edney; Setti, João A. P.; Díaz, Katherin S.; Hormaza, Joel M.; Lopes, Ricardo T.
2011-08-01
Proton beams in medical applications deal with relatively thick targets like the human head or trunk. Thus, the fidelity of proton computed tomography (pCT) simulations as a tool for proton therapy planning depends in the general case on the accuracy of results obtained for the proton interaction with thick absorbers. GEANT4 simulations of proton energy spectra after passing through thick absorbers do not agree well with existing experimental data, as shown previously. Moreover, the spectra simulated for the Bethe-Bloch domain showed an unexpected sensitivity to the choice of low-energy electromagnetic models during the code execution. These observations were made with GEANT4 version 8.2 during our simulations for pCT. This work describes in more detail the simulations of the proton passage through aluminum absorbers with varied thickness. The simulations were done by modifying only the geometry in the Hadrontherapy Example, and for all available choices of the electromagnetic physics models. As the most probable reason for these effects is some specific feature of the code, or some specific implicit parameters in the GEANT4 manual, we continued our study with version 9.2 of the code. Some improvements in comparison with our previous results were obtained. The simulations were performed considering further applications for pCT development.
1989-06-27
Department of Defense, Washington DC 20301-3081. Ada Compiler Validation Summary Report: Compiler Name: Harris Ada, Version 5.0. Certificate Number: 890627W1.10103. Host: Harris H1000 under VOS, E.1. Target: Harris H1000 under VOS, E.1. Testing completed June 27, 1989 using ACVC 1.10. This report has been... Harris Corporation, Computer Systems Division, Harris Ada, Version 5.0, Harris H1000 under VOS, E.1 (Host & Target), Wright-Patterson AFB, ACVC 1.10 DD
USDA-ARS?s Scientific Manuscript database
Animal facilities are significant contributors of gaseous emissions including ammonia (NH3) and nitrous oxide (N2O). Previous versions of the Integrated Farm System Model (IFSM version 4.0) and Dairy Gas Emissions Model (DairyGEM version 3.0), two whole-farm simulation models developed by USDA-ARS, ...
NASA Astrophysics Data System (ADS)
Zhang, Ling; Nan, Zhuotong; Liang, Xu; Xu, Yi; Hernández, Felipe; Li, Lianxia
2018-03-01
Although process-based distributed hydrological models (PDHMs) have been evolving rapidly over the last few decades, their extensive application is still challenged by computational expense. This study attempted, for the first time, to apply the numerically efficient MacCormack algorithm to overland flow routing in a representative high-spatial-resolution PDHM, i.e., the distributed hydrology-soil-vegetation model (DHSVM), in order to improve its computational efficiency. The analytical verification indicates that both the semi and full versions of the MacCormack scheme exhibit robust numerical stability and are more computationally efficient than the conventional explicit linear scheme. The full version outperforms the semi version in terms of simulation accuracy when the same time step is adopted. The semi-MacCormack scheme was implemented into DHSVM (version 3.1.2) to solve the kinematic wave equations for overland flow routing. The performance and practicality of the enhanced DHSVM-MacCormack model were assessed by performing two groups of modeling experiments in the Mercer Creek watershed, a small urban catchment near Bellevue, Washington. The experiments show that DHSVM-MacCormack can considerably improve the computational efficiency without compromising the simulation accuracy of the original DHSVM model. More specifically, with the same computational environment and model settings, the computational time required by DHSVM-MacCormack can be reduced to several dozen minutes for a simulation period of three months (in contrast with one and a half days for the original DHSVM model) without noticeable sacrifice of accuracy. The MacCormack scheme proves to be applicable to overland flow routing in DHSVM, which implies that it can be coupled into other PDHMs for watershed routing to either significantly improve their computational efficiency or to make kinematic wave routing for high-resolution modeling computationally feasible.
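For concreteness, the MacCormack scheme is a two-step predictor-corrector: a forward-difference predictor followed by a backward-difference corrector, averaged. A minimal sketch for the 1-D kinematic wave equation dh/dt + dq/dx = r with q = alpha*h^m (illustrative only; the grid, boundary handling, and coupling inside DHSVM-MacCormack are not reproduced):

```python
import numpy as np

def maccormack_step(h, dt, dx, r, alpha=1.0, m=5.0 / 3.0):
    """One MacCormack step for dh/dt + d(alpha*h^m)/dx = r (1-D; crude
    fixed boundaries kept for brevity; dt must satisfy the CFL limit)."""
    q = alpha * h ** m
    hp = h.copy()                                   # predictor: forward diff
    hp[:-1] = h[:-1] - dt / dx * (q[1:] - q[:-1]) + dt * r
    qp = alpha * hp ** m
    hn = h.copy()                                   # corrector: backward diff
    hn[1:] = 0.5 * (h[1:] + hp[1:] - dt / dx * (qp[1:] - qp[:-1]) + dt * r)
    return np.maximum(hn, 0.0)                      # keep depths non-negative

h = np.full(100, 0.01)                              # initial depth field (m)
for _ in range(600):
    h = maccormack_step(h, dt=0.5, dx=10.0, r=1e-6) # steady rainfall (m/s)
print(round(h.max(), 6))
```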
CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (IBM PC VERSION)
NASA Technical Reports Server (NTRS)
Riley, G.
1994-01-01
The C Language Integrated Production System, CLIPS, is a shell for developing expert systems. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. The primary design goals for CLIPS are portability, efficiency, and functionality. For these reasons, the program is written in C. CLIPS meets or outperforms most micro- and minicomputer based artificial intelligence tools. CLIPS is a forward chaining rule-based language. The program contains an inference engine and a language syntax that provide a framework for the construction of an expert system. It also includes tools for debugging an application. CLIPS is based on the Rete algorithm, which enables very efficient pattern matching. The collection of conditions and actions to be taken if the conditions are met is constructed into a rule network. As facts are asserted either prior to or during a session, CLIPS pattern-matches the number of fields. Wildcards and variables are supported for both single and multiple fields. CLIPS syntax allows the inclusion of externally defined functions (outside functions which are written in a language other than CLIPS). CLIPS itself can be embedded in a program such that the expert system is available as a simple subroutine call. Advanced features found in CLIPS version 4.3 include an integrated microEMACS editor, the ability to generate C source code from a CLIPS rule base to produce a dedicated executable, binary load and save capabilities for CLIPS rule bases, and the utility program CRSV (Cross-Reference, Style, and Verification) designed to facilitate the development and maintenance of large rule bases. Five machine versions are available. Each machine version includes the source and the executable for that machine. The UNIX version includes the source and binaries for IBM RS/6000, Sun3 series, and Sun4 series computers. The UNIX, DEC VAX, and DEC RISC Workstation versions are line oriented. The PC version and the Macintosh version each contain a windowing variant of CLIPS as well as the standard line oriented version. The mouse/window interface version for the PC works with a Microsoft compatible mouse or without a mouse. This window version uses the proprietary CURSES library for the PC, but a working executable of the window version is provided. The window oriented version for the Macintosh includes a version which uses a full Macintosh-style interface, including an integrated editor. This version allows the user to observe the changing fact base and rule activations in separate windows while a CLIPS program is executing. The IBM PC version is available bundled with CLIPSITS, The CLIPS Intelligent Tutoring System for a special combined price (COS-10025). The goal of CLIPSITS is to provide the student with a tool to practice the syntax and concepts covered in the CLIPS User's Guide. It attempts to provide expert diagnosis and advice during problem solving which is typically not available without an instructor. CLIPSITS is divided into 10 lessons which mirror the first 10 chapters of the CLIPS User's Guide. The program was developed for the IBM PC series with a hard disk. CLIPSITS is also available separately as MSC-21679. The CLIPS program is written in C for interactive execution and has been implemented on an IBM PC computer operating under DOS, a Macintosh and DEC VAX series computers operating under VMS or ULTRIX. 
The line oriented version should run on any computer system which supports a full (Kernighan and Ritchie) C compiler or the ANSI standard C language. CLIPS was developed in 1986 and Version 4.2 was released in July of 1988. Version 4.3 was released in June of 1989.
CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (MACINTOSH VERSION)
NASA Technical Reports Server (NTRS)
Culbert, C.
1994-01-01
CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (IBM PC VERSION WITH CLIPSITS)
NASA Technical Reports Server (NTRS)
Riley, G.
1994-01-01
A multiscale strength model for tantalum over an extended range of strain rates
NASA Astrophysics Data System (ADS)
Barton, N. R.; Rhee, M.
2013-09-01
A strength model for tantalum is developed and exercised across a range of conditions relevant to various types of experimental observations. The model is based on previous multiscale modeling work combined with experimental observations. As such, the model's parameterization includes a hybrid of quantities that arise directly from predictive sub-scale physics models and quantities that are adjusted to align the model with experimental observations. Given current computing and experimental limitations, the response regions for sub-scale physics simulations and detailed experimental observations have been largely disjoint. In formulating the new model and presenting results here, attention is paid to integrated experimental observations that probe strength response at the elevated strain rates where a previous version of the model has generally been successful in predicting experimental data [Barton et al., J. Appl. Phys. 109(7), 073501 (2011)].
LEWICE 2.2 Capabilities and Thermal Validation
NASA Technical Reports Server (NTRS)
Wright, William B.
2002-01-01
Computational models of bleed air anti-icing and electrothermal de-icing have been added to the LEWICE 2.0 software by integrating the capabilities of two previous programs, ANTICE and LEWICE/Thermal. This combined model has been released as LEWICE version 2.2. Several advancements have also been added to the previous capabilities of each module. This report presents the capabilities of the software package and provides results for both bleed air and electrothermal cases. A comprehensive validation effort has also been performed to compare the predictions to an existing electrothermal database. A quantitative comparison shows that for de-icing cases the average difference is 9.4 F (26%), compared to 3 F for the experimental data, while for evaporative cases the average difference is 2 F (32%), compared to an experimental error of 4 F.
Statistical Analysis of CFD Solutions from the 6th AIAA CFD Drag Prediction Workshop
NASA Technical Reports Server (NTRS)
Derlaga, Joseph M.; Morrison, Joseph H.
2017-01-01
A graphical framework is used for statistical analysis of the results from an extensive N-version test of a collection of Reynolds-averaged Navier-Stokes computational fluid dynamics codes. The solutions were obtained by code developers and users from North America, Europe, Asia, and South America using both common and custom grid sequences as well as multiple turbulence models for the June 2016 6th AIAA CFD Drag Prediction Workshop sponsored by the AIAA Applied Aerodynamics Technical Committee. The aerodynamic configuration for this workshop was the Common Research Model subsonic transport wing-body previously used for both the 4th and 5th Drag Prediction Workshops. This work continues the statistical analysis begun in the earlier workshops and compares the results from the grid convergence study of the most recent workshop with previous workshops.
NASA Astrophysics Data System (ADS)
Marx, Alain; Lütjens, Hinrich
2017-03-01
A hybrid MPI/OpenMP parallel version of the XTOR-2F code [Lütjens and Luciani, J. Comput. Phys. 229 (2010) 8130], solving the two-fluid MHD equations in full tokamak geometry by means of an iterative Newton-Krylov matrix-free method, has been developed. The present work shows that the code has been parallelized significantly despite the numerical profile of the problem solved by XTOR-2F, i.e., a discretization with pseudo-spectral representations in all angular directions, the stiffness of the two-fluid stability problem in tokamaks, and the use of a direct LU decomposition to invert the physical preconditioner at every Krylov iteration of the solver. The execution time of the parallelized version is an order of magnitude smaller than that of the sequential one for low-resolution cases, with an increasing speedup as the discretization mesh is refined. Moreover, it makes it possible to perform simulations at higher resolutions that were previously forbidden because of memory limitations.
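For readers unfamiliar with the solver class, a matrix-free Newton-Krylov method never forms the Jacobian; the Krylov iterations need only residual evaluations. A toy illustration with SciPy's newton_krylov on a 1-D nonlinear boundary-value problem (this is emphatically not the XTOR-2F two-fluid solver or its preconditioner):

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    """Residual of a discretized 1-D nonlinear BVP: u'' = exp(u), u(0)=u(1)=0.
    Only this function is needed; the Jacobian is never assembled."""
    n = u.size
    h = 1.0 / (n + 1)
    upad = np.concatenate(([0.0], u, [0.0]))
    return (upad[2:] - 2.0 * upad[1:-1] + upad[:-2]) / h**2 - np.exp(u)

u0 = np.zeros(50)                      # initial guess
sol = newton_krylov(residual, u0, f_tol=1e-8)
print(sol.min())                       # solution is negative in the interior
```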
ARES v2: new features and improved performance
NASA Astrophysics Data System (ADS)
Sousa, S. G.; Santos, N. C.; Adibekyan, V.; Delgado-Mena, E.; Israelian, G.
2015-05-01
Aims: We present a new upgraded version of ARES. The new version includes a series of interesting new features, such as automatic radial velocity correction, a fully automatic continuum determination, and an estimation of the errors for the equivalent widths. Methods: The automatic correction of the radial velocity is achieved with a simple cross-correlation function, and the automatic continuum determination, as well as the estimation of the errors, relies on a new approach to evaluating the spectral noise at the continuum level. Results: ARES v2 is totally compatible with its predecessor. We show that the fully automatic continuum determination is consistent with the previous methods applied for this task. It also presents a significant performance improvement thanks to the implementation of parallel computation using the OpenMP library. Automatic Routine for line Equivalent widths in stellar Spectra - ARES webpage: http://www.astro.up.pt/~sousasag/ares/ Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 075.D-0800(A).
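The radial-velocity correction by cross-correlation has a simple generic core: on a uniform log-wavelength grid a Doppler shift becomes a translation, so the lag of the cross-correlation peak maps to a velocity via dv = c * d(ln lambda). A schematic numpy sketch under that assumption (not the ARES C++ implementation):

```python
import numpy as np

C_KMS = 299792.458                        # speed of light, km/s

def rv_from_ccf(loglam, spec, template, max_lag=50):
    """Cross-correlate spectrum and template on a uniform ln(lambda) grid;
    the best integer lag maps to velocity via dv = c * d(ln lambda)."""
    dll = loglam[1] - loglam[0]
    s = spec - spec.mean()
    t = template - template.mean()
    lags = np.arange(-max_lag, max_lag + 1)
    ccf = [np.dot(s, np.roll(t, k)) for k in lags]
    return C_KMS * lags[int(np.argmax(ccf))] * dll

# synthetic test: one absorption line, spectrum shifted by 10 pixels (redshift)
loglam = np.linspace(np.log(5000.0), np.log(5100.0), 2000)
template = 1.0 - 0.5 * np.exp(-0.5 * ((loglam - np.log(5050.0)) / 1e-4) ** 2)
spec = np.roll(template, 10)
print(rv_from_ccf(loglam, spec, template), "km/s")   # ~ c * 10 * grid step
```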
GenSSI 2.0: multi-experiment structural identifiability analysis of SBML models.
Ligon, Thomas S; Fröhlich, Fabian; Chis, Oana T; Banga, Julio R; Balsa-Canto, Eva; Hasenauer, Jan
2018-04-15
Mathematical modeling using ordinary differential equations is used in systems biology to improve the understanding of dynamic biological processes. The parameters of ordinary differential equation models are usually estimated from experimental data. To analyze a priori the uniqueness of the solution of the estimation problem, structural identifiability analysis methods have been developed. We introduce GenSSI 2.0, an advancement of the software toolbox GenSSI (Generating Series for testing Structural Identifiability). GenSSI 2.0 is the first toolbox for structural identifiability analysis to implement Systems Biology Markup Language import, state/parameter transformations and multi-experiment structural identifiability analysis. In addition, GenSSI 2.0 supports a range of MATLAB versions and is computationally more efficient than its previous version, enabling the analysis of more complex models. GenSSI 2.0 is an open-source MATLAB toolbox and available at https://github.com/genssi-developer/GenSSI. thomas.ligon@physik.uni-muenchen.de or jan.hasenauer@helmholtz-muenchen.de. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P.
2003-05-01
The two Numerical Recipes books are marvellous. The principal book, The Art of Scientific Computing, contains program listings for almost every conceivable requirement, and it also contains a well written discussion of the algorithms and the numerical methods involved. The Example Book provides a complete driving program, with helpful notes, for nearly all the routines in the principal book. The first edition of Numerical Recipes: The Art of Scientific Computing was published in 1986 in two versions, one with programs in Fortran, the other with programs in Pascal. There were subsequent versions with programs in BASIC and in C. The second, enlarged edition was published in 1992, again in two versions, one with programs in Fortran (NR(F)), the other with programs in C (NR(C)). In 1996 the authors produced Numerical Recipes in Fortran 90: The Art of Parallel Scientific Computing as a supplement, called Volume 2, with the original (Fortran) version referred to as Volume 1. Numerical Recipes in C++ (NR(C++)) is another version of the 1992 edition. The numerical recipes are also available on a CD ROM: if you want to use any of the recipes, I would strongly advise you to buy the CD ROM. The CD ROM contains the programs in all the languages. When the first edition was published I bought it, and have also bought copies of the other editions as they have appeared. Anyone involved in scientific computing ought to have a copy of at least one version of Numerical Recipes, and there also ought to be copies in every library. If you already have NR(F), should you buy NR(C++) and, if not, which version should you buy? In the preface to Volume 2 of NR(F), the authors say 'C and C++ programmers have not been far from our minds as we have written this volume, and we think that you will find that time spent in absorbing its principal lessons will be amply repaid in the future as C and C++ eventually develop standard parallel extensions'. In the preface and introduction to NR(C++), the authors point out some of the problems in the use of C++ in scientific computing. I have not found any mention of parallel computing in NR(C++). Fortran has quite a lot going for it. As someone who has used it in most of its versions from Fortran II, I have seen it develop and leave behind other languages promoted by various enthusiasts: who now uses Algol or Pascal? I think it unlikely that C++ will disappear: it was devised as a systems language, and can also be used for other purposes such as scientific computing. It is possible that Fortran will disappear, but Fortran has the strengths that it can develop, that there are extensive Fortran subroutine libraries, and that it has been developed for parallel computing. To argue with programmers as to which is the best language to use is sterile. If you wish to use C++, then buy NR(C++), but you should also look at Volume 2 of NR(F). If you are a Fortran programmer, then make sure you have NR(F), Volumes 1 and 2. But whichever language you use, make sure you have one version or the other, and the CD ROM. The Example Book provides listings of complete programs to run nearly all the routines in NR, frequently based on cases where an analytical solution is available. It is helpful when developing a new program incorporating an unfamiliar routine to see that routine actually working, and this is what the programs in the Example Book achieve. I started teaching computational physics before Numerical Recipes was published.
If I were starting again, I would make heavy use of both The Art of Scientific Computing and of the Example Book. Every computational physics teaching laboratory should have both volumes: the programs in the Example Book are included on the CD ROM, but the extra commentary in the book itself is of considerable value. P Borcherds
Implementation of Multispectral Image Classification on a Remote Adaptive Computer
NASA Technical Reports Server (NTRS)
Figueiredo, Marco A.; Gloster, Clay S.; Stephens, Mark; Graves, Corey A.; Nakkar, Mouna
1999-01-01
As the demand for higher performance computers for the processing of remote sensing science algorithms increases, the need to investigate new computing paradigms is justified. Field Programmable Gate Arrays enable the implementation of algorithms at the hardware gate level, leading to orders of magnitude performance increase over microprocessor-based systems. The automatic classification of spaceborne multispectral images is an example of a computation-intensive application that can benefit from implementation on an FPGA-based custom computing machine (adaptive or reconfigurable computer). A probabilistic neural network is used here to classify pixels of a multispectral LANDSAT-2 image. The implementation described utilizes Java client/server application programs to access the adaptive computer from a remote site. Results verify that a remote hardware version of the algorithm (implemented on an adaptive computer) is significantly faster than a local software version of the same algorithm implemented on a typical general-purpose computer.
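A probabilistic neural network of the kind used here is essentially a Parzen-window classifier: each training pixel contributes a Gaussian kernel, and a test pixel is assigned to the class with the largest average kernel response. A minimal sketch with toy feature vectors standing in for multispectral bands (the FPGA and Java client/server layers are not reproduced):

```python
import numpy as np

def pnn_classify(X_train, y_train, X_test, sigma=0.5):
    """Parzen-window PNN: class score = mean Gaussian kernel response."""
    classes = np.unique(y_train)
    scores = np.empty((X_test.shape[0], classes.size))
    for c_idx, c in enumerate(classes):
        Xc = X_train[y_train == c]
        # squared distances between every test pixel and class-c exemplars
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        scores[:, c_idx] = np.exp(-d2 / (2.0 * sigma**2)).mean(axis=1)
    return classes[scores.argmax(axis=1)]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 4)), rng.normal(1, 0.3, (20, 4))])
y = np.array([0] * 20 + [1] * 20)            # two toy "land cover" classes
print(pnn_classify(X, y, np.array([[0.1] * 4, [0.9] * 4])))   # -> [0 1]
```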
A new version of a computer program for dynamical calculations of RHEED intensity oscillations
NASA Astrophysics Data System (ADS)
Daniluk, Andrzej; Skrobas, Kazimierz
2006-01-01
We present a new version of the RHEED program which contains a graphical user interface enabling the use of the program in the graphical environment. The presented program also contains a graphical component which enables displaying program data at run-time through an easy-to-use graphical interface.
New version program summary
Title of program: RHEEDGr
Catalogue identifier: ADWV
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWV
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Catalogue identifier of previous version: ADUY
Authors of the original program: A. Daniluk
Does the new version supersede the original program: no
Computer for which the new version is designed and others on which it has been tested: Pentium-based PC
Operating systems or monitors under which the new version has been tested: Windows 9x, XP, NT
Programming language used: Borland C++ Builder
Memory required to execute with typical data: more than 1 MB
Number of bits in a word: 64
Number of processors used: 1
Number of lines in distributed program, including test data, etc.: 5797
Number of bytes in distributed program, including test data, etc.: 588 121
Distribution format: tar.gz
Nature of physical problem: Reflection high-energy electron diffraction (RHEED) is a very useful technique for studying growth and surface analysis of thin epitaxial structures prepared by molecular beam epitaxy (MBE). The RHEED technique can reveal, almost instantaneously, changes either in the coverage of the sample surface by adsorbates or in the surface structure of a thin film.
Method of solution: RHEED intensities are calculated within the framework of the general matrix formulation of Peng and Whelan [1] under the one-beam condition.
Reasons for the new version: Responding to user feedback, we designed a graphical package that enables displaying program data at run-time through an easy-to-use graphical interface.
Summary of revisions: In the present form the code is an object-oriented extension of the previous version [2]. Fig. 1 shows the static structure of classes and their possible relationships (i.e., inheritance, association, aggregation and dependency) in the code. The code has been modified and optimized to compile under the C++ Builder integrated development environment (IDE). A graphical user interface (GUI) for the program has been created. The application is a standard multiple document interface (MDI) project from Builder's object repository; the MDI application spawns a child window that resides within the client window, and the main form contains the child object. We have added an original graphical component [3] which has been tested successfully in the C++ Builder programming environment under the Microsoft Windows platform. Fig. 2 shows the internal structure of the component; this diagram is a graphic presentation of the static view, which shows a collection of declarative model elements, such as classes, types, and their relationships. Each of the model elements shown in Fig. 2 is manifested by one header file, Graph2D.h, and one code file, Graph2D.cpp. Fig. 3 sets the stage by showing the package which supplies the C++ Builder elements used in the component. Installation instructions for the TGraph2D.bpk package can be found in the new distribution. The program has been constructed according to the systems development life cycle (SDLC) methodology [4].
Typical running time: The typical running time is machine and user-parameter dependent.
Unusual features of the program: The program is distributed in the form of a main project RHEEDGr.bpr with associated files, and should be compiled using Borland C++ Builder compilers version 5 or later.
Previous MOVES Versions and Documentation
Find all software, user guides, and download and installation instructions for MOVES2010a and MOVES2010. Note that these versions are not valid for SIP and transportation conformity purposes: MOVES2014 and MOVES2014a are the latest versions.
Code OK3 - An upgraded version of OK2 with beam wobbling function
NASA Astrophysics Data System (ADS)
Ogoyski, A. I.; Kawata, S.; Popov, P. H.
2010-07-01
For computer simulations of heavy ion beam (HIB) irradiation onto a target with an arbitrary shape and structure in heavy ion fusion (HIF), the code OK2 was developed and presented in Computer Physics Communications 161 (2004). Code OK3 is an upgrade of OK2 that includes an important capability for wobbling beam illumination. The wobbling beam offers a unique possibility for smoothing the inertial fusion target implosion, so that sufficient fusion energy is released to support a future fusion reactor. New version program summary. Program title: OK3 Catalogue identifier: ADST_v3_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADST_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 221 517 No. of bytes in distributed program, including test data, etc.: 2 471 015 Distribution format: tar.gz Programming language: C++ Computer: PC (Pentium 4, 1 GHz or more recommended) Operating system: Windows or UNIX RAM: 2048 MBytes Classification: 19.7 Catalogue identifier of previous version: ADST_v2_0 Journal reference of previous version: Comput. Phys. Comm. 161 (2004) 143 Does the new version supersede the previous version?: Yes Nature of problem: In heavy ion fusion (HIF), ion cancer therapy, material processing, etc., precise beam energy deposition is essential [1]. Codes OK1 and OK2 were developed to simulate heavy ion beam energy deposition in three-dimensional, arbitrarily shaped targets [2, 3]. Wobbling beam illumination is important for smoothing the beam energy deposition nonuniformity in HIF, so that a uniform target implosion is realized and sufficient fusion output energy is released. Solution method: The OK3 code builds on OK1 and OK2 [2, 3]. The code simulates multi-beam illumination on a target with arbitrary shape and structure, including the beam wobbling function. Reasons for new version: Code OK3 is based on OK2 [3] and uses the same algorithm with some improvements, the most important being the beam wobbling function. Summary of revisions: In code OK3, beams are subdivided into many bunches. The displacement of each bunch center from the initial beam direction is calculated. Code OK3 allows the beamlet number to vary from bunch to bunch, which reduces the calculation error, especially in the case of very complicated mesh structures with large internal holes. The target temperature rises during the time of energy deposition. Some procedures have been improved to perform faster. Energy conservation is checked at each step of the calculation process and corrected if necessary. New procedures included in OK3: Procedure BeamCenterRot( ) rotates the beam axis around the impinging direction of each beam. Procedure BeamletRot( ) rotates the beamlet axes that belong to each beam. Procedure Rotation( ) sets the coordinates of rotated beams and beamlets in the chamber and pellet systems. Procedure BeamletOut( ) calculates the lost energy of ions that have not impinged on the target. Procedure TargetT( ) sets the temperature of the target layer of energy deposition during the irradiation process. Procedure ECL( ) checks the energy conservation law at each step of the energy deposition process. Procedure ECLt( ) performs the final check of the energy conservation law at the end of the deposition process.
Modified procedures in OK3 Procedure InitBeam( ): This procedure initializes the beam radius and coefficients A1, A2, A3, A4 and A5 for Gauss distributed beams [2]. It is enlarged in OK3 and can set beams with radii from 1 to 20 mm. Procedure kBunch( ) is modified to allow beamlet number variation from bunch to bunch during the deposition. Procedure ijkSp( ) and procedure Hole( ) are modified to perform faster. Procedure Espl( ) and procedure ChechE( ) are modified to increase the calculation accuracy. Procedure SD( ) calculates the total relative root-mean-square (RMS) deviation and the total relative peak-to-valley (PTV) deviation in energy deposition non-uniformity. This procedure is not included in code OK2 because of its limited applications (for spherical targets only). It is taken from code OK1 and modified to perform with code OK3. Running time: The execution time depends on the pellet mesh number and the number of beams in the simulated illumination as well as on the beam characteristics (beam radius on the pellet surface, beam subdivision, projectile particle energy and so on). In almost all of the practical running tests performed, the typical running time for one beam deposition is about 30 s on a PC with a CPU of Pentium 4, 2.4 GHz. References:A.I. Ogoyski, et al., Heavy ion beam irradiation non-uniformity in inertial fusion, Phys. Lett. A 315 (2003) 372-377. A.I. Ogoyski, et al., Code OK1 - Simulation of multi-beam irradiation on a spherical target in heavy ion fusion, Comput. Phys. Comm. 157 (2004) 160-172. A.I. Ogoyski, et al., Code OK2 - A simulation code of ion-beam illumination on an arbitrary shape and structure target, Comput. Phys. Comm. 161 (2004) 143-150.
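To make the wobbling geometry concrete, the following hedged Python sketch shows the kind of bunch-centre rotation about the impinging direction that procedures such as BeamCenterRot( ) and Rotation( ) imply; the helper name, wobble radius, and bunch count are illustrative, not OK3's.

import numpy as np

def rotate_about_axis(v, axis, angle):
    # Rodrigues' rotation of vector v about a unit axis by angle (radians).
    axis = axis / np.linalg.norm(axis)
    return (v * np.cos(angle)
            + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

beam_axis = np.array([0.0, 0.0, 1.0])     # impinging direction of the beam
offset = np.array([1.0, 0.0, 0.0])        # bunch-centre displacement (e.g. mm)
n_bunches = 8
for k in range(n_bunches):
    phase = 2.0 * np.pi * k / n_bunches   # wobble phase of bunch k
    centre = rotate_about_axis(offset, beam_axis, phase)
    print(f"bunch {k}: centre displacement = {centre.round(3)}")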
ERIC Educational Resources Information Center
Steinmetz, Jean-Paul; Brunner, Martin; Loarer, Even; Houssemand, Claude
2010-01-01
The Wisconsin Card Sorting Test (WCST) assesses executive and frontal lobe function and can be administered manually or by computer. Despite the widespread application of the 2 versions, the psychometric equivalence of their scores has rarely been evaluated and only a limited set of criteria has been considered. The present experimental study (N =…
Revision of FMM-Yukawa: An adaptive fast multipole method for screened Coulomb interactions
NASA Astrophysics Data System (ADS)
Zhang, Bo; Huang, Jingfang; Pitsianis, Nikos P.; Sun, Xiaobai
2010-12-01
FMM-YUKAWA is a mathematical software package primarily for rapid evaluation of the screened Coulomb interactions of N particles in three-dimensional space. Since its release, we have revised and re-organized the data structure, software architecture, and user interface, to enable more flexible, broader, and easier use of the package. The package and its documentation are available at http://www.fastmultipole.org/, along with a few other closely related mathematical software packages. New version program summary. Program title: FMM-Yukawa Catalogue identifier: AEEQ_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEQ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU GPL 2.0 No. of lines in distributed program, including test data, etc.: 78 704 No. of bytes in distributed program, including test data, etc.: 854 265 Distribution format: tar.gz Programming language: FORTRAN 77, FORTRAN 90, and C. Requires gcc and gfortran version 4.4.3 or later Computer: All Operating system: Any Classification: 4.8, 4.12 Catalogue identifier of previous version: AEEQ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2331 Does the new version supersede the previous version?: Yes Nature of problem: To evaluate the screened Coulomb potential and force field of N charged particles, and to evaluate a convolution-type integral where the Green's function is the fundamental solution of the modified Helmholtz equation. Solution method: The new version of the fast multipole method (FMM) that diagonalizes the multipole-to-local translation operator is applied, with the tree structure adaptive to sample particle locations. Reasons for new version: To handle much larger particle ensembles, to enable the iterative use of the subroutines in a solver, and to remove potential contention in assignments for parallelization. Summary of revisions: The software package FMM-Yukawa has been revised and re-organized in data structure, software architecture, programming methods, and user interface. The revision enables more flexible use of the package and economical use of memory resources. The revised package consists of five stages. The initial stage (stage 1) determines, based on the accuracy requirement and FMM theory, the length of multipole expansions and the number of quadrature points for diagonalization, and loads the quadrature nodes and weights that are computed offline. Stage 2 constructs the oct-tree and interaction lists, with adaptation to the sparsity or density of particles and employing a dynamic memory allocation scheme at every tree level. Stage 3 executes the core FMM subroutine for numerical calculation of the particle interactions. The subroutine can now be used iteratively as in a solver, while the particle locations remain the same. Stage 4 releases the memory allocated in Stage 2 for the adaptive tree and interaction lists. The user can modify the iterative routine easily. When the particle locations are changed, as in a molecular dynamics simulation, stages 2 to 4 can be repeated together. The final stage releases the memory space used for the quadrature and other remaining FMM parameters. Programs at the stage level and at the user interface are re-written in the C programming language, while most of the translation and interaction operations remain in FORTRAN.
As a result of the change in data structures and memory allocation, the revised package can accommodate much larger particle ensembles while maintaining the same accuracy-efficiency performance. The new version is also developed as an important precursor to its parallel counterpart on multi-core or many-core processors in a shared-memory programming environment. In particular, in order to ensure mutual exclusion in concurrent updates without incurring extra latency, we have replaced all the assignment statements at a source box that put its data to multiple target boxes with assignments at every target box that gather data from source boxes. This amounts to replacing the column version of matrix-vector multiplication with the row version; the matrix here, however, is in compressive representation. Sufficient care is taken in the revision not to alter the algorithmic complexity or numerical behavior, as concurrent writing potentially takes place in the upward calculation of the multipole expansion coefficients, the interactions at every level of the FMM tree, and the downward calculation of the local expansion coefficients. The software modules and their compositions are also organized according to the stages in which they are used. Demonstration files and makefiles for merging the user routines and the library routines are provided. Restrictions: The accuracy requirement is specified as three or six digits. Higher multiples of three digits will be allowed in a later version. Finer decimation in digits for accuracy specification may or may not be necessary. Unusual features: Ready for customized use, and instrumental in expressing the concurrency and dependency needed for efficient parallelization. Running time: The running time depends linearly on the number N of particles and varies with the characteristics of the particle distribution. It also depends on the accuracy requirement; a higher accuracy requirement takes relatively longer. The code outperforms the direct summation method when N⩾750.
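The five-stage organization suggests the following call pattern. This is a schematic with hypothetical Python wrapper names (the real entry points are the package's FORTRAN/C routines), and stage 3 is stood in for by a direct O(N^2) screened-Coulomb sum purely so the sketch executes:

import numpy as np

def fmm_init(digits):                 # stage 1 (stand-in): expansion length, quadrature
    return {"p": 9 if digits == 6 else 3}

def fmm_build_tree(positions):        # stage 2 (stand-in): adaptive oct-tree + lists
    return {"pos": positions}

def fmm_evaluate(tree, charges, kappa=0.5):
    # Stage 3 (stand-in): direct screened-Coulomb sum, used here only so the
    # sketch runs; the real package evaluates this via the FMM in O(N).
    pos = tree["pos"]
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)       # exclude self-interaction
    return (charges[None, :] * np.exp(-kappa * d) / d).sum(axis=1)

def fmm_free_tree(tree):              # stage 4: release the tree storage
    tree.clear()

def fmm_finalize():                   # stage 5: release quadrature/FMM parameters
    pass

rng = np.random.default_rng(0)
positions = rng.random((100, 3))
charges = rng.random(100)
fmm_init(digits=6)
for step in range(3):                 # positions change between outer steps
    tree = fmm_build_tree(positions)
    for it in range(2):               # iterative reuse while positions stay fixed
        potentials = fmm_evaluate(tree, charges)
    fmm_free_tree(tree)
    positions = positions + 0.01 * rng.standard_normal(positions.shape)
fmm_finalize()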
COMPPAP - COMPOSITE PLATE BUCKLING ANALYSIS PROGRAM (IBM PC VERSION)
NASA Technical Reports Server (NTRS)
Smith, J. P.
1994-01-01
The Composite Plate Buckling Analysis Program (COMPPAP) was written to help engineers determine buckling loads of orthotropic (or isotropic) irregularly shaped plates without requiring hand calculations from design curves or extensive finite element modeling. COMPPAP is a one-element finite element program that utilizes high-order displacement functions. The high order of the displacement functions enables the user to produce results more accurate than traditional h-finite elements. This program uses these high-order displacement functions to perform a plane stress analysis of a general plate followed by a buckling calculation based on the stresses found in the plane stress solution. The current version assumes a flat plate (constant thickness) subject to a constant edge load (normal or shear) on one or more edges. COMPPAP uses the power method to find the eigenvalues of the buckling problem. The power method provides an efficient solution when only one eigenvalue is desired. Once the eigenvalue is found, the eigenvector, which corresponds to the plate buckling mode shape, results as a by-product. A positive feature of the power method is that the dominant eigenvalue is the first found, which in this case is the plate buckling load. The reported eigenvalue expresses a load factor to induce plate buckling. COMPPAP is written in ANSI FORTRAN 77. Two machine versions are available from COSMIC: a PC version (MSC-22428), which is for IBM PC 386 series and higher computers and compatibles running MS-DOS; and a UNIX version (MSC-22286). The distribution medium for both machine versions includes source code for both single and double precision versions of COMPPAP. The PC version includes source code which has been optimized for implementation within DOS memory constraints as well as sample executables for both the single and double precision versions of COMPPAP. The double precision versions of COMPPAP have been successfully implemented on an IBM PC 386 compatible running MS-DOS, a Sun4 series computer running SunOS, an HP-9000 series computer running HP-UX, and a CRAY X-MP series computer running UNICOS. COMPPAP requires 1Mb of RAM and the BLAS and LINPACK math libraries, which are included on the distribution medium. The COMPPAP documentation provides instructions for using the commercial post-processing package PATRAN for graphical interpretation of COMPPAP output. The UNIX version includes two electronic versions of the documentation: one in LaTeX format and one in PostScript format. The standard distribution medium for the PC version (MSC-22428) is a 5.25 inch 1.2Mb MS-DOS format diskette. The standard distribution medium for the UNIX version (MSC-22286) is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. COMPPAP was developed in 1992.
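As a reminder of the numerical strategy the abstract describes, the following minimal Python sketch applies the power method to an ordinary eigenproblem; the buckling problem itself is a generalized eigenproblem, and the toy matrix here is purely illustrative.

import numpy as np

def power_method(A, tol=1e-10, max_iter=10000):
    # Iterate A @ v; the iterate converges to the dominant eigenvector and the
    # norm ratio to the dominant eigenvalue (the buckling load factor), with
    # the eigenvector (mode shape) obtained as a by-product.
    v = np.ones(A.shape[0])
    lam_old = 0.0
    for _ in range(max_iter):
        w = A @ v
        lam = np.linalg.norm(w)
        v = w / lam
        if abs(lam - lam_old) < tol * lam:
            break
        lam_old = lam
    return lam, v

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])            # toy stand-in for the buckling matrix
lam, mode = power_method(A)
print(lam)                            # ~5.0: the dominant eigenvalue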
COMPPAP - COMPOSITE PLATE BUCKLING ANALYSIS PROGRAM (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Smith, J. P.
1994-01-01
The Composite Plate Buckling Analysis Program (COMPPAP) was written to help engineers determine buckling loads of orthotropic (or isotropic) irregularly shaped plates without requiring hand calculations from design curves or extensive finite element modeling. COMPPAP is a one-element finite element program that utilizes high-order displacement functions. The high order of the displacement functions enables the user to produce results more accurate than traditional h-finite elements. This program uses these high-order displacement functions to perform a plane stress analysis of a general plate followed by a buckling calculation based on the stresses found in the plane stress solution. The current version assumes a flat plate (constant thickness) subject to a constant edge load (normal or shear) on one or more edges. COMPPAP uses the power method to find the eigenvalues of the buckling problem. The power method provides an efficient solution when only one eigenvalue is desired. Once the eigenvalue is found, the eigenvector, which corresponds to the plate buckling mode shape, results as a by-product. A positive feature of the power method is that the dominant eigenvalue is the first found, which in this case is the plate buckling load. The reported eigenvalue expresses a load factor to induce plate buckling. COMPPAP is written in ANSI FORTRAN 77. Two machine versions are available from COSMIC: a PC version (MSC-22428), which is for IBM PC 386 series and higher computers and compatibles running MS-DOS; and a UNIX version (MSC-22286). The distribution medium for both machine versions includes source code for both single and double precision versions of COMPPAP. The PC version includes source code which has been optimized for implementation within DOS memory constraints as well as sample executables for both the single and double precision versions of COMPPAP. The double precision versions of COMPPAP have been successfully implemented on an IBM PC 386 compatible running MS-DOS, a Sun4 series computer running SunOS, an HP-9000 series computer running HP-UX, and a CRAY X-MP series computer running UNICOS. COMPPAP requires 1Mb of RAM and the BLAS and LINPACK math libraries, which are included on the distribution medium. The COMPPAP documentation provides instructions for using the commercial post-processing package PATRAN for graphical interpretation of COMPPAP output. The UNIX version includes two electronic versions of the documentation: one in LaTeX format and one in PostScript format. The standard distribution medium for the PC version (MSC-22428) is a 5.25 inch 1.2Mb MS-DOS format diskette. The standard distribution medium for the UNIX version (MSC-22286) is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. COMPPAP was developed in 1992.
VizieR Online Data Catalog: SKY2000 Master Catalog, Version 5 (Myers+ 2006)
NASA Astrophysics Data System (ADS)
Myers, J. R.; Sande, C. B.; Miller, A. C.; Warren, W. H., Jr.; Tracewell, D. A.
2015-02-01
The SKYMAP Star Catalog System consists of a Master Catalog stellar database and a collection of utility software designed to create and maintain the database and to generate derivative mission star catalogs (run catalogs). It contains an extensive compilation of information on almost 300000 stars brighter than 8.0 mag. The original SKYMAP Master Catalog was generated in the early 1970s. Incremental updates and corrections were made over the following years, but the first complete revision of the source data occurred with Version 4.0. This revision also produced a unique, consolidated source of astrometric information which can be used by the astronomical community. The derived quantities were removed, and wideband and photometric data in the R (red) and I (infrared) systems were added. Version 4 of the SKY2000 Master Catalog was completed in April 2002; it marks the global replacement of the variability identifier and variability data fields. More details can be found in the description file sky2kv4.pdf. The SKY2000 Version 5 Revision 4 Master Catalog differs from Revision 3 in that MK and HD spectral types have been added from the Catalogue of Stellar Spectral Classifications (B. A. Skiff of Lowell Observatory, 2005), which has been assigned source code 50 in this process. 9622 entries now have MK types from this source, while 3976 entries have HD types from this source. SKY2000 V5 R4 also differs globally from preceding MC versions in that the Galactic coordinate computations performed by UPDATE have been increased in accuracy, so that differences from the same quantities from other sources are now typically in the last decimal places carried in the MC. This version supersedes the previous versions 1 (V/95), 2 (V/102), 3 (V/105) and 4 (V/109). (6 data files).
Automatic Blocking Of QR and LU Factorizations for Locality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yi, Q; Kennedy, K; You, H
2004-03-26
QR and LU factorizations for dense matrices are important linear algebra computations that are widely used in scientific applications. To perform these computations efficiently on modern computers, the factorization algorithms need to be blocked when operating on large matrices, in order to exploit the deep cache hierarchy prevalent in today's computer memory systems. Because both QR (based on Householder transformations) and LU factorization algorithms contain complex loop structures, few compilers can fully automate the blocking of these algorithms. Though linear algebra libraries such as LAPACK provide manually blocked implementations of these algorithms, automatically generating blocked versions of the computations offers additional benefits, such as automatic adaptation to different blocking strategies. This paper demonstrates how to apply an aggressive loop transformation technique, dependence hoisting, to produce efficient blockings for both QR and LU with partial pivoting. We present different blocking strategies that can be generated by our optimizer and compare the performance of auto-blocked versions with manually tuned versions in LAPACK, using reference BLAS, ATLAS BLAS, and native BLAS specially tuned for the underlying machine architectures.
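For readers unfamiliar with blocking, the sketch below shows the basic idea on a right-looking blocked LU without pivoting, written in Python/NumPy. It illustrates loop blocking for locality only; it is not the paper's dependence-hoisting transformation, and the real algorithms use partial pivoting.

import numpy as np
from scipy.linalg import solve_triangular

def blocked_lu(A, b=32):
    # Right-looking blocked LU without pivoting; returns the combined LU factors.
    A = A.copy()
    n = A.shape[0]
    for k in range(0, n, b):
        e = min(k + b, n)
        for j in range(k, e):            # unblocked factorization of the panel
            A[j + 1:, j] /= A[j, j]
            A[j + 1:, j + 1:e] -= np.outer(A[j + 1:, j], A[j, j + 1:e])
        if e < n:
            # Block row of U: triangular solve with the unit lower diagonal block.
            A[k:e, e:] = solve_triangular(A[k:e, k:e], A[k:e, e:],
                                          lower=True, unit_diagonal=True)
            # Trailing update as one large matrix-matrix product (cache friendly).
            A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]
    return A

rng = np.random.default_rng(0)
M = rng.random((96, 96)) + 96 * np.eye(96)   # diagonally dominant: safe w/o pivoting
F = blocked_lu(M)
L = np.tril(F, -1) + np.eye(96)
U = np.triu(F)
assert np.allclose(L @ U, M)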
A finite difference Hartree-Fock program for atoms and diatomic molecules
NASA Astrophysics Data System (ADS)
Kobus, Jacek
2013-03-01
The newest version of the two-dimensional finite difference Hartree-Fock program for atoms and diatomic molecules is presented. This is an updated and extended version of the program published in this journal in 1996. It can be used to obtain reference, Hartree-Fock limit values of total energies and multipole moments for a wide range of diatomic molecules and their ions in order to calibrate existing and develop new basis sets, calculate (hyper)polarizabilities (αzz, βzzz, γzzzz, Az,zz, Bzz,zz) of atoms, homonuclear and heteronuclear diatomic molecules and their ions via the finite field method, perform DFT-type calculations using LDA or B88 exchange functionals and LYP or VWN correlation ones or the self-consistent multiplicative constant method, perform one-particle calculations with (smooth) Coulomb and Kramers-Henneberger potentials, and take into account finite nucleus models. The program is easy to install and compile (tarball+configure+make) and can be used to perform calculations within double- or quadruple-precision arithmetic. Catalogue identifier: ADEB_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADEB_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 2 No. of lines in distributed program, including test data, etc.: 171 196 No. of bytes in distributed program, including test data, etc.: 9 481 802 Distribution format: tar.gz Programming language: Fortran 77, C. Computer: any 32- or 64-bit platform. Operating system: Unix/Linux. RAM: Case dependent, from a few MB to many GB Classification: 16.1. Catalogue identifier of previous version: ADEB_v1_0 Journal reference of previous version: Comput. Phys. Comm. 98 (1996) 346 Does the new version supersede the previous version?: Yes Nature of problem: The program finds virtually exact solutions of the Hartree-Fock and density functional theory type equations for atoms, diatomic molecules and their ions. The lowest energy eigenstates of a given irreducible representation and spin can be obtained. The program can be used to perform one-particle calculations with (smooth) Coulomb and Kramers-Henneberger potentials and also DFT-type calculations using LDA or B88 exchange functionals and LYP or VWN correlation ones or the self-consistent multiplicative constant method. Solution method: Single-particle two-dimensional numerical functions (orbitals) are used to construct an antisymmetric many-electron wave function of the restricted open-shell Hartree-Fock model. The orbitals are obtained by solving the Hartree-Fock equations as coupled two-dimensional second-order (elliptic) partial differential equations (PDEs). The Coulomb and exchange potentials are obtained as solutions of the corresponding Poisson equations. The PDEs are discretized by the eighth-order central difference stencil on a two-dimensional single grid, and the resulting large and sparse system of linear equations is solved by the (multicolour) successive overrelaxation ((MC)SOR) method. The self-consistent-field iterations are interwoven with the (MC)SOR ones, and orbital energies and normalization factors are used to monitor the convergence. The accuracy of solutions depends mainly on the grid and the system under consideration, which means that within double-precision arithmetic one can obtain orbitals and energies having up to 12 significant figures. If more accurate results are needed, quadruple-precision floating-point arithmetic can be used.
Reasons for new version: Additional features, many modifications and corrections, improved convergence rate, overhauled code and documentation. Summary of revisions: see the ChangeLog found in the tar.gz archive. Restrictions: The present version of the program is restricted to 60 orbitals. The maximum grid size is determined at compilation time. Unusual features: The program uses two C routines for allocating and deallocating memory. Several BLAS (Basic Linear Algebra Subprograms) routines are emulated by the program; when possible they should be replaced by their library equivalents. Additional comments: The automake and autoconf tools are required to build and compile the program; checked with the f77, gfortran and ifort compilers. Running time: Very case dependent - from a few CPU seconds for H2 on a small grid up to several weeks for Hartree-Fock-limit calculations for 40-50 electron molecules.
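As a toy illustration of the SOR iteration at the heart of the solver, the following Python sketch relaxes a 2-D Poisson problem with the simplest second-order stencil; the program itself uses an eighth-order stencil, multicolour sweeps, and SCF iterations interwoven with the relaxation.

import numpy as np

n, omega, sweeps = 30, 1.9, 500        # interior grid size, relaxation factor
h = 1.0 / (n + 1)                      # grid spacing on the unit square
u = np.zeros((n + 2, n + 2))           # solution incl. boundary (u = 0 there)
f = np.ones((n + 2, n + 2))            # right-hand side of -Laplace(u) = f
for sweep in range(sweeps):
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            gs = 0.25 * (u[i - 1, j] + u[i + 1, j] + u[i, j - 1] + u[i, j + 1]
                         + h * h * f[i, j])    # Gauss-Seidel value
            u[i, j] += omega * (gs - u[i, j])  # over-relaxed update
print(u[n // 2 + 1, n // 2 + 1])       # centre value after relaxation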
MSFC crack growth analysis computer program, version 2 (users manual)
NASA Technical Reports Server (NTRS)
Creager, M.
1976-01-01
An updated version of the George C. Marshall Space Flight Center Crack Growth Analysis Program is described. The updated computer program has significantly expanded capabilities over the original one, including an extensive expansion of the library of stress intensity factors, plotting capability, increased design iteration capability, and the ability to perform proof test logic analysis. The technical approaches used within the computer program are presented, and the input and output formats and options are described. Details of the stress intensity equations, example data, and example problems are presented.
An evaluation of Lake States STEMS85.
Margaret R. Holdaway
1986-01-01
An updated version of the Lake States variant of STEMS is evaluated and compared with the previous version. The new version is slightly more accurate and precise. The strengths and weaknesses of this tree growth projection system are also identified.
MISR Level 3 Albedo and Cloud Versioning
Atmospheric Science Data Center
2016-11-04
... Albedo Versioning statement for changes to the Level 2 data being summarized. ... of Observation" data, which in the previous version was missing many Level 2 observations. The actual Level 3 averages contained all of ...
Atmospheric Science Data Center
2013-08-06
... error. Version 4: The MOPITT/NCAR team discovered a data processing error that affects all V4 products previously ... products are available on News and Status on the MOPITT team web site. ...
Experimental demonstration of blind quantum computing
NASA Astrophysics Data System (ADS)
Barz, Stefanie; Kashefi, Elham; Broadbent, Anne; Fitzsimons, Joe; Zeilinger, Anton; Walther, Philip
2012-02-01
Quantum computers are among the most promising applications of quantum-enhanced technologies. Quantum effects such as superposition and entanglement enable computational speed-ups that are unattainable using classical computers. The challenges in realising quantum computers suggest that in the near future, only a few facilities worldwide will be capable of operating such devices. In order to exploit these computers, users would seemingly have to give up their privacy. It was recently shown that this is not the case and that, via the universal blind quantum computation protocol, quantum mechanics provides a way to guarantee that the user's data remain private. Here, we demonstrate the first experimental version of this protocol using polarisation-entangled photonic qubits. We demonstrate various blind one- and two-qubit gate operations as well as blind versions of Deutsch's and Grover's algorithms. When the technology to build quantum computers becomes available, this will become an important privacy-preserving feature of quantum information processing.
JPL Development Ephemeris number 96
NASA Technical Reports Server (NTRS)
Standish, E. M., Jr.; Keesey, M. S. W.; Newhall, X. X.
1976-01-01
The fourth issue of JPL Planetary Ephemerides, designated JPL Development Ephemeris No. 96 (DE96), is described. This ephemeris replaces a previous issue which has become obsolete since its release in 1969. Improvements in this issue include more recent and more accurate observational data, new types of data, better processing of the data, and refined equations of motion which more accurately describe the actual physics of the solar system. The descriptions in this report include these new features as well as the new export version of the ephemeris. The tapes and requisite software will be distributed through the NASA Computer Software Management and Information Center (COSMIC) at the University of Georgia.
A premier analysis of supersymmetric closed string tachyon cosmology
NASA Astrophysics Data System (ADS)
Vázquez-Báez, V.; Ramírez, C.
2018-04-01
From a previously found worldline supersymmetric formulation for the effective action of the closed string tachyon in an FRW background, the Hamiltonian of the theory is constructed by means of the Dirac procedure and written in a quantum version. Using the supersymmetry algebra we are able to find solutions to the Wheeler-DeWitt equation via a simpler set of first-order differential equations. Finally, for the k = 0 case, we compute the expectation value of the scale factor with a suitable potential also favored in the present literature. We give some interpretations of the results and outline future lines of work on this matter.
Multistage Planetary Power Transmissions
NASA Technical Reports Server (NTRS)
Hadden, G. B.; Dyba, G. J.; Ragen, M. A.; Kleckner, R. J.; Sheynin, L.
1986-01-01
PLANETSYS simulates the thermomechanical performance of a multistage planetary power transmission. Two versions of the code were developed: an SKF version and a NASA version. The major function of the program is to compute the performance characteristics of the planet bearings for any of six kinematic inversions. PLANETSYS solves heat-balance equations for either steady-state or transient thermal conditions and produces temperature maps for the mechanical system.
NASA Astrophysics Data System (ADS)
Kastner, Margaret E.; Vasbinder, Eric; Kowalcyzk, Deborah; Jackson, Sean; Giammalvo, Joseph; Braun, James; Dimarco, Keith
2000-09-01
Literature Cited
1989-01-17
UNCLASSIFIED Ada Compiler Validation Summary Report: Compiler Name: Harris Ada, Version 5.0. Certificate Number ... United States Department of Defense, Washington DC 20301-3081. Report period covered: 17 Jan 1989 to 17 Jan 1990. Harris Corporation, Computer Systems Division, Harris Ada.
An Improved Version of the NASA-Lockheed Multielement Airfoil Analysis Computer Program
NASA Technical Reports Server (NTRS)
Brune, G. W.; Manke, J. W.
1978-01-01
An improved version of the NASA-Lockheed computer program for the analysis of multielement airfoils is described. The predictions of the program are evaluated by comparison with recent experimental high lift data including lift, pitching moment, profile drag, and detailed distributions of surface pressures and boundary layer parameters. The results of the evaluation show that the contract objectives of improving program reliability and accuracy have been met.
2013-06-03
... optimizing a network's design structure. ORA uses a Java interface for ease of use, and a C++ computational backend. The most current version of ORA (3.0.8.5) software is available on the casos website: http://casos.cs.cmu.edu ... Eigenvector Centrality: node most connected to other highly connected nodes; assists in identifying those who can mobilize others. Entity Class ...
Overview and Evaluation of the Community Multiscale Air Quality (CMAQ) Modeling System Version 5.2
A new version of the Community Multiscale Air Quality (CMAQ) model, version 5.2 (CMAQv5.2), is currently being developed, with a planned release date in 2017. The new model includes numerous updates from the previous version of the model (CMAQv5.1). Specific updates include a new...
Torak, L.J.
1993-01-01
A MODular Finite-Element, digital-computer program (MODFE) was developed to simulate steady- or unsteady-state, two-dimensional or axisymmetric ground-water flow. The modular structure of MODFE places the computationally independent tasks that are performed routinely by digital-computer programs simulating ground-water flow into separate subroutines, which are executed from the main program by control statements. Each subroutine consists of complete sets of computations, or modules, which are identified by comment statements, and can be modified by the user without affecting unrelated computations elsewhere in the program. Simulation capabilities can be added or modified by either adding or modifying subroutines that perform specific computational tasks, and the modular-program structure allows the user to create versions of MODFE that contain only the simulation capabilities that pertain to the ground-water problem of interest. MODFE is written in a Fortran programming language that makes it virtually device independent and compatible with desk-top personal computers and large mainframes. MODFE uses computer storage and execution time efficiently by taking advantage of symmetry and sparseness within the coefficient matrices of the finite-element equations. Parts of the matrix coefficients are computed and stored as single-subscripted variables, which are assembled into a complete coefficient just prior to solution. Computer storage is reused during simulation to decrease storage requirements. Descriptions of subroutines that execute the computational steps of the modular-program structure are given in tables that cross reference the subroutines with particular versions of MODFE. Programming details of linear and nonlinear hydrologic terms are provided. Structure diagrams for the main programs show the order in which subroutines are executed for each version and illustrate some of the linear and nonlinear versions of MODFE that are possible. Computational aspects of changing stresses and boundary conditions with time and of mass-balance and error terms are given for each hydrologic feature. Program variables are listed and defined according to their occurrence in the main programs and in subroutines. Listings of the main programs and subroutines are given.
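The single-subscripted storage idea can be illustrated with a generic packed lower-triangle scheme; the indexing below is a common textbook layout, not necessarily MODFE's actual one.

import numpy as np

def pack_index(i, j):
    # Position of entry (i, j), i >= j, in a row-major packed lower triangle.
    return i * (i + 1) // 2 + j

n = 4
packed = np.zeros(n * (n + 1) // 2)     # single-subscripted storage
k_e = np.array([[2.0, -1.0],
                [-1.0, 2.0]])           # toy element contribution
dofs = [1, 3]                           # global equation numbers of the element
for a, gi in enumerate(dofs):
    for b, gj in enumerate(dofs):
        if gi >= gj:                    # store the lower triangle only
            packed[pack_index(gi, gj)] += k_e[a, b]

def full_entry(i, j):
    # Recover A[i, j] from the packed array using symmetry.
    return packed[pack_index(max(i, j), min(i, j))]

print(full_entry(1, 3), full_entry(3, 1))   # equal by symmetry: -1.0 -1.0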
Assessment of radionuclide databases in CAP88 mainframe version 1.0 and Windows-based version 3.0.
LaBone, Elizabeth D; Farfán, Eduardo B; Lee, Patricia L; Jannik, G Timothy; Donnelly, Elizabeth H; Foley, Trevor Q
2009-09-01
In this study the radionuclide databases for two versions of the Clean Air Act Assessment Package-1988 (CAP88) computer model were assessed in detail. CAP88 estimates radiation dose and the risk of health effects to human populations from radionuclide emissions to air. This program is used by several U.S. Department of Energy (DOE) facilities to comply with National Emission Standards for Hazardous Air Pollutants regulations. CAP88 Mainframe, referred to as version 1.0 on the U.S. Environmental Protection Agency Web site (http://www.epa.gov/radiation/assessment/CAP88/), was the very first CAP88 version, released in 1988. Some DOE facilities, including the Savannah River Site, still employ this version (1.0), while others use the more user-friendly personal computer Windows-based version 3.0 released in December 2007. Version 1.0 uses the program RADRISK, based on International Commission on Radiological Protection Publication 30, as its radionuclide database. Version 3.0 uses half-life, dose, and risk factor values based on Federal Guidance Report 13. Differences in these values could cause different results for the same input exposure data (same scenario), depending on which version of CAP88 is used. Consequently, the differences between the two versions are being assessed in detail at Savannah River National Laboratory. The version 1.0 and 3.0 database files contain 496 and 838 radionuclides, respectively, and though one would expect the newer version to include all 496 radionuclides, 35 radionuclides listed in version 1.0 are not included in version 3.0. The majority of these have either extremely short or extremely long half-lives or are no longer in production; however, some of the short-lived radionuclides might produce progeny of great interest at DOE sites. In addition, 122 radionuclides were found to have different half-lives in the two versions, with 21 differing by more than 3 percent and 12 by more than 10 percent.
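The kind of database cross-check described above amounts to simple set and percent-difference arithmetic, sketched here in Python with made-up half-life values (not taken from the RADRISK or FGR 13 files):

v1 = {"H-3": 12.35, "Co-60": 5.271, "Cs-137": 30.0}              # half-lives (years)
v3 = {"H-3": 12.32, "Co-60": 5.2711, "Cs-137": 30.07, "Pu-241": 14.35}

missing = sorted(set(v1) - set(v3))      # nuclides dropped in the newer version
print("in v1.0 but not v3.0:", missing)
for nuclide in sorted(set(v1) & set(v3)):
    pct = 100.0 * abs(v1[nuclide] - v3[nuclide]) / v1[nuclide]
    if pct > 3.0:                        # flag differences over a 3% threshold
        print(f"{nuclide}: {pct:.1f}% half-life difference")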
SEQ-POINTER: Next generation, planetary spacecraft remote sensing science observation design tool
NASA Technical Reports Server (NTRS)
Boyer, Jeffrey S.
1994-01-01
Since Mariner, NASA-JPL planetary missions have been supported by ground software to plan and design remote sensing science observations. The software used by the science and sequence designers to plan and design observations has evolved with mission and technological advances. The original program, PEGASIS (Mariners 4, 6, and 7), was re-engineered as POGASIS (Mariner 9, Viking, and Mariner 10), and again later as POINTER (Voyager and Galileo). Each of these programs was developed under technological, political, and fiscal constraints which limited their adaptability to other missions and spacecraft designs. Implementation of a multi-mission tool, SEQ POINTER, under the auspices of the JPL Multimission Operations Systems Office (MOSO) is in progress. This version has been designed to address the limitations experienced on previous versions as they were being adapted to a new mission and spacecraft. The tool has been modularly designed with subroutine interface structures to support interchangeable celestial body and spacecraft definition models. The computational and graphics modules have also been designed to interface with data collected from previous spacecraft, or on-going observations, which describe the surface of each target body. These enhancements make SEQ POINTER a candidate for low-cost mission usage when a remote sensing science observation design capability is required. The current and planned capabilities of the tool will be discussed. The presentation will also include a 5-10 minute video presentation demonstrating the capabilities of a proto-Cassini Project version that was adapted to test the tool. The work described in this abstract was performed by the Jet Propulsion Laboratory, California Institute of Technology, under contract to the National Aeronautics and Space Administration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, S; Dunn, JB; Wang, M
2012-06-07
The Carbon Calculator for Land Use Change from Biofuels Production (CCLUB) calculates carbon emissions from land use change (LUC) for four different ethanol production pathways: corn grain ethanol and cellulosic ethanol from corn stover, miscanthus, and switchgrass. This document discusses the version of CCLUB released May 31, 2012, which includes corn, as did the previous CCLUB version, and three cellulosic feedstocks: corn stover, miscanthus, and switchgrass. CCLUB calculations are based upon two data sets: land change areas and above- and below-ground carbon content. Table 1 identifies where these data are stored and used within the CCLUB model, which is built in MS Excel. Land change area data are from Purdue University's Global Trade Analysis Project (GTAP) model, a computable general equilibrium (CGE) economic model. Section 2 describes the GTAP data CCLUB uses and how these data were modified to reflect shrubland transitions. Feedstock- and spatially-explicit below-ground carbon content data for the United States were generated with a surrogate model for CENTURY's soil organic carbon sub-model (Kwon and Hudson 2010), as described in Section 3. CENTURY is a soil organic matter model developed by Parton et al. (1987). The previous CCLUB version used coarser domestic carbon emission factors. Above-ground non-soil carbon content data for forest ecosystems were sourced from the USDA/NCIAS Carbon Online Estimator (COLE), as explained in Section 4. We discuss emission factors used for calculation of international greenhouse gas (GHG) emissions in Section 5. Temporal issues associated with modeling LUC emissions are the topic of Section 6. Finally, in Section 7 we provide a step-by-step guide to using CCLUB and obtaining results.
Spacecraft Orbit Design and Analysis (SODA), version 1.0 user's guide
NASA Technical Reports Server (NTRS)
Stallcup, Scott S.; Davis, John S.
1989-01-01
The Spacecraft Orbit Design and Analysis (SODA) computer program, Version 1.0 is described. SODA is a spaceflight mission planning system which consists of five program modules integrated around a common database and user interface. SODA runs on a VAX/VMS computer with an EVANS & SUTHERLAND PS300 graphics workstation. BOEING RIM-Version 7 relational database management system performs transparent database services. In the current version three program modules produce an interactive three dimensional (3D) animation of one or more satellites in planetary orbit. Satellite visibility and sensor coverage capabilities are also provided. One module produces an interactive 3D animation of the solar system. Another module calculates cumulative satellite sensor coverage and revisit time for one or more satellites. Currently Earth, Moon, and Mars systems are supported for all modules except the solar system module.
Corno, Giulia; Molinari, Guadalupe; Baños, Rosa Maria
2016-01-01
The aim of this study is to explore the psychometric properties of an affect scale, the Scale of Positive and Negative Experience (SPANE), in an Italian-speaking population. The results demonstrate that the Italian version of the SPANE has psychometric properties similar to those of the original and previous versions, presenting satisfactory reliability and factorial validity. The results of the Confirmatory Factor Analysis support the expected two-factor structure, positive and negative feelings, which characterized the previous versions. As expected, measures of negative affect, anxiety, negative future expectancies, and depression correlated positively with the negative experiences SPANE subscale and negatively with the positive experiences SPANE subscale. The use of this instrument provides clinically useful information about a person's overall emotional experience and is an indicator of well-being. Although further studies are required to confirm the psychometric characteristics of the scale, the SPANE Italian version is expected to improve theoretical and empirical research on the well-being of the Italian population.
Computer-based training for safety: comparing methods with older and younger workers.
Wallen, Erik S; Mulloy, Karen B
2006-01-01
Computer-based safety training is becoming more common and is being delivered to an increasingly aging workforce. Aging results in a number of changes that make it more difficult to learn from certain types of computer-based training. Instructional designs derived from cognitive learning theories may overcome some of these difficulties. Three versions of computer-based respiratory safety training were shown to older and younger workers, who then took high- and low-level learning tests. Younger workers did better overall. Both older and younger workers did best with the version containing text with pictures and audio narration. Computer-based training with pictures and audio narration may be beneficial for workers over 45 years of age. Computer-based safety training has advantages, but workers of different ages may benefit differently. Computer-based safety programs should be designed and selected based on their ability to effectively train older as well as younger learners.
A self-analysis of the NASA-TLX workload measure.
Noyes, Jan M; Bruneau, Daniel P J
2007-04-01
Computer use and, more specifically, the administration of tests and materials online continue to proliferate. A number of subjective, self-report workload measures exist, but the National Aeronautics and Space Administration-Task Load Index (NASA-TLX) is probably the most well known and used. The aim of this paper is to consider the workload costs associated with the computer-based and paper versions of the NASA-TLX measure. It was found that there is a significant difference between the workload scores for the two media, with the computer version of the NASA-TLX incurring more workload. This has implications for the practical use of the NASA-TLX as well as for other computer-based workload measures.
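For context, the overall NASA-TLX score is conventionally formed from six subscale ratings (0-100) weighted by 15 pairwise comparisons, with the weights summing to 15. A minimal Python sketch with made-up example data:

ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 30}
weights = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}   # tallies sum to 15
assert sum(weights.values()) == 15
overall = sum(ratings[k] * weights[k] for k in ratings) / 15.0
print(f"weighted TLX workload: {overall:.1f}")                # on a 0-100 scale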
Review and verification of CARE 3 mathematical model and code
NASA Technical Reports Server (NTRS)
Rose, D. M.; Altschul, R. E.; Manke, J. W.; Nelson, D. L.
1983-01-01
The CARE III mathematical model and code verification performed by Boeing Computer Services are documented. The mathematical model was verified for permanent and intermittent faults; the transient fault model was not addressed. The code verification was performed on CARE III, Version 3. CARE III Version 4, which corrects deficiencies identified in Version 3, is being developed.
ERIC Educational Resources Information Center
Sander, Elisabeth; Heiß, Andrea
2014-01-01
Three different versions of a learning program on trigonometry were compared, a program controlled, non-interactive version (CG), an interactive, conflict inducing version (EG 1), and an interactive one which was supposed to reduce the occurrence of a cognitive conflict regarding the central problem solution (EG 2). Pupils (N = 101) of a…
BehavePlus fire modeling system, version 5.0: Variables
Patricia L. Andrews
2009-01-01
This publication has been revised to reflect updates to version 4.0 of the BehavePlus software. It was originally published as The BehavePlus fire modeling system, version 4.0: Variables in July 2008. The BehavePlus fire modeling system is a computer program based on mathematical models that describe wildland fire behavior and effects and the...
MAGNA (Materially and Geometrically Nonlinear Analysis). Part I. Finite Element Analysis Manual.
1982-12-01
Instructions are provided for operating the program, modifying storage capacity, preparing input data, estimating computer run times, and interpreting the output. The manual's chapters cover the CDC, CRAY, and VAX program versions (job control language, reserved file names, modification of storage capacity, and typical execution times on the CDC and CRAY-1 computers), followed by a chapter on input data.
Program Processes Thermocouple Readings
NASA Technical Reports Server (NTRS)
Quave, Christine A.; Nail, William, III
1995-01-01
Digital Signal Processor for Thermocouples (DART) computer program implements precise and fast method of converting voltage to temperature for large-temperature-range thermocouple applications. Written using LabVIEW software. DART available only as object code for use on Macintosh II FX or higher-series computers running System 7.0 or later and IBM PC-series and compatible computers running Microsoft Windows 3.1. Macintosh version of DART (SSC-00032) requires LabVIEW 2.2.1 or 3.0 for execution. IBM PC version (SSC-00031) requires LabVIEW 3.0 for Windows 3.1. LabVIEW software product of National Instruments and not included with program.
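Voltage-to-temperature conversion of the kind DART performs is typically done with piecewise inverse polynomials, as in standard thermocouple tables. The Python sketch below shows the evaluation by Horner's rule; the coefficients are placeholders, not NIST values and not DART's:

def volts_to_temp(v_mv, coeffs):
    # Evaluate T = sum(c_k * v^k) by Horner's rule; v in mV, T in deg C.
    t = 0.0
    for c in reversed(coeffs):
        t = t * v_mv + c
    return t

demo_coeffs = [0.0, 25.0, -0.5, 0.01]   # hypothetical low-order fit, one range
print(volts_to_temp(4.096, demo_coeffs))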
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapman, Bryan Scott; Gough, Sean T.
This report documents a validation of the MCNP6 Version 1.0 computer code on the high performance computing platform Moonlight, for operations at Los Alamos National Laboratory (LANL) that involve plutonium metals, oxides, and solutions. The validation is conducted using the ENDF/B-VII.1 continuous energy group cross section library at room temperature. The results are for use by nuclear criticality safety personnel in performing analysis and evaluation of various facility activities involving plutonium materials.
BOLD VENTURE COMPUTATION SYSTEM for nuclear reactor core analysis, Version III
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vondy, D.R.; Fowler, T.B.; Cunningham, G.W. III.
1981-06-01
This report is a condensed documentation for VERSION III of the BOLD VENTURE COMPUTATION SYSTEM for nuclear reactor core analysis. An experienced analyst should be able to use this system routinely for solving problems by referring to this document. Individual reports must be referenced for details. This report covers basic input instructions and describes recent extensions to the modules as well as to the interface data file specifications. Some application considerations are discussed and an elaborate sample problem is used as an instruction aid. Instructions for creating the system on IBM computers are also given.
Documentary table-top view of a comparison of the General Purpose Computers.
1988-09-13
S88-47513 (Aug 1988) --- The current and future versions of general purpose computers for Space Shuttle orbiters are represented in this frame. The two boxes on the left (AP101B) represent the current GPC configuration, with the input-output processor at far left and the central processing unit at its side. The upgraded version combines both elements in a single unit (far right, AP101S).
PCACE-Personal-Computer-Aided Cabling Engineering
NASA Technical Reports Server (NTRS)
Billitti, Joseph W.
1987-01-01
PCACE computer program developed to provide inexpensive, interactive system for learning and using engineering approach to interconnection systems. Basically database system that stores information as files of individual connectors and handles wiring information in circuit groups stored as records. Directly emulates typical manual engineering methods of handling data, thus making interface between user and program very natural. Apple version written in P-Code Pascal and IBM PC version of PCACE written in TURBO Pascal 3.0.
Walthouwer, Michel Jean Louis; Oenema, Anke; Lechner, Lilian; de Vries, Hein
2015-10-19
Web-based computer-tailored interventions often suffer from small effect sizes and high drop-out rates, particularly among people with a low level of education. Using videos as a delivery format can possibly improve the effects and attractiveness of these interventions. The main aim of this study was to examine the effects of a video and text version of a Web-based computer-tailored obesity prevention intervention on dietary intake, physical activity, and body mass index (BMI) among Dutch adults. A second study aim was to examine differences in appreciation between the video and text version. The final study aim was to examine possible differences in intervention effects and appreciation per educational level. A three-armed randomized controlled trial was conducted with a baseline and a 6-month follow-up measurement. The intervention consisted of six sessions, lasting about 15 minutes each. In the video version, the core tailored information was provided by means of videos. In the text version, the same tailored information was provided in text format. Outcome variables were self-reported and included BMI, physical activity, energy intake, and appreciation of the intervention. Multiple imputation was used to replace missing values. The effect analyses were carried out with multiple linear regression analyses and adjusted for confounders. The process evaluation data were analyzed with independent samples t tests. The baseline questionnaire was completed by 1419 participants and the 6-month follow-up measurement by 1015 participants (71.53%). No significant interaction effects of educational level were found on any of the outcome variables. Compared to the control condition, the video version resulted in lower BMI (B=-0.25, P=.049) and lower average daily energy intake from energy-dense food products (B=-175.58, P<.001), while the text version had an effect only on energy intake (B=-163.05, P=.001). No effects on physical activity were found. Moreover, the video version was rated significantly better than the text version on feelings of relatedness (P=.041), usefulness (P=.047), and grade given to the intervention (P=.018). The video version of the Web-based computer-tailored obesity prevention intervention was the most effective intervention and most appreciated. Future research needs to examine if the effects are maintained in the long term and how the intervention can be optimized. Netherlands Trial Register: NTR3501; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=3501 (Archived by WebCite at http://www.webcitation.org/6cBKIMaW1).
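The effect analysis described above boils down to regressing the follow-up outcome on dummy-coded condition indicators plus covariates, so the condition coefficients play the role of the reported B values. A schematic Python sketch on simulated data (the assignment scheme and numbers are invented, purely to show the model form):

import numpy as np

rng = np.random.default_rng(1)
n = 300
video = rng.integers(0, 2, n)                        # crude stand-in for assignment
text = np.where(video == 0, rng.integers(0, 2, n), 0)  # remaining arm vs. control
bmi_base = rng.normal(26, 3, n)                      # baseline covariate
bmi_follow = bmi_base - 0.25 * video - 0.10 * text + rng.normal(0, 1, n)

# Design matrix: intercept, two condition dummies, baseline BMI as confounder.
X = np.column_stack([np.ones(n), video, text, bmi_base])
coef, *_ = np.linalg.lstsq(X, bmi_follow, rcond=None)
print(dict(zip(["intercept", "B_video", "B_text", "B_baseline"], coef.round(3))))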
Development of an hp-version finite element method for computational optimal control
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Warner, Michael S.
1993-01-01
The purpose of this research effort was to begin the study of the application of hp-version finite elements to the numerical solution of optimal control problems. Under NAG-939, the hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element. One possible drawback is the increased computational effort within each element required in implementing hp-version finite elements. We are trying to determine whether this computational effort is sufficiently offset by the reduction in the number of time elements used and improved Newton-Raphson convergence so as to be useful in solving optimal control problems in real time. Because certain of the element interior unknowns can be eliminated at the element level by solving a small set of nonlinear algebraic equations in which the nodal values are taken as given, the scheme may turn out to be especially powerful in a parallel computing environment. A different processor could be assigned to each element. The number of processors, strictly speaking, is not required to be any larger than the number of sub-regions which are free of discontinuities of any kind.
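In the linear case, the element-level elimination described above is static condensation: interior degrees of freedom are expressed in terms of the nodal ones through a Schur complement, shrinking the system solved globally. A small Python sketch with an arbitrary example matrix:

import numpy as np

K = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])      # element matrix: 2 nodal + 1 interior dof
f = np.array([1.0, 0.0, 0.5])
nb = 2                               # number of nodal (boundary) dofs
Kbb, Kbi = K[:nb, :nb], K[:nb, nb:]
Kib, Kii = K[nb:, :nb], K[nb:, nb:]
fb, fi = f[:nb], f[nb:]

S = Kbb - Kbi @ np.linalg.solve(Kii, Kib)   # condensed (Schur complement) matrix
g = fb - Kbi @ np.linalg.solve(Kii, fi)     # condensed load vector
ub = np.linalg.solve(S, g)                  # solve for nodal values only
ui = np.linalg.solve(Kii, fi - Kib @ ub)    # recover interior values afterwards
print(ub, ui)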
A validation of well-being and happiness surveys for administration via the Internet.
Howell, Ryan T; Rodzon, Katrina S; Kurai, Mark; Sanchez, Amy H
2010-08-01
Internet research is appealing because it is a cost- and time-efficient way to access a large number of participants; however, the validity of Internet research for important subjective well-being (SWB) surveys has not been adequately assessed. The goal of the present study was to validate the Satisfaction With Life Scale (SWLS; Diener, Emmons, Larsen, & Griffin, 1985), the Positive and Negative Affect Schedule (PANAS-X; Watson & Clark, 1994), and the Subjective Happiness Scale (SHS; Lyubomirsky & Lepper, 1999) for use on the Internet. This study compared the quality of data collected using paper-based (paper-and-pencil version in a lab setting), computer-based (Web-based version in a lab setting), and Internet (Web-based version on a computer of the participant's choosing) surveys for these three measures of SWB. The paper-based and computer-based experiment recruited two college student samples; the Internet experiments recruited a college student sample and an adult sample responding to ads on different social-networking Web sites. This study provides support for the reliability, validity, and generalizability of the Internet format of the SWLS, PANAS-X, and SHS. Across the three experiments, the results indicate that the computer-based and Internet surveys had means, standard deviations, reliabilities, and factor structures that were similar to those of the paper-based versions. The discussion examines the difficulty of higher attrition for the Internet version, the need to examine reverse-coded items in the future, and the possibility that unhappy individuals are more likely to participate in Internet surveys of SWB.
NASA Astrophysics Data System (ADS)
Chang, Chao-Hsi; Wang, Jian-Xiong; Wu, Xing-Gang
2006-11-01
An upgraded version of the package BCVEGPY2.0 [C.-H. Chang, J.-X. Wang, X.-G. Wu, Comput. Phys. Commun. 174 (2006) 241] is presented, which works under the LINUX system and is named BCVEGPY2.1. With this version and, additionally, a GNU C compiler, users may very conveniently simulate Bc events in various experimental environments. It has been organized with better modularity and code reusability (less cross communication among the various modules) than BCVEGPY2.0. Furthermore, in the upgraded version execution is arranged so that the GNU command make compiles a requested code with the help of a master makefile in the main code directory and then builds an executable file with the default name run. Finally, this paper may also be considered an erratum: typographical errors in BCVEGPY2.0 and their corrections have been listed. New version program (BCVEGPY2.1) summary Title of program: BCVEGPY2.1 Catalogue identifier: ADTJ_v2_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTJ_v2_1 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Reference to original program: BCVEGPY2.0 Reference in CPC: Comput. Phys. Commun. 174 (2006) 241 Does the new version supersede the old program: No Computer: Any LINUX-based PC with FORTRAN 77 or FORTRAN 90 and a GNU C compiler Operating systems: LINUX Programming language used: FORTRAN 77/90 Memory required to execute with typical data: About 2.0 MB No. of lines in distributed program, including test data, etc.: 31 521 No. of bytes in distributed program, including test data, etc.: 1 310 179 Distribution format: tar.gz Nature of physical problem: Hadronic production of the Bc meson itself and its excited states Method of solution: The code, with options, can generate weighted and unweighted events. An interface to PYTHIA is provided to meet the needs of jet hadronization in the production. Restrictions on the complexity of the problem: The hadronic production of (cb¯)-quarkonium in S-wave and P-wave states via the mechanism of gluon-gluon fusion is given by the so-called 'complete calculation' approach. Reasons for new version: Responding to feedback from users, we rearranged the program in a convenient way so that it can easily be adapted by users to run simulations in their own experimental environment (e.g. detector acceptances and experimental cuts). We have put much effort into rearranging the program into several modules with less cross communication among the modules; the main program is slimmed down, and all further actions are decoupled from the main program and can be easily called for various purposes. Typical running time: The typical running time is machine and user-parameter dependent. Typically, for production of the S-wave (cb¯)-quarkonium, when IDWTUP = 1 it takes about 20 hours on a 1.8 GHz Intel P4-processor machine to generate 1000 events; however, when IDWTUP = 3, generating 10^6 events takes only about 40 minutes. Generating P-wave (cb¯)-quarkonium events takes almost twice as long as the S-wave case. Summary of the changes (improvements): (1) The structure and organization of the program have changed considerably. The new version package BCVEGPY2.1 has been divided into several modules with less cross communication among the modules (some old version source files are divided into several parts for this purpose).
The main program is slimmed down, and all further actions are decoupled from the main program so that they can be easily called for various applications. All of the Fortran code is organized in the main code directory, named bcvegpy2.1, which contains the main program, all of its prerequisite files, and subsidiary 'folders' (subdirectories of the main code directory). The method for setting the parameters is the same as that of the previous versions [C.-H. Chang, C. Driouich, P. Eerola, X.-G. Wu, Comput. Phys. Commun. 159 (2004) 192, hep-ph/0309120. [1
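The large gap in quoted running times between IDWTUP = 1 (unweighted events) and IDWTUP = 3 (weighted events) reflects the cost of hit-or-miss unweighting: most weighted candidates are rejected so that the survivors carry unit weight. A generic sketch of that step follows (illustrative only; the toy 'matrix element' and variable names are invented, not BCVEGPY code).

    import numpy as np

    rng = np.random.default_rng(0)

    def generate_weighted(n):
        """Stand-in weighted generator: each 'event' is one kinematic
        variable x in [0, 1] carrying a matrix-element-like weight."""
        x = rng.random(n)
        w = (1.0 - x) ** 3 * (1.0 + 10.0 * x)   # toy differential cross section
        return x, w

    def unweight(x, w, w_max):
        """Hit-or-miss unweighting: keep event i with probability w_i/w_max,
        so the surviving events all have unit weight. The rejections are
        why unweighted generation is much slower than weighted output."""
        keep = rng.random(len(w)) < w / w_max
        return x[keep]

    x, w = generate_weighted(100_000)
    events = unweight(x, w, w.max())
    print(f"kept {len(events)} of {len(x)} weighted candidates")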
Spectral Classes for FAA's Integrated Noise Model Version 6.0.
DOT National Transportation Integrated Search
1999-12-07
The starting point in any empirical model such as the Federal Aviation Administration's (FAA) Integrated Noise Model (INM) is a reference data base. In Version 5.2 and in previous versions the reference data base consisted solely of a set of no...
NASA Astrophysics Data System (ADS)
Davidson, N.; Golonka, P.; Przedziński, T.; Wąs, Z.
2011-03-01
Theoretical predictions in high energy physics are routinely provided in the form of Monte Carlo generators. Comparisons of predictions from different programs and/or different initialization set-ups are often necessary. MC-TESTER can be used for such tests of decays of intermediate states (particles or resonances) in a semi-automated way. Since 2002, new functionalities have been introduced into the package. In particular, it now works with the HepMC event record, the standard for C++ programs. The complete set-up for benchmarking the interfaces, such as the interface between τ-lepton production and decay, including QED bremsstrahlung effects, is shown. The example is chosen to illustrate the new options introduced into the program. From the technical perspective, our paper documents software updates and supplements previous documentation. As in the past, our test consists of two steps. Distinct Monte Carlo programs are run separately; events with decays of a chosen particle are searched for, and information is stored by MC-TESTER. Then, at the analysis step, information from a pair of runs may be compared and represented in the form of tables and plots. Updates introduced in the program up to version 1.24.4 are also documented. In particular, new configuration scripts and a script to combine results from a multitude of runs into a single information file for the analysis step are explained. Program summary Program title: MC-TESTER, version 1.23 and version 1.24.4 Catalog identifier: ADSM_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSM_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 250 548 No. of bytes in distributed program, including test data, etc.: 4 290 610 Distribution format: tar.gz Programming language: C++, FORTRAN77 Tested and compiled with: gcc 3.4.6, 4.2.4 and 4.3.2 with g77/gfortran Computer: Tested on various platforms Operating system: Tested on Linux SLC 4.6 and SLC 5, Fedora 8, Ubuntu 8.2, etc. Classification: 11.9 External routines: HepMC (https://savannah.cern.ch/projects/hepmc/), PYTHIA8 (http://home.thep.lu.se/~torbjorn/Pythia.html), LaTeX (http://www.latex-project.org/) Catalog identifier of previous version: ADSM_v1_0 Journal reference of previous version: Comput. Phys. Comm. 157 (2004) 39 Does the new version supersede the previous version?: Yes Nature of problem: The decays of individual particles are well-defined modules of a typical Monte Carlo program chain in high energy physics. A fast, semi-automatic way of comparing results from different programs is often desirable for the development of new programs, in order to check the correctness of installations or for discussion of uncertainties. Solution method: A typical HEP Monte Carlo program stores the generated events in event records such as HepMC, HEPEVT or PYJETS. MC-TESTER scans, event by event, the contents of the record and searches for the decays of the particle under study. The list of found decay modes is successively incremented, and histograms of all invariant masses which can be calculated from the momenta of the particle decay products are defined and filled. The outputs from the two runs of distinct programs can later be compared.
A booklet of comparisons is created: for every decay channel, all histograms present in the two outputs are plotted, and a parameter quantifying the shape difference is calculated. Its maximum over every decay channel is printed in the summary table. Reasons for new version: An interface to the HepMC event record is introduced. A set-up for benchmarking the interfaces, such as τ-lepton production and decay, including QED bremsstrahlung effects, is introduced as well. This required significant changes in the algorithm; as a consequence, a new version of the code was introduced. Restrictions: Only the first 200 decay channels found will initialize histograms, and if the multiplicity of decay products in a given channel is larger than 7, histograms will not be created for that channel. Additional comments: New features: HepMC interface, use of lists in the definition of histograms and decay channels, filters for decay products or secondary decays to be omitted, bug fixes, extended flexibility in the representation of program output, installation configuration scripts, and merging of multiple output files from separate generations. Running time: Varies substantially with the analyzed decay particle, but generally the speed estimate for the old version remains valid. On a PC/Linux with 2.0 GHz processors, MC-TESTER increases the run time of the τ-lepton Monte Carlo program TAUOLA by 4.0 seconds for every 100 000 analyzed events (generation itself takes 26 seconds). The analysis step takes 13 seconds; LATEX processing takes an additional 10 seconds. Generation step runs may be executed simultaneously on multiprocessor machines.
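The bookkeeping step described above, locating a particle's decay products and histogramming every invariant mass they can form, is simple to sketch. The toy version below (hypothetical code, not MC-TESTER's) computes all subset invariant masses from a list of four-momenta:

    import itertools
    import numpy as np

    def invariant_mass(p4s):
        """Invariant mass of a set of four-momenta given as (E, px, py, pz)."""
        tot = np.sum(p4s, axis=0)
        m2 = tot[0] ** 2 - np.dot(tot[1:], tot[1:])
        return np.sqrt(max(m2, 0.0))

    def all_subset_masses(products):
        """One invariant mass per subset of decay products, mirroring the
        histograms booked for each decay channel."""
        return {
            combo: invariant_mass([products[i] for i in combo])
            for r in range(2, len(products) + 1)
            for combo in itertools.combinations(range(len(products)), r)
        }

    # Toy 3-body decay with (nearly) massless products, (E, px, py, pz)
    prods = [np.array([1.0, 1.0, 0.0, 0.0]),
             np.array([1.0, -0.5, 0.5, 0.0]),
             np.array([1.0, -0.5, -0.5, 0.0])]
    for combo, mass in sorted(all_subset_masses(prods).items()):
        print(combo, round(mass, 3))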
TOA Radiation Balance Study through Reprocessed ERBS WFOV Nonscanner data from 1985 to 1998
NASA Astrophysics Data System (ADS)
Shrestha, A. K.; Kato, S.; Wong, T.; Stackhouse, P. W., Jr.; Doelling, D. R.; Loughman, R. P.
2017-12-01
The wide-field-of-view (WFOV) nonscanner instrument onboard the Earth Radiation Budget Satellite (ERBS) provided broadband irradiances at the top of the atmosphere (TOA) from 1985 to 1999. However, earlier studies show that the uncertainty in this TOA radiation dataset is significantly higher during the period after the Mt. Pinatubo eruption and a battery issue in 1991. In addition, the difference between daytime and nighttime longwave irradiance drifts with time throughout the lifetime of the instrument. We re-processed the ERBS WFOV data using an algorithm similar to the one used in the CERES project and calibrated it with CERES-derived irradiances. In addition, the spatial coverage of the ERBS irradiances is extended from near-global (60°N to 60°S latitudes) to global using the CERES climatological ratio of near-global to global mean irradiances. The near-global standard deviation of deseasonalized shortwave anomalies computed with Ed4 decreases to 3.2 Wm-2 from the 8.0 Wm-2 computed with the previous version. In addition, the drift of the day-minus-night longwave irradiance is reduced by one third. Similar to the previous version, however, the Ed4 global shortwave irradiance averaged over the 1994 to 1997 period (second period) is smaller by 2.2 Wm-2 than that averaged over the 1985 to 1989 period (first period). In addition, the global longwave irradiance in the second period is larger by 0.7 Wm-2 than that averaged over the first period. When the difference of the two periods is computed (second period minus first period) with the DEEP-C data product (Allan et al. 2014), the difference is 0.5 (-0.3) Wm-2 for shortwave (longwave). The global net imbalances at the TOA computed with the ERBS and DEEP-C data sets are, respectively, 0.45 (1.89) Wm-2 and 0.17 (0.96) Wm-2 for the first (second) period. The net imbalance for the CERES period in the 2000s is 0.65 Wm-2. In this presentation, we will further compare the Ed4 ERBS-derived TOA net imbalance with ocean heating rates. The re-processed ERBS data product (Edition 4) was released in July 2017 by the NASA Langley Atmospheric Science Data Center and is available from https://eosweb.larc.nasa.gov/project/measures/long-term-toa-m.
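The deseasonalized anomalies whose standard deviation is quoted are obtained by subtracting each calendar month's climatological mean from the monthly time series. A minimal sketch of that step (generic method, not the project's processing code):

    import numpy as np

    def deseasonalize(monthly, start_month=1):
        """Remove the mean seasonal cycle from a monthly series: subtract
        the climatological mean of each calendar month, leaving anomalies."""
        monthly = np.asarray(monthly, dtype=float)
        months = (np.arange(len(monthly)) + start_month - 1) % 12
        clim = np.array([monthly[months == m].mean() for m in range(12)])
        return monthly - clim[months]

    # Toy irradiance series: seasonal cycle plus a weak trend
    t = np.arange(120)                                   # 10 years of months
    series = 240.0 + 5.0 * np.sin(2 * np.pi * t / 12) + 0.01 * t
    print(deseasonalize(series).std())                   # anomaly spread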
Status and Plans for Finalization of SRT's Contribution to AIRS Version-7 and Version-7 AO
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena; Kouvaris, Louis C.
2017-01-01
Version-6.46 temperature profiles, water vapor profiles, and especially total O3 are much improved compared to Version-6. With minor tweaking, Version-6.46 is a good candidate for use in Version-7. JPL Version-6.46 and Version-6.46 AO monthly mean products agree extremely well with each other. Version-6.46 AO is accurate enough that there is not necessarily a need to process both Version-7 and Version-7 AO data sets. Single-day comparisons show that Version-6.46 CrIS/ATMS and Version-6.46 AIRS/AMSU products agree extremely well with each other. We need to demonstrate agreement of Version-6.46 CrIS/ATMS and Version-6.46 AO products on a monthly mean basis for different months and years. CrIS/ATMS and AIRS/AMSU monthly mean comparisons showed excellent agreement using a previous version.
48 CFR 52.253-1 - Computer Generated Forms.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 2 2012-10-01 2012-10-01 false Computer Generated Forms....253-1 Computer Generated Forms. As prescribed in FAR 53.111, insert the following clause: Computer... by the Federal Acquisition Regulation (FAR) may be submitted on a computer generated version of the...
48 CFR 52.253-1 - Computer Generated Forms.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 2 2013-10-01 2013-10-01 false Computer Generated Forms....253-1 Computer Generated Forms. As prescribed in FAR 53.111, insert the following clause: Computer... by the Federal Acquisition Regulation (FAR) may be submitted on a computer generated version of the...
48 CFR 52.253-1 - Computer Generated Forms.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 2 2014-10-01 2014-10-01 false Computer Generated Forms....253-1 Computer Generated Forms. As prescribed in FAR 53.111, insert the following clause: Computer... by the Federal Acquisition Regulation (FAR) may be submitted on a computer generated version of the...
48 CFR 52.253-1 - Computer Generated Forms.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 2 2011-10-01 2011-10-01 false Computer Generated Forms....253-1 Computer Generated Forms. As prescribed in FAR 53.111, insert the following clause: Computer... by the Federal Acquisition Regulation (FAR) may be submitted on a computer generated version of the...
48 CFR 52.253-1 - Computer Generated Forms.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false Computer Generated Forms....253-1 Computer Generated Forms. As prescribed in FAR 53.111, insert the following clause: Computer... by the Federal Acquisition Regulation (FAR) may be submitted on a computer generated version of the...
Canella, Richard Prazeres; Adam, Guilherme Pradi; de Castillo, Roberto André Ulhôa; Codonho, Daniel; Ganev, Gerson Gandhi; de Vicenzi, Luiz Fernando
2016-01-01
Objective To correlate the angles between the acetabulum and the proximal femur in symptomatic patients with femoroacetabular impingement (FAI), using computed tomography (CT). Methods We retrospectively evaluated 103 hips from 103 patients, using multislice CT to measure the acetabular coverage, acetabular version (in its supraequatorial portion and in its middle third), femoral neck version, cervical-diaphyseal and alpha angles, and the acetabular depth. For the statistical analysis, we used the Pearson correlation coefficient. Results There were inverse correlations between the following angles: (1) acetabular coverage versus alpha angle (p = 0.019); (2) acetabular version (supraequatorial) versus alpha angle (p = 0.049). For patients with femoral anteversion lower than 15 degrees: (1) acetabular version (supraequatorial) versus alpha angle (p = 0.026); (2) acetabular version (middle third) versus alpha angle (p = 0.02). For patients with acetabular version (supraequatorial) lower than 10 degrees: (1) acetabular version (supraequatorial) versus alpha angle (p = 0.004); (2) acetabular version (middle third) versus alpha angle (p = 0.009). Conclusion There was a statistically significant inverse correlation between the acetabular version and alpha angles (the smaller the acetabular anteversion angle was, the larger the alpha angle was) in symptomatic patients, thus supporting the hypothesis that FAI occurs when cam and pincer findings due to acetabular retroversion are seen simultaneously, and that the latter alone does not cause FAI, which leads to overdiagnosis in these cases. PMID:27069890
FINDS: A fault inferring nonlinear detection system programmer's manual, version 3.0
NASA Technical Reports Server (NTRS)
Lancraft, R. E.
1985-01-01
Detailed software documentation of the digital computer program FINDS (Fault Inferring Nonlinear Detection System), Version 3.0, is provided. FINDS is a highly modular and extensible computer program designed to monitor and detect sensor failures while at the same time providing reliable state estimates. In this version of the program, the FINDS methodology is used to detect, isolate, and compensate for failures in simulated avionics sensors used by the Advanced Transport Operating Systems (ATOPS) Transport System Research Vehicle (TSRV) in a Microwave Landing System (MLS) environment. It is intended that this report serve as a programmer's guide to aid in the maintenance, modification, and revision of the FINDS software.
Plummer, Niel; Jones, Blair F.; Truesdell, Alfred Hemingway
1976-01-01
WATEQF is a FORTRAN IV computer program that models the thermodynamic speciation of inorganic ions and complex species in solution for a given water analysis. The original version (WATEQ) was written in 1973 by A. H. Truesdell and B. F. Jones in Programming Language/One (PL/I). With but a few exceptions, the thermochemical data, speciation, coefficients, and general calculation procedure of WATEQF are identical to those of the PL/I version. This report notes the differences between WATEQF and WATEQ, demonstrates how to set up the input data to execute WATEQF, provides a test case for comparison, and makes available a listing of WATEQF. (Woodard-USGS)
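A basic ingredient of any such speciation calculation is converting concentrations to activities; the extended Debye-Hueckel expression is the textbook choice. The sketch below is illustrative only, with generic constants for roughly 25 degrees C, and is not WATEQF's actual coefficient set:

    import math

    def log10_gamma(z, ion_size_angstrom, ionic_strength, A=0.5085, B=0.3281):
        """Extended Debye-Hueckel activity coefficient (log10 gamma) for an
        ion of charge z; A and B are approximate values near 25 C."""
        sqrt_i = math.sqrt(ionic_strength)
        return -A * z ** 2 * sqrt_i / (1.0 + B * ion_size_angstrom * sqrt_i)

    # Activity coefficient of Ca2+ (z = 2, size ~6 Angstrom) at I = 0.01 mol/kg
    print(10 ** log10_gamma(2, 6.0, 0.01))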
Evaluation of MOSTAS computer code for predicting dynamic loads in two-bladed wind turbines
NASA Technical Reports Server (NTRS)
Kaza, K. R. V.; Janetzke, D. C.; Sullivan, T. L.
1979-01-01
Calculated dynamic blade loads are compared with measured loads over a range of yaw stiffnesses of the DOE/NASA Mod-0 wind turbine to evaluate the performance of two versions of the MOSTAS computer code. The first version uses a time-averaged coefficient approximation in conjunction with a multiblade coordinate transformation for two-bladed rotors to solve the equations of motion by standard eigenanalysis. The results obtained with this approximate analysis do not reproduce the measured dynamic blade load amplifications at or close to resonance conditions. The results of the second version, which accounts for periodic coefficients while solving the equations by time-history integration, compare well with the measured data.
Version 2.0 of the AOP-Wiki was released on December 4, 2016. This was a major upgrade and should provide a better user experience. It fixes a number of bugs with the previous version, provides a more streamlined user interface, and sets the stage for providing more programmatic...
NASA Astrophysics Data System (ADS)
Avellar, J.; Duarte, L. G. S.; da Mota, L. A. C. P.
2012-10-01
We present a set of software routines in Maple 14 for solving first order ordinary differential equations (FOODEs). The package implements the Prelle-Singer method in its original form together with its extension to include integrating factors in terms of elementary functions. The package also presents a theoretical extension to deal with all FOODEs presenting Liouvillian solutions. Applications to ODEs taken from standard references show that it solves ODEs which remain unsolved using Maple's standard ODE solution routines. New version program summary Program title: PSsolver Catalogue identifier: ADPR_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADPR_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2302 No. of bytes in distributed program, including test data, etc.: 31962 Distribution format: tar.gz Programming language: Maple 14 (also tested using Maple 15 and 16). Computer: Intel Pentium Processor P6000, 1.86 GHz. Operating system: Windows 7. RAM: 4 GB DDR3 memory. Classification: 4.3. Catalogue identifier of previous version: ADPR_v1_0 Journal reference of previous version: Comput. Phys. Comm. 144 (2002) 46 Does the new version supersede the previous version?: Yes Nature of problem: Symbolic solution of first order differential equations via the Prelle-Singer method. Solution method: The method of solution is based on the standard Prelle-Singer method, with extensions for the cases when the FOODE contains elementary functions. Additionally, an extension of our own which solves FOODEs with Liouvillian solutions is included. Reasons for new version: The program no longer ran due to changes in the latest versions of Maple. Additionally, we corrected/changed some bugs/details that were hampering the smooth functioning of the routines. Summary of revisions: • As time went by, many commands in Maple were deprecated, so, in order to make the program run with the newer versions, we checked and changed some of those. For instance, the command sum had changed, and some program lines were substituted so that the package works properly. • In the old version the user had to supply the degree of the Darboux polynomials to be determined. In the present version the user can set the degree by typing Deg = number in the command call (e.g., PSsolve(ode, Deg=3), telling the command PSsolve that it must use Darboux polynomials of degree up to three). If the user does not specify the degree, the routines use degree 1 as the default. Restrictions: If the integrating factor for the FOODE under consideration has factors of high degree in the dependent and independent variables and in the elementary functions appearing in the FOODE, the package may spend a long time finding the solution. Also, when dealing with FOODEs containing elementary functions, it is essential that the algebraic dependency between them is recognized. If that does not happen, our program can miss some solutions. Unusual features: Our implementation of the Prelle-Singer approach not only solves FOODEs, but can also be used as a research tool that allows the user to follow all the steps of the procedure. For example, the Darboux polynomials (eigenpolynomials) of the D-operator associated with a FOODE (see Section 4) can be calculated.
In addition, our package is successful in solving FOODEs that were not solved by some of the most commonly available solvers. Finally, our package implements a theoretical extension (for details, see [1,2]) to the original Prelle-Singer approach that enhances its scope, allowing it to tackle some FOODEs whose solutions involve non-elementary Liouvillian functions. Running time: This depends strongly on the FOODE, but is usually under 2 seconds when running our 'arena' test file: the nonlinear FOODEs presented in the book by Kamke [3]. These times were obtained using an Intel Pentium Processor P6000, 1.86 GHz, with 4 GB RAM. References: [1] M. Singer, Liouvillian first integrals of differential equations, Trans. Amer. Math. Soc. 333 (1992) 673-688. [2] L.G.S. Duarte, S.E.S. Duarte, L.A.C.P. da Mota, J.E.F. Skea, A method to tackle first order ordinary differential equations with Liouvillian functions in the solution, J. Phys. A: Math. Gen. 35 (17) (2002) 3899-3910. [3] E. Kamke, Differentialgleichungen: Lösungsmethoden und Lösungen, Chelsea Publishing Co., New York, 1959.
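For reference, the Darboux-polynomial condition at the heart of the Prelle-Singer method can be stated compactly. Writing the FOODE as dy/dx = M(x,y)/N(x,y) with M and N polynomials, a standard formulation (notation ours, not the paper's) is:

    D = N\,\frac{\partial}{\partial x} + M\,\frac{\partial}{\partial y},
    \qquad
    D[p_i] = q_i\,p_i \quad \text{(the } p_i \text{ are Darboux polynomials, the } q_i \text{ their polynomial cofactors),}

    R = \prod_i p_i^{\,n_i} \quad \text{is an integrating factor when} \quad
    \sum_i n_i q_i = -\left(\frac{\partial M}{\partial y} + \frac{\partial N}{\partial x}\right).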
Dual PECCS: a cognitive system for conceptual representation and categorization
NASA Astrophysics Data System (ADS)
Lieto, Antonio; Radicioni, Daniele P.; Rho, Valentina
2017-03-01
In this article we present an advanced version of Dual-PECCS, a cognitively inspired knowledge representation and reasoning system aimed at extending the capabilities of artificial systems in conceptual categorization tasks. It combines different sorts of common-sense categorization (prototypical and exemplar-based categorization) with standard monotonic categorization procedures. These different types of inferential procedures are reconciled according to the tenets of the dual process theory of reasoning. From a representational perspective, the system relies on the hypothesis of conceptual structures represented as heterogeneous proxytypes. Dual-PECCS has been experimentally assessed in a conceptual categorization task where a target concept illustrated by a simple common-sense linguistic description had to be identified by resorting to a mix of categorization strategies, and its output has been compared to human responses. The obtained results suggest that our approach can be beneficial in improving the representational and reasoning conceptual capabilities of standard cognitive artificial systems, and, in addition, that it may plausibly be applied to different general computational models of cognition. The current version of the system, in fact, extends our previous work in that Dual-PECCS is now integrated and tested in two cognitive architectures, ACT-R and CLARION, implementing different assumptions on the underlying invariant structures governing human cognition. Such integration allowed us to extend our previous evaluation.
NASA Astrophysics Data System (ADS)
Coarfa, Violeta Florentina
2007-12-01
Air toxics, also called hazardous air pollutants (HAPs), pose a serious threat to human health and the environment. Their study is important in the Houston area, where point sources, mostly located along the Ship Channel, together with mobile and area sources, contribute large emissions of such toxic pollutants. Previous studies carried out in this area found dangerous levels of different HAPs in the atmosphere. This thesis presents several studies that were performed for the aromatic and non-aromatic air toxics in the HGA. For these studies we developed several tools: (1) a refined chemical mechanism, which explicitly represents 18 aromatic air toxics that were lumped under two model species by the previous version, based on their reactivity with the hydroxyl radical; (2) an engineering version of an existing air toxics photochemical model that enables us to perform much faster long-term simulations than the original model, yielding an 8-9 times improvement in running time across different computing platforms; (3) a combined emission inventory based on the available emission databases. Using the developed tools, we quantified the mobile-source impact on a few selected air toxics and analyzed the temporal and spatial variation of selected aromatic and non-aromatic air toxics in a few regions within the Houston area; these regions were characterized by different emissions and environmental conditions.
Familias 3 - Extensions and new functionality.
Kling, Daniel; Tillmar, Andreas O; Egeland, Thore
2014-11-01
In relationship testing the aim is to determine the most probable pedigree structure given genetic marker data for a set of persons. Disaster Victim Identification (DVI) based on DNA data from presumed relatives of the missing persons can be considered a collection of relationship problems. Forensic calculations in investigative mode address questions like "How many markers and reference persons are needed?" Such questions can be answered by simulations. Mutations, deviations from Hardy-Weinberg equilibrium (or, more generally, accounting for population substructure), and silent alleles cannot be ignored when evaluating forensic evidence in case work. With the advent of new markers, so-called microvariants have become more common. Previous mutation models are no longer appropriate, and a new model is proposed. This paper describes methods designed to deal with DVI problems and a new simulation model to study the distribution of likelihoods. There is software available addressing similar problems; however, for some problems, including DVI, we are not aware of freely available validated software. The Familias software has long been widely used by forensic laboratories worldwide to compute likelihoods in relationship scenarios, though previous versions have lacked desired functionality such as that mentioned above. These extensions, as well as some other novel features, have been implemented in the new version, freely available at www.familias.no. The implementation and validation are briefly described, leaving complete details to the Supplementary sections.
Expert System for Automated Design Synthesis
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Barthelemy, Jean-Francois M.
1987-01-01
Expert-system computer program EXADS was developed to aid users of the Automated Design Synthesis (ADS) general-purpose optimization program. EXADS aids the engineer in determining the best combination based on knowledge of the specific problem and on expert knowledge stored in the knowledge base. Available in two interactive machine versions: the IBM PC version (LAR-13687), written in IQ-LISP, and the DEC VAX version (LAR-13688), written in Franz-LISP.
Barlow, Paul M.; Moench, Allen F.
2011-01-01
The computer program WTAQ simulates axial-symmetric flow to a well pumping from a confined or unconfined (water-table) aquifer. WTAQ calculates dimensionless or dimensional drawdowns that can be used with measured drawdown data from aquifer tests to estimate aquifer hydraulic properties. Version 2 of the program, which is described in this report, provides an alternative analytical representation of drainage from the unsaturated zone to water-table aquifers to the one available in the initial version of the code. The revised drainage model explicitly accounts for hydraulic characteristics of the unsaturated zone, specifically the moisture retention and relative hydraulic conductivity of the soil. The revised program also retains the original conceptualizations of drainage from the unsaturated zone that were available with version 1 of the program, to provide alternative approaches to simulating the drainage process. Version 2 of the program includes all other simulation capabilities of the first version, including partial penetration of the pumped well and of observation wells and piezometers, well-bore storage and skin effects at the pumped well, and delayed drawdown response of observation wells and piezometers.
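For orientation, the simplest member of the family of analytical solutions that WTAQ generalizes is the Theis solution for a fully penetrating well in a confined aquifer. A minimal sketch with made-up parameter values (textbook formula, not WTAQ code):

    import numpy as np
    from scipy.special import exp1

    def theis_drawdown(r, t, Q, T, S):
        """Theis drawdown s = Q/(4 pi T) * W(u), with W(u) = E1(u) and
        u = r^2 S / (4 T t). Units: r [m], t [s], Q [m^3/s], T [m^2/s],
        storativity S dimensionless."""
        u = r ** 2 * S / (4.0 * T * t)
        return Q / (4.0 * np.pi * T) * exp1(u)

    # Drawdown 30 m from a well after 1 hour of pumping (hypothetical values)
    print(theis_drawdown(r=30.0, t=3600.0, Q=0.01, T=1e-3, S=1e-4))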
NASA Astrophysics Data System (ADS)
Bogusz, Michael
1993-01-01
The need for a systematic methodology for the analysis of aircraft electromagnetic compatibility (EMC) problems is examined. The available computer aids used in aircraft EMC analysis are assessed, and a theoretical basis is established for the complex algorithms which identify and quantify electromagnetic interactions. An overview is presented of one particularly well established aircraft antenna-to-antenna EMC analysis code, the Aircraft Inter-Antenna Propagation with Graphics (AAPG) Version 07 software. The specific new algorithms created to compute cone geodesics and their associated path losses and to graph the physical coupling path are discussed. These algorithms are validated against basic principles. Loss computations apply the uniform geometrical theory of diffraction and are subsequently compared to measurement data. The increased modelling and analysis capabilities of the newly developed AAPG Version 09 are compared to those of Version 07. Several models of real aircraft, namely the Electronic Systems Trainer Challenger, are generated and provided as a basis for this preliminary comparative assessment. Issues such as software reliability, algorithm stability, and quality of hardcopy output are also discussed.
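Cone geodesics of the kind computed by AAPG are classically obtained by developing (unrolling) the cone into a plane sector, in which the geodesic becomes a straight chord. The sketch below shows only this geometric core; it is not AAPG code, and it ignores paths that would wrap past the apex:

    import math

    def cone_geodesic_length(s1, phi1, s2, phi2, half_angle):
        """Surface geodesic length between two points on a right circular
        cone, each given as (slant distance from apex s, azimuth phi).
        Unrolled, azimuth differences scale by sin(half_angle), and the
        geodesic is the straight chord between the two unrolled points.
        Valid while the unrolled separation does not exceed pi."""
        dphi = abs(phi2 - phi1) % (2 * math.pi)
        dtheta = min(dphi, 2 * math.pi - dphi) * math.sin(half_angle)
        return math.sqrt(s1 ** 2 + s2 ** 2 - 2 * s1 * s2 * math.cos(dtheta))

    # Two antennas on a 30-degree half-angle nose cone (hypothetical layout)
    print(cone_geodesic_length(2.0, 0.0, 3.0, math.pi / 2, math.radians(30.0)))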
Computer version of astronomical ephemerides.
NASA Astrophysics Data System (ADS)
Choliy, V. Ya.
A computer version of astronomical ephemerides for bodies of the Solar System, stars, and astronomical phenomena was created at the Main Astronomical Observatory of the National Academy of Sciences of Ukraine and the Astronomy and Cosmic Physics Department of the Taras Shevchenko National University. The ephemerides will be distributed via the Internet or in file form. This information is accessible via the web servers space.ups.kiev.ua and alfven.ups.kiev.ua or the address choliy@astrophys.ups.kiev.ua.
NASA Technical Reports Server (NTRS)
Olmedo, L.
1980-01-01
The changes, modifications, and additions incorporated into the current version of the MINIVER program are discussed. Extensive modifications were made to various subroutines, and a new plot package was added. This plot package is the Johnson Space Center DISSPLA graphics system, currently driven under an 1110 EXEC 8 configuration. User instructions for executing the MINIVER program are provided, and the plot package is described.
A brief introduction to PYTHIA 8.1
NASA Astrophysics Data System (ADS)
Sjöstrand, Torbjörn; Mrenna, Stephen; Skands, Peter
2008-06-01
The PYTHIA program is a standard tool for the generation of high-energy collisions, comprising a coherent set of physics models for the evolution from a few-body hard process to a complex multihadronic final state. It contains a library of hard processes and models for initial- and final-state parton showers, multiple parton-parton interactions, beam remnants, string fragmentation and particle decays. It also has a set of utilities and interfaces to external programs. While previous versions were written in Fortran, PYTHIA 8 represents a complete rewrite in C++. The current release is the first main one after this transition, and does not yet in every respect replace the old code. It does contain some new physics aspects, on the other hand, that should make it an attractive option especially for LHC physics studies. Program summary Program title: PYTHIA 8.1 Catalogue identifier: ACTU_v3_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ACTU_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL version 2 No. of lines in distributed program, including test data, etc.: 176 981 No. of bytes in distributed program, including test data, etc.: 2 411 876 Distribution format: tar.gz Programming language: C++ Computer: Commodity PCs Operating system: Linux; should also work on other systems RAM: 8 megabytes Classification: 11.2 Does the new version supersede the previous version?: yes, partly Nature of problem: High-energy collisions between elementary particles normally give rise to complex final states, with large multiplicities of hadrons, leptons, photons and neutrinos. The relation between these final states and the underlying physics description is not a simple one, for two main reasons. Firstly, we do not even in principle have a complete understanding of the physics. Secondly, any analytical approach is made intractable by the large multiplicities. Solution method: Complete events are generated by Monte Carlo methods. The complexity is mastered by a subdivision of the full problem into a set of simpler separate tasks. All main aspects of the events are simulated, such as hard-process selection, initial- and final-state radiation, beam remnants, fragmentation, decays, and so on. Therefore events should be directly comparable with experimentally observable ones. The programs can be used to extract physics from comparisons with existing data, or to study physics at future experiments. Reasons for new version: Improved and expanded physics models, transition from Fortran to C++. Summary of revisions: New user interface, transverse-momentum-ordered showers, interleaving with multiple interactions, and much more. Restrictions: Depends on the problem studied. Running time: 10-1000 events per second, depending on process studied. References: [1] T. Sjöstrand, P. Edén, C. Friberg, L. Lönnblad, G. Miu, S. Mrenna, E. Norrbin, Comput. Phys. Comm. 135 (2001) 238.
NASA Astrophysics Data System (ADS)
Nasir, Rizal E. M.; Ali, Zurriati; Kuntjoro, Wahyu; Wisnoe, Wirachman
2012-06-01
Previous wind tunnel tests have demonstrated the improved aerodynamic characteristics of the Baseline-II E-2 Blended Wing-Body (BWB) aircraft studied at Universiti Teknologi Mara. The E-2 is a version of the Baseline-II BWB with a modified outer wing and a larger canard, designed solely to gain favourable longitudinal static stability during flight. This paper highlights some results from the current investigation of the said aircraft via computational fluid dynamics simulation as a means to validate the wind tunnel test results. The simulation is conducted with the standard one-equation Spalart-Allmaras turbulence model on a polyhedral mesh. The simulated flight conditions are matched to those of the wind tunnel test. The simulation shows lift, drag, and moment results close to the values found in the wind tunnel test, but only within the range of angles of attack where the lift varies linearly. Beyond the linear region, clear differences between the computational simulation and the wind tunnel test results are observed. It is recommended that a different type of mathematical model be used to simulate flight conditions beyond the linear lift region.
Algorithm and code development for unsteady three-dimensional Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Obayashi, Shigeru
1991-01-01
A streamwise upwind algorithm for solving the unsteady 3-D Navier-Stokes equations was extended to handle the moving grid system. It is noted that the finite volume concept is essential to extend the algorithm. The resulting algorithm is conservative for any motion of the coordinate system. Two extensions to an implicit method were considered and the implicit extension that makes the algorithm computationally efficient is implemented into Ames's aeroelasticity code, ENSAERO. The new flow solver has been validated through the solution of test problems. Test cases include three-dimensional problems with fixed and moving grids. The first test case shown is an unsteady viscous flow over an F-5 wing, while the second test considers the motion of the leading edge vortex as well as the motion of the shock wave for a clipped delta wing. The resulting algorithm has been implemented into ENSAERO. The upwind version leads to higher accuracy in both steady and unsteady computations than the previously used central-difference method does, while the increase in the computational time is small.
Supporting students' learning in the domain of computer science
NASA Astrophysics Data System (ADS)
Gasparinatou, Alexandra; Grigoriadou, Maria
2011-03-01
Previous studies have shown that students with low knowledge understand and learn better from more cohesive texts, whereas high-knowledge students have been shown to learn better from texts of lower cohesion. This study examines whether high-knowledge readers in computer science benefit from a text of low cohesion. Undergraduate students (n = 65) read one of four versions of a text concerning Local Network Topologies, orthogonally varying local and global cohesion. Participants' comprehension was examined through free-recall measure, text-based, bridging-inference, elaborative-inference, problem-solving questions and a sorting task. The results indicated that high-knowledge readers benefited from the low-cohesion text. The interaction of text cohesion and knowledge was reliable for the sorting activity, for elaborative-inference and for problem-solving questions. Although high-knowledge readers performed better in text-based and in bridging-inference questions with the low-cohesion text, the interaction of text cohesion and knowledge was not reliable. The results suggest a more complex view of when and for whom textual cohesion affects comprehension and consequently learning in computer science.
Simplified and Yet Turing Universal Spiking Neural P Systems with Communication on Request.
Wu, Tingfang; Bîlbîe, Florin-Daniel; Păun, Andrei; Pan, Linqiang; Neri, Ferrante
2018-04-02
Spiking neural P systems are a class of third generation neural networks belonging to the framework of membrane computing. Spiking neural P systems with communication on request (SNQ P systems) are a type of spiking neural P system where the spikes are requested from neighboring neurons. SNQ P systems have previously been proved to be universal (computationally equivalent to Turing machines) when two types of spikes are considered. This paper studies a simplified version of SNQ P systems, i.e. SNQ P systems with one type of spike. It is proved that one type of spike is enough to guarantee the Turing universality of SNQ P systems. Theoretical results are shown in the cases of the SNQ P system used in both generating and accepting modes. Furthermore, the influence of the number of unbounded neurons (the number of spikes in a neuron is not bounded) on the computation power of SNQ P systems with one type of spike is investigated. It is found that SNQ P systems functioning as number generating devices with one type of spike and four unbounded neurons are Turing universal.
Quantitative computational infrared imaging of buoyant diffusion flames
NASA Astrophysics Data System (ADS)
Newale, Ashish S.
Studies of infrared radiation from turbulent buoyant diffusion flames impinging on structural elements have applications to the development of fire models. A numerical and experimental study of radiation from buoyant diffusion flames with and without impingement on a flat plate is reported. Quantitative images of the radiation intensity from the flames are acquired using a high-speed infrared camera. Large eddy simulations are performed using the Fire Dynamics Simulator (FDS version 6). The species concentrations and temperatures from the simulations are used in conjunction with a narrow-band radiation model (RADCAL) to solve the radiative transfer equation. The computed infrared radiation intensities are rendered in the form of images and compared with the measurements. The measured and computed radiation intensities reveal necking and bulging with a characteristic frequency of 7.1 Hz, which is in agreement with previous empirical correlations. The results demonstrate the effects of the stagnation-point boundary layer on the upstream buoyant shear layer. The coupling between these two shear layers presents a model problem for the sub-grid-scale modeling necessary for future large eddy simulations.
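Rendering a synthetic intensity image from the simulated fields amounts to marching the radiative transfer equation along one line of sight per pixel. A grey-gas sketch of that post-processing step (the narrow-band model does the same per spectral band; this is illustrative, not the study's code):

    import numpy as np

    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    def ray_intensity(T, kappa, ds):
        """March the RTE through cells ordered from the far side of the
        domain toward the camera: I_new = I_old * tau + (1 - tau) * B(T),
        with cell transmissivity tau = exp(-kappa * ds) and grey-gas
        blackbody intensity B = sigma * T^4 / pi [W m^-2 sr^-1]."""
        intensity = 0.0
        for temp, k in zip(T, kappa):
            tau = np.exp(-k * ds)
            intensity = intensity * tau + (1.0 - tau) * SIGMA * temp ** 4 / np.pi
        return intensity

    # Toy flame column: hot emitting core between cooler, thinner layers
    T = np.array([400.0, 1400.0, 1400.0, 600.0])        # K
    kappa = np.array([0.1, 0.8, 0.8, 0.2])              # 1/m
    print(ray_intensity(T, kappa, ds=0.05))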
1991-08-01
90-09-25-VRX Ada COMPILER VALIDATION SUMMARY REPORT: Certificate Number: 901129W1.11096, Verdix Corporation, VADS Sequent Balance DYNIX 3.0, VAda-1lO...8000. Host Computer System: Sequent Balance 8000, DYNIX Version 3.0. Target Computer System: Sequent Balance 8000, DYNIX Version 3.0. Customer Agreement Number: 90-09-25-VRX. See Section 3.1 for any
An interactive code (NETPATH) for modeling NET geochemical reactions along a flow PATH, version 2.0
Plummer, Niel; Prestemon, Eric C.; Parkhurst, David L.
1994-01-01
NETPATH is an interactive Fortran 77 computer program used to interpret net geochemical mass-balance reactions between an initial and final water along a hydrologic flow path. Alternatively, NETPATH computes the mixing proportions of two to five initial waters and net geochemical reactions that can account for the observed composition of a final water. The program utilizes previously defined chemical and isotopic data for waters from a hydrochemical system. For a set of mineral and (or) gas phases hypothesized to be the reactive phases in the system, NETPATH calculates the mass transfers in every possible combination of the selected phases that accounts for the observed changes in the selected chemical and (or) isotopic compositions observed along the flow path. The calculations are of use in interpreting geochemical reactions, mixing proportions, evaporation and (or) dilution of waters, and mineral mass transfer in the chemical and isotopic evolution of natural and environmental waters. Rayleigh distillation calculations are applied to each mass-balance model that satisfies the constraints to predict carbon, sulfur, nitrogen, and strontium isotopic compositions at the end point, including radiocarbon dating. DB is an interactive Fortran 77 computer program used to enter analytical data into NETPATH, and calculate the distribution of species in aqueous solution. This report describes the types of problems that can be solved, the methods used to solve problems, and the features available in the program to facilitate these solutions. Examples are presented to demonstrate most of the applications and features of NETPATH. The codes DB and NETPATH can be executed in the UNIX or DOS environment. This report replaces U.S. Geological Survey Water-Resources Investigations Report 91-4078, by Plummer and others, which described the original release of NETPATH, version 1.0 (dated December 1991), and documents revisions and enhancements that are included in version 2.0. (The use of trade, brand, or product names in this report is for identification purposes only and does not constitute endorsement by the U.S. Geological Survey.)
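Each NETPATH-style model reduces to a small linear mole-balance system: one row per constrained element (or isotope), one column per candidate phase, solved for the phase mass transfers. A minimal hypothetical example in that spirit (the phases, stoichiometries, and water-chemistry changes below are invented for illustration, not NETPATH output):

    import numpy as np

    # Columns: moles of each element per mole of phase dissolved
    # (a negative solution component means the phase precipitates).
    #              calcite  gypsum  CO2(g)
    A = np.array([[1.0,     1.0,    0.0],    # Ca
                  [1.0,     0.0,    1.0],    # C
                  [0.0,     1.0,    0.0]])   # S
    # Observed change in total dissolved moles, final minus initial water
    b = np.array([2.0e-3, 3.0e-3, 0.5e-3])

    alpha = np.linalg.solve(A, b)            # phase mass transfers
    for phase, x in zip(["calcite", "gypsum", "CO2(g)"], alpha):
        verb = "dissolves" if x > 0 else "precipitates"
        print(f"{phase}: {x:+.2e} mol/kg water ({verb})")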
FHWA traffic noise model, version 1.0 : user's guide
DOT National Transportation Integrated Search
1998-01-01
This User's Guide is for the Federal Highway Administration's Traffic Noise Model (FHWA TNM), Version 1.0 -- the FHWA's computer program for highway traffic noise prediction and analysis. Two companion reports, a Technical Manual and a data repor...
FHWA Traffic Noise Model, version 1.0 technical manual
DOT National Transportation Integrated Search
1998-02-01
This Technical Manual is for the Federal Highway Administration's Traffic Noise Model (FHWA TNM), Version 1.0 -- the FHWA's computer program for highway traffic noise prediction and analysis. Two companion reports, a User's Guide and a data r...
Broadband continuous wave source localization via pair-wise, cochleagram processing
NASA Astrophysics Data System (ADS)
Nosal, Eva-Marie; Frazer, L. Neil
2005-04-01
A pair-wise processor has been developed for the passive localization of broadband continuous-wave underwater sources. The algorithm uses sparse hydrophone arrays and does not require previous knowledge of the source signature. It is applicable in multiple-source situations. A spectrogram/cochleagram version of the algorithm has been developed in order to utilize higher frequencies at longer ranges, where signal incoherence and limited computational resources preclude the use of full waveforms. Simulations demonstrating the robustness of the algorithm with respect to noise and environmental mismatch will be presented, together with initial results from the analysis of humpback whale song recorded at the Pacific Missile Range Facility off Kauai. [Work supported by MHPCC and ONR.]
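The elementary pair-wise operation behind such processors is estimating a time difference of arrival for each hydrophone pair; the processor described here works on spectrogram/cochleagram envelopes rather than the raw waveforms used in the following minimal sketch (a generic method, not the authors' code):

    import numpy as np

    def tdoa(sig_a, sig_b, fs):
        """Time difference of arrival of sig_a relative to sig_b from the
        peak of the full cross-correlation; positive means sig_a is later."""
        xc = np.correlate(sig_a, sig_b, mode="full")
        lag = int(np.argmax(xc)) - (len(sig_b) - 1)
        return lag / fs

    fs = 1000.0
    rng = np.random.default_rng(1)
    src = rng.standard_normal(1000)                        # broadband signature
    delay = 25                                             # samples of extra travel
    a = src
    b = np.concatenate([np.zeros(delay), src])[:len(src)]  # delayed arrival
    print(tdoa(b, a, fs))                                  # -> 0.025 s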
A Benchmark Problem for Development of Autonomous Structural Modal Identification
NASA Technical Reports Server (NTRS)
Pappa, Richard S.; Woodard, Stanley E.; Juang, Jer-Nan
1996-01-01
This paper summarizes modal identification results obtained using an autonomous version of the Eigensystem Realization Algorithm on a dynamically complex, laboratory structure. The benchmark problem uses 48 of 768 free-decay responses measured in a complete modal survey test. The true modal parameters of the structure are well known from two previous, independent investigations. Without user involvement, the autonomous data analysis identified 24 to 33 structural modes with good to excellent accuracy in 62 seconds of CPU time (on a DEC Alpha 4000 computer). The modal identification technique described in the paper is the baseline algorithm for NASA's Autonomous Dynamics Determination (ADD) experiment scheduled to fly on International Space Station assembly flights in 1997-1999.
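For readers unfamiliar with the Eigensystem Realization Algorithm, its non-autonomous core is compact: stack free-decay samples into shifted Hankel matrices, truncate the SVD, and read frequencies and damping ratios off the identified state matrix. The single-output sketch below follows those standard steps only; it omits the accuracy indicators an autonomous version depends on, and it is not the NASA implementation:

    import numpy as np

    def era_modes(y, dt, n_modes):
        """Minimal single-output ERA: Hankel matrices from free-decay
        samples, truncated SVD, then frequencies [Hz] and damping ratios
        from the eigenvalues of the identified discrete state matrix."""
        n = 2 * n_modes                      # model order (2 states per mode)
        rows = cols = len(y) // 2
        H0 = np.array([[y[i + j] for j in range(cols)] for i in range(rows - 1)])
        H1 = np.array([[y[i + j + 1] for j in range(cols)] for i in range(rows - 1)])
        U, s, Vt = np.linalg.svd(H0, full_matrices=False)
        Sr = np.diag(1.0 / np.sqrt(s[:n]))
        A = Sr @ U[:, :n].T @ H1 @ Vt[:n].T @ Sr      # discrete state matrix
        lam = np.log(np.linalg.eigvals(A)) / dt       # continuous-time poles
        freq = np.abs(lam) / (2 * np.pi)
        zeta = -lam.real / np.abs(lam)
        order = np.argsort(freq)
        return freq[order], zeta[order]

    # Toy free decay: two damped sinusoids at 3 Hz and 7.5 Hz
    dt, t = 0.01, np.arange(0, 4, 0.01)
    y = np.exp(-0.4 * t) * np.cos(2 * np.pi * 3 * t) \
      + 0.5 * np.exp(-0.9 * t) * np.cos(2 * np.pi * 7.5 * t)
    f, z = era_modes(y, dt, n_modes=2)
    print(np.round(f, 2), np.round(z, 3))   # conjugate pairs at 3 and 7.5 Hz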
Creating a Computer Adaptive Test Version of the Late-Life Function & Disability Instrument
Jette, Alan M.; Haley, Stephen M.; Ni, Pengsheng; Olarsch, Sippy; Moed, Richard
2009-01-01
Background This study applied Item Response Theory (IRT) and Computer Adaptive Test (CAT) methodologies to develop a prototype function and disability assessment instrument for use in aging research. Herein, we report on the development of the CAT version of the Late-Life Function & Disability instrument (Late-Life FDI) and evaluate its psychometric properties. Methods We employed confirmatory factor analysis, IRT methods, validation, and computer simulation analyses of data collected from 671 older adults residing in residential care facilities. We compared accuracy, precision, and sensitivity to change of scores from CAT versions of two Late-Life FDI scales with scores from the fixed-form instrument. Score estimates from the prototype CAT versus the original instrument were compared in a sample of 40 older adults. Results Distinct function and disability domains were identified within the Late-Life FDI item bank and used to construct two prototype CAT scales. Using retrospective data, scores from computer simulations of the prototype CAT scales were highly correlated with scores from the original instrument. The results of computer simulation, accuracy, precision, and sensitivity to change of the CATs closely approximated those of the fixed-form scales, especially for the 10- or 15-item CAT versions. In the prospective study each CAT was administered in less than 3 minutes and CAT scores were highly correlated with scores generated from the original instrument. Conclusions CAT scores of the Late-Life FDI were highly comparable to those obtained from the full-length instrument with a small loss in accuracy, precision, and sensitivity to change. PMID:19038841
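The step that makes such a test adaptive is item selection: after each response the ability estimate is updated, and the next item administered is the unadministered one with maximum Fisher information at that estimate. A minimal sketch under a two-parameter logistic (2PL) model; the item parameters are made up, and the Late-Life FDI's actual item bank and IRT model are not reproduced here:

    import numpy as np

    def p_2pl(theta, a, b):
        """2PL item response function: probability of endorsing an item."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def next_item(theta_hat, a, b, administered):
        """Pick the unadministered item with maximum Fisher information
        I(theta) = a^2 * P * (1 - P) at the current ability estimate."""
        p = p_2pl(theta_hat, a, b)
        info = a ** 2 * p * (1.0 - p)
        info[list(administered)] = -np.inf
        return int(np.argmax(info))

    a = np.array([1.2, 0.8, 2.0, 1.5])    # hypothetical discriminations
    b = np.array([-1.0, 0.0, 0.3, 1.2])   # hypothetical difficulties
    print(next_item(theta_hat=0.25, a=a, b=b, administered={0}))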
Ghobrial, Fady Emil Ibrahim; Eldin, Manal Salah; Razek, Ahmed Abdel Khalek Abdel; Atwan, Nadia Ibrahim; Shamaa, Sameh Sayed Ahmed
2017-01-01
To assess inter-observer agreement of the revised RECIST criteria (version 1.1) for computed tomography assessment of hepatic metastases of breast cancer. A prospective study was conducted in 28 female patients with breast cancer and with at least one measurable metastatic lesion in the liver that was treated with 3 cycles of anthracycline-based chemotherapy. All patients underwent computed tomography of the abdomen with 64-row multidetector CT at baseline and after 3 cycles of chemotherapy for response assessment. Image analysis was performed by 2 observers, based on the RECIST criteria (version 1.1). Computed tomography revealed partial response of hepatic metastases in 7 patients (25%) by one observer and in 10 patients (35.7%) by the other observer, with good inter-observer agreement (k=0.75, percent agreement of 89.29%). Stable disease was detected in 19 patients (67.8%) by one observer and in 16 patients (57.1%) by the other observer, with good agreement (k=0.774, percent agreement of 89.29%). Progressive disease was detected in 2 patients (7.2%) by both observers, with perfect agreement (k=1, percent agreement of 100%). The overall inter-observer agreement in the CT-based response assessment of hepatic metastases between the two observers was good (k=0.793, percent agreement of 89.29%). We concluded that computed tomography is a reliable and reproducible imaging modality for response assessment of hepatic metastases of breast cancer according to the RECIST criteria (version 1.1).
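Cohen's kappa, used throughout this study, compares observed agreement with the agreement expected by chance from each observer's marginal totals. The sketch below computes it from a cross-tabulation; the 3x3 table is hypothetical, chosen only to be consistent with the marginal counts and the 89.29% overall agreement reported above:

    import numpy as np

    def cohens_kappa(confusion):
        """Cohen's kappa from a square observer-by-observer confusion matrix."""
        c = np.asarray(confusion, dtype=float)
        n = c.sum()
        po = np.trace(c) / n                            # observed agreement
        pe = (c.sum(axis=0) @ c.sum(axis=1)) / n ** 2   # chance agreement
        return (po - pe) / (1.0 - pe)

    # Hypothetical cross-tabulation of the 28 patients
    # (rows: observer 2; columns: observer 1; order: PR, SD, PD)
    conf = [[7, 3, 0],
            [0, 16, 0],
            [0, 0, 2]]
    print(round(cohens_kappa(conf), 3))                 # -> 0.793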
MCNP output data analysis with ROOT (MODAR)
NASA Astrophysics Data System (ADS)
Carasco, C.
2010-12-01
MCNP Output Data Analysis with ROOT (MODAR) is a tool based on CERN's ROOT software. MODAR has been designed to handle time-energy data produced by MCNP simulations of neutron inspection devices using the associated particle technique. MODAR exploits ROOT's graphical user interface and functionalities to visualize and process MCNP simulation results in a fast and user-friendly way. MODAR allows the user to take into account the detection system's time resolution (which is not possible with MCNP) as well as detector energy response functions and counting statistics in a straightforward way. New version program summary Program title: MODAR Catalogue identifier: AEGA_v1_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGA_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 150 927 No. of bytes in distributed program, including test data, etc.: 4 981 633 Distribution format: tar.gz Programming language: C++ Computer: Most Unix workstations and PCs Operating system: Most Unix systems, Linux and Windows, provided the ROOT package has been installed. Examples were tested under SuSE Linux and Windows XP. RAM: Depends on the size of the MCNP output file. The example presented in the article, which involves three two-dimensional 139×740-bin histograms, allocates about 60 MB; this figure applies to running under ROOT and includes ROOT's own consumption. Classification: 17.6 Catalogue identifier of previous version: AEGA_v1_0 Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 1161 External routines: ROOT version 5.24.00 (http://root.cern.ch/drupal/) Does the new version supersede the previous version?: Yes Nature of problem: The output of an MCNP simulation is an ASCII file. The data processing is usually performed by copying and pasting the relevant parts of the ASCII file into Microsoft Excel. Such an approach is satisfactory when the quantity of data is small, but is not efficient when the size of the simulated data is large, for example when time-energy correlations are studied in detail, as in problems involving the associated particle technique. In addition, since the finite time resolution of the simulated detector cannot be modeled with MCNP, systems in which the time-energy correlation is crucial cannot be described in a satisfactory way. Finally, realistic particle energy deposits in detectors are calculated with MCNP in a two-step process involving type-5 then type-8 tallies. In the first step, the photon flux energy spectrum associated with a time region is selected and serves as a source energy distribution for the second step. Thus, several files must be manipulated before getting the result, which can be time-consuming if one needs to study several time regions or different detector performances. In the same way, modeling the counting statistics obtained in a limited acquisition time requires several steps and can also be time-consuming. Solution method: In order to overcome the previous limitations, the MODAR C++ code has been written to make use of CERN's ROOT data analysis software. MCNP output data are read from the MCNP output file with dedicated routines. Two-dimensional histograms are filled and can be handled efficiently within the ROOT framework. To keep a user-friendly analysis tool, all processing and data display can be done by means of the ROOT Graphical User Interface.
Specific routines have been written to include the detectors' finite time resolution and energy response functions, as well as counting statistics, in a straightforward way. Reasons for new version: For applications involving the associated particle technique, a large number of gamma rays are produced by fast neutron interactions. To study the energy spectra, it is useful to identify the gamma-ray energy peaks in a straightforward way. Therefore, the possibility of displaying the gamma rays corresponding to specific reactions has been added to MODAR. Summary of revisions: It is possible to use a gamma-ray database to better identify gamma-ray peaks in the energy spectra, together with their first and second escape peaks. Histograms can be scaled by the number of source particles to evaluate the number of counts that is expected without statistical uncertainties. Additional comments: The possibility of adding tallies has also been incorporated into MODAR in order to describe systems in which the signals from several detectors are summed. Moreover, MODAR can be adapted to handle other problems involving two-dimensional data. Running time: The CPU time needed to smear a two-dimensional histogram depends on the size of the histogram. In the presented example, the time-energy smearing of one of the 139×740 two-dimensional histograms takes about 3 minutes on a DELL computer equipped with an Intel Core 2 processor.
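As an illustration of this kind of time-resolution smearing, the sketch below convolves each energy slice of a (time × energy) histogram with a Gaussian kernel. It is a minimal numpy rendition of the idea, not MODAR's actual ROOT-based routine; the function name and arguments are hypothetical.

```python
import numpy as np

def smear_time_axis(hist2d, t_bin_width, sigma_t):
    """Emulate detector time resolution by convolving each energy column
    of a (time x energy) histogram with a Gaussian of width sigma_t."""
    half = int(np.ceil(4 * sigma_t / t_bin_width))
    offsets = np.arange(-half, half + 1) * t_bin_width
    kernel = np.exp(-0.5 * (offsets / sigma_t) ** 2)
    kernel /= kernel.sum()                    # preserve total counts
    out = np.empty_like(hist2d, dtype=float)
    for j in range(hist2d.shape[1]):          # one convolution per energy bin
        out[:, j] = np.convolve(hist2d[:, j], kernel, mode="same")
    return out
```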
DOE Office of Scientific and Technical Information (OSTI.GOV)
KURTZER, GREGORY; MURIKI, KRISHNA
Singularity is a container solution designed to facilitate mobility of compute across systems and HPC infrastructures. It does this by creating minimal containers that are defined by a specfile; files from the host system are used to build the container. The resulting container can then be launched by any Linux computer with Singularity installed, regardless of whether the programs inside the container are present on the target system, are a different version, or are even incompatible versions. Singularity achieves extreme portability without sacrificing usability, thus meeting the need for mobility of compute. Singularity containers can be executed within a normal/standard command-line process flow.
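As a hedged sketch (specfile keywords vary somewhat between Singularity releases), a minimal definition file of the kind described above might look like the following; a container built from it could then be run on any Linux host with Singularity installed, for example via `singularity exec`:

```
Bootstrap: docker
From: ubuntu:16.04

%post
    # commands run once at build time, inside the container
    apt-get update && apt-get install -y python

%runscript
    # default behavior when the container is executed
    exec python "$@"
```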
NASA Astrophysics Data System (ADS)
Galaev, S. A.; Ris, V. V.; Smirnov, E. M.; Babiev, A. N.
2018-06-01
Experience gained from designing exhaust hoods for modernized versions of K-175/180-12.8 and K-330-23.5-1 steam turbines is presented. The hood flow path is optimized based on the results of analyzing equilibrium wet-steam 3D flow fields calculated using up-to-date computational fluid dynamics techniques. The mathematical model, constructed on the basis of the Reynolds-averaged Navier-Stokes equations, is validated by comparing the calculated kinetic energy loss with published data from full-scale experiments on the hood used in the K-160-130 turbine produced by the Kharkiv Turbine-Generator Works. Test calculations were carried out for four turbine operation modes. The results of validating the model on the K-160-130 turbine hood were as positive as the results of the previously performed calculations of the flow pattern in the K-300-240 turbine hood. It is shown that the calculated coefficients of total losses in the K-160-130 turbine hood differ from the full-scale test data by no more than 5%. As a result of optimizing the K-175/180-12.8 turbine hood flow path, the total loss coefficient has been decreased from 1.50 for the initial design to 1.05 for the best of the modified versions. The optimized hood is almost completely free of supersonic flow areas, and the flow through it has become essentially more uniform, both inside the hood and at its outlet. In the modified version of the K-330-23.5-1 turbine hood, the total loss coefficient has been decreased by more than a factor of 2: from 2.3 in the initial design to a value of 1.1 calculated for the final design version and the sizes adopted for developing the detailed design. The substantially better performance of both hoods with respect to their initial designs was achieved through multicase calculations in which the geometrical characteristics of the flow path were varied sequentially, including options involving its maximum possible expansion and the removal of guiding plates that produced an adverse effect.
Automated system for the calibration of magnetometers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrucha, Vojtech; Kaspar, Petr; Ripka, Pavel
2009-04-01
A completely nonmagnetic calibration platform has been developed and constructed at DTU Space (Technical University of Denmark). It is intended for on-site scalar calibration of high-precision fluxgate magnetometers. An enhanced version of the same platform is being built at the Czech Technical University. There are three axes of rotation in this design (compared to two axes in the previous version). The addition of the third axis allows us to calibrate more complex devices; an electronic compass based on a vector fluxgate magnetometer and a micro-electro-mechanical-systems (MEMS) accelerometer is one example. The new platform can also be used to evaluate the parameters of the compass for all possible variations in azimuth, pitch, and roll. The system is based on piezoelectric motors, which are placed on a platform made of aluminum, brass, plastic, and glass. Position sensing is accomplished through custom-made optical incremental sensors. The system is controlled by a microcontroller, which executes commands from a computer. The properties of the system as well as calibration and measurement results are presented.
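For context, scalar calibration of a vector magnetometer typically means fitting sensor parameters so that the magnitude of the corrected vector readings matches a known scalar reference field in every orientation. A simplified numpy/scipy sketch of such a fit (gains and offsets only; the platform's actual procedure is not specified here, and real calibrations also estimate non-orthogonality angles):

```python
import numpy as np
from scipy.optimize import least_squares

def scalar_calibration(raw, b_ref):
    """raw: (N, 3) fluxgate readings taken in many orientations;
    b_ref: scalar magnitude of the reference field (e.g. from a proton
    magnetometer). Returns fitted per-axis gains and offsets."""
    def residual(p):
        gains, offsets = p[:3], p[3:]
        corrected = (raw - offsets) / gains
        return np.linalg.norm(corrected, axis=1) - b_ref
    p0 = np.concatenate([np.ones(3), np.zeros(3)])  # initial guess
    return least_squares(residual, p0).x
```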
TFaNS-Tone Fan Noise Design/Prediction System: Users' Manual TFaNS Version 1.5
NASA Technical Reports Server (NTRS)
Topol, David A.; Huff, Dennis L. (Technical Monitor)
2003-01-01
TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Glenn. The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. The first version of this design system was developed under a previous NASA contract, and several improvements have since been made to TFaNS. This users' manual shows how to run the new system. TFaNS consists of: codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; the CUP3D Fan Noise Coupling Code, which reads these files, solves the coupling problem, and outputs the desired noise predictions; and the AWAKEN CFD/Measured Wake Postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This report provides information on code input and file structure essential for potential users of TFaNS.
NASA Astrophysics Data System (ADS)
Helm, Anton; Vieira, Jorge; Silva, Luis; Fonseca, Ricardo
2016-10-01
Laser-driven accelerators have gained increased attention over the past decades. Typical modeling techniques for laser wakefield acceleration (LWFA) are based on particle-in-cell (PIC) simulations. PIC simulations, however, are computationally very expensive due to the disparity of the relevant scales, which range from the laser wavelength, in the micrometer range, to the acceleration length, currently beyond the ten-centimeter range. The ponderomotive guiding center (PGC) algorithm is a promising approach to bridging these disparate scales. Because the evolution of the laser pulse envelope is described separately, only the scales larger than the plasma wavelength need to be resolved in the PGC algorithm, leading to speedups of several orders of magnitude. Previous work was limited to two dimensions. Here we present the implementation of the 3D version of a PGC solver in the massively parallel, fully relativistic PIC code OSIRIS. We extended the solver to include periodic boundary conditions and parallelization in all spatial dimensions. We present benchmarks for distributed and shared memory parallelization. We also discuss the stability of the PGC solver.
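As a hedged sketch of what parallelization in all spatial dimensions with periodic boundaries involves (an mpi4py toy, not OSIRIS code), one can build a periodic 3-D Cartesian communicator and exchange boundary slabs with each neighbor:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
dims = MPI.Compute_dims(comm.Get_size(), 3)          # 3-D process grid
cart = comm.Create_cart(dims, periods=[True] * 3)    # periodic boundaries

local = np.full((8, 8, 8), float(cart.Get_rank()))   # toy local field block
for axis in range(3):
    src, dst = cart.Shift(axis, 1)                   # periodic neighbors
    send = np.ascontiguousarray(np.take(local, [-1], axis=axis))
    recv = np.empty_like(send)
    cart.Sendrecv(send, dest=dst, recvbuf=recv, source=src)
    # recv now holds the neighboring rank's boundary slab along this axis
```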
Coupling of TRAC-PF1/MOD2, Version 5.4.25, with NESTLE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knepper, P.L.; Hochreiter, L.E.; Ivanov, K.N.
1999-09-01
A three-dimensional (3-D) spatial kinetics capability within a thermal-hydraulics system code provides a more accurate description of the core physics during reactor transients that involve significant variations in the neutron flux distribution. Coupled codes provide the ability to forecast safety margins in a best-estimate manner: the behavior of a reactor core and the feedback to the plant dynamics can be accurately simulated. For each time step, coupled codes are capable of resolving system interaction effects on neutronics feedback and of describing local neutronics effects caused by the thermal-hydraulics and neutronics coupling. With the improvements in computational technology, modeling complex reactor behaviors with coupled thermal hydraulics and spatial kinetics is feasible. Previously, reactor analysis codes were limited to either a detailed thermal-hydraulics model with simplified kinetics or multidimensional neutron kinetics with a simplified thermal-hydraulics model. The authors discuss the coupling of the Transient Reactor Analysis Code (TRAC)-PF1/MOD2, Version 5.4.25, with the NESTLE code.
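A schematic of the kind of per-time-step coupling described here might look as follows; every name below is an illustrative placeholder, not the actual TRAC or NESTLE interface:

```python
def coupled_transient(th_solver, kinetics_solver, t_end, dt):
    """Explicit operator-splitting loop: thermal hydraulics and 3-D
    kinetics exchange feedback data once per time step."""
    t, power = 0.0, kinetics_solver.initial_power()
    while t < t_end:
        # advance thermal hydraulics using the current power distribution
        fuel_temp, coolant_density = th_solver.advance(power, dt)
        # update cross sections from local feedback, then advance neutronics
        kinetics_solver.update_feedback(fuel_temp, coolant_density)
        power = kinetics_solver.advance(dt)
        t += dt
    return power
```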
Multiple point least squares equalization in a room
NASA Technical Reports Server (NTRS)
Elliott, S. J.; Nelson, P. A.
1988-01-01
Equalization filters designed to minimize the mean square error between a delayed version of the original electrical signal and the equalized response at a point in a room have previously been investigated. In general, such a strategy degrades the response at positions in the room away from the equalization point. A method is presented for designing an equalization filter by adjusting the filter coefficients to minimize the sum of the squares of the errors between the equalized responses at multiple points in the room and delayed versions of the original electrical signal. Such an equalization filter can give a more uniform frequency response over a greater volume of the enclosure than the single-point equalizer described above. Computer simulation results are presented for equalizing the frequency responses from a loudspeaker to various typical ear positions, in a room with dimensions and acoustic damping typical of a car interior, using the two approaches outlined above. Adaptive filter algorithms, which can automatically adjust the coefficients of a digital equalization filter to achieve this minimization, are also discussed.
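The multiple-point design reduces to an ordinary least-squares problem: stack the convolution matrices of the room responses at all points and fit one filter to all the delayed-impulse targets simultaneously. A minimal numpy sketch (function and variable names are illustrative):

```python
import numpy as np

def design_multipoint_equalizer(room_irs, filt_len, delay):
    """room_irs: list of impulse responses, one per equalization point.
    Finds FIR coefficients w minimizing sum_m ||delta_delay - h_m * w||^2."""
    blocks, targets = [], []
    for h in room_irs:
        n_out = len(h) + filt_len - 1
        C = np.zeros((n_out, filt_len))
        for k in range(filt_len):                # convolution matrix of h
            C[k:k + len(h), k] = h
        d = np.zeros(n_out)
        d[delay] = 1.0                           # target: a pure delay
        blocks.append(C)
        targets.append(d)
    A, b = np.vstack(blocks), np.concatenate(targets)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w
```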
Designing action games for appealing to buyers.
Hsu, Shang Hwa; Lee, Feng-Liang; Wu, Muh-Cherng
2005-12-01
This study aims to identify design features of action games that appeal to game-buyers rather than game-players. Sixteen frequent buyers of computer games identified 39 design features that appeal to buyers by contrasting different versions of Pacman games. Twenty-eight versions of Pacman were then evaluated in terms of the identified design features by 45 participants (27 male and 18 female college students). Qnet2000 neural network software was used to determine the relative importance of these design features. The results indicated that the top 10 most important of the 39 design features could account for more than 50% of "perceived fun." The avatar feature is important to game-buyers, yet was not revealed in previous player-oriented studies. Moreover, six design factors underlying the 39 features were identified through factor analysis: "novelty and powerfulness," "appealing presentation," "interactivity," "challenging," "sense of control," and "rewarding," which together accounted for 54% of the total variance. Among these six factors, appealing presentation has not been emphasized by player-oriented research. Implications of the findings are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Werner, Mike
Why this utility? After years of upgrading the Java Runtime Environment (JRE) or the Java Software Development Kit (JDK/SDK), a Windows computer becomes littered with so many old versions that the machine may become a security risk due to exploits targeting those older versions. This utility helps mitigate those vulnerabilities by searching for, and removing, versions 1.3.x through 1.7.x of the Java JRE and/or JDK/SDK.
An all-FORTRAN version of NASTRAN for the VAX
NASA Technical Reports Server (NTRS)
Purves, L.
1981-01-01
An all-FORTRAN version of the NASA structural analysis program NASTRAN has been implemented on DEC VAX-series computers. Applications of NASTRAN extend to almost every type of linear structure and construction. Two special features are available in the VAX version: the program can be executed from a terminal in a manner permitting use of the VAX interactive debugger, and links can be interactively restarted when desired by first making a copy of all NASTRAN work files.
NASA Astrophysics Data System (ADS)
Regnier, D.; Dubray, N.; Verrière, M.; Schunck, N.
2018-04-01
The time-dependent generator coordinate method (TDGCM) is a powerful method to study the large-amplitude collective motion of quantum many-body systems such as atomic nuclei. Under the Gaussian Overlap Approximation (GOA), the TDGCM leads to a local, time-dependent Schrödinger equation in a multi-dimensional collective space. In this paper, we present version 2.0 of the code FELIX, which solves the collective Schrödinger equation in a finite element basis. This new version features: (i) the ability to solve a generalized TDGCM+GOA equation with a metric term in the collective Hamiltonian, (ii) support for new kinds of finite elements and different types of quadrature to compute the discretized Hamiltonian and overlap matrices, (iii) the possibility to leverage the spectral element scheme, (iv) an explicit Krylov approximation of the time propagator for time integration, instead of the implicit Crank-Nicolson method implemented in the first version, and (v) an entirely redesigned workflow. We benchmark this release on an analytic problem as well as on realistic two-dimensional calculations of the low-energy fission of 240Pu and 256Fm. Low- to moderate-precision calculations are most efficiently performed with simplex elements with a degree-2 polynomial basis. Higher-precision calculations should instead use the spectral element method with a degree-4 polynomial basis. We emphasize that in a realistic calculation of fission mass distributions of 240Pu, FELIX-2.0 is about 20 times faster than its previous release (within a numerical precision of a few percent).
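For intuition about item (iv), the sketch below advances a toy 1-D collective wave packet by applying the exponential of the Hamiltonian with scipy's expm_multiply, which uses a polynomial (Krylov-like) approximation of the propagator and so avoids the linear solve required at each implicit Crank-Nicolson step. This is a schematic illustration under simplified assumptions, not FELIX's actual scheme:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

n, dq, dt = 200, 0.05, 0.01
q = np.arange(n) * dq
V = 0.5 * (q - q.mean()) ** 2                      # toy collective potential
# finite-difference Hamiltonian H = -(1/2) d^2/dq^2 + V(q)
H = diags([-0.5 / dq**2 * np.ones(n - 1),
           1.0 / dq**2 + V,
           -0.5 / dq**2 * np.ones(n - 1)], offsets=[-1, 0, 1])

psi = np.exp(-((q - q.mean()) / 0.5) ** 2).astype(complex)
for _ in range(100):                               # explicit propagator steps
    psi = expm_multiply(-1j * dt * H, psi)
```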
Why Does a Method That Fails Continue To Be Used: The Answer
Templeton, Alan R.
2009-01-01
It has been claimed that hundreds of researchers use nested clade phylogeographic analysis (NCPA) based on what the method promises rather than requiring objective validation of the method. The supposed failure of NCPA is based upon the argument that validating it by using positive controls ignored type I error, and that computer simulations have shown a high type I error. The first argument is factually incorrect: the previously published validation analysis fully accounted for both type I and type II errors. The simulations that indicate a 75% type I error rate have serious flaws and only evaluate outdated versions of NCPA. These outdated type I error rates fall precipitously when the 2003 version of single-locus NCPA is used or when the 2002 multi-locus version of NCPA is used. It is shown that the treewise type I errors in single-locus NCPA can be corrected to the desired nominal level by a simple statistical procedure, and that multilocus NCPA reconstructs a simulated scenario used to discredit NCPA with 100% accuracy. Hence, NCPA is not a failed method at all, but rather has been validated both by actual data and by simulated data in a manner that satisfies the published criteria given by its critics. The critics have come to different conclusions because they have focused on the pre-2002 versions of NCPA and have failed to take into account the extensive developments in NCPA since 2002. Hence, researchers can choose to use NCPA based upon objective critical validation that shows that NCPA delivers what it promises. PMID:19335340
NASA Technical Reports Server (NTRS)
Rabideau, Gregg; Chien, Steve; Knight, Russell; Schaffer, Steven; Tran, Daniel; Cichy, Benjamin; Sherwood, Robert
2006-01-01
The Automated Scheduling and Planning Environment (ASPEN) computer program has been updated to version 3.0. ASPEN is a modular, reconfigurable, application software framework for solving batch problems that involve reasoning about time, activities, states, and resources. Applications of ASPEN can include planning spacecraft missions, scheduling of personnel, and managing supply chains, inventories, and production lines. ASPEN 3.0 can be customized for a wide range of applications and for a variety of computing environments that include various central processing units and random access memories.
AA9int: SNP Interaction Pattern Search Using Non-Hierarchical Additive Model Set.
Lin, Hui-Yi; Huang, Po-Yu; Chen, Dung-Tsa; Tung, Heng-Yuan; Sellers, Thomas A; Pow-Sang, Julio; Eeles, Rosalind; Easton, Doug; Kote-Jarai, Zsofia; Amin Al Olama, Ali; Benlloch, Sara; Muir, Kenneth; Giles, Graham G; Wiklund, Fredrik; Gronberg, Henrik; Haiman, Christopher A; Schleutker, Johanna; Nordestgaard, Børge G; Travis, Ruth C; Hamdy, Freddie; Neal, David E; Pashayan, Nora; Khaw, Kay-Tee; Stanford, Janet L; Blot, William J; Thibodeau, Stephen N; Maier, Christiane; Kibel, Adam S; Cybulski, Cezary; Cannon-Albright, Lisa; Brenner, Hermann; Kaneva, Radka; Batra, Jyotsna; Teixeira, Manuel R; Pandha, Hardev; Lu, Yong-Jie; Park, Jong Y
2018-06-07
The use of single nucleotide polymorphism (SNP) interactions to predict complex diseases has received increasing attention during the past decade, but related statistical methods are still immature. We previously proposed the SNP Interaction Pattern Identifier (SIPI) approach to evaluate 45 SNP interaction patterns. SIPI is statistically powerful but carries a large computational burden. For large-scale studies, it is necessary to use a powerful and computationally efficient method. The objective of this study is to develop an evidence-based mini-version of SIPI for use as a screening tool or on its own, and to evaluate the impact of inheritance mode and model structure on detecting SNP-SNP interactions. We tested two candidate approaches: the 'Five-Full' and 'AA9int' methods. The Five-Full approach is composed of the five full interaction models considering three inheritance modes (additive, dominant, and recessive). The AA9int approach is composed of nine interaction models obtained by considering a non-hierarchical model structure and the additive mode. Our simulation results show that AA9int has statistical power similar to SIPI and is superior to the Five-Full approach, and that the impact of the non-hierarchical model structure is greater than that of the inheritance mode in detecting SNP-SNP interactions. In summary, AA9int is recommended as a powerful tool to be used either alone or as the screening stage of a two-stage approach (AA9int+SIPI) for detecting SNP-SNP interactions in large-scale studies. The 'AA9int' and 'parAA9int' functions (standard and parallel computing versions) have been added to the SIPI R package, which is freely available at https://linhuiyi.github.io/LinHY_Software/. hlin1@lsuhsc.edu. Supplementary data are available at Bioinformatics online.
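To make "non-hierarchical" concrete: such a model set may include an interaction term without all of the corresponding main effects, a structure a hierarchical search would never consider. A hedged Python sketch of fitting one such model under additive coding (the released AA9int itself is an R function; everything below, including the simulated data, is illustrative):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
snp1 = rng.integers(0, 3, 500)        # additive coding: minor-allele count 0/1/2
snp2 = rng.integers(0, 3, 500)
y = rng.integers(0, 2, 500)           # placeholder case/control status

# Non-hierarchical model: SNP1 main effect plus interaction, no SNP2 main effect
X = sm.add_constant(np.column_stack([snp1, snp1 * snp2]))
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params)                     # intercept, SNP1, SNP1 x SNP2
```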
Gigaflop performance on a CRAY-2: Multitasking a computational fluid dynamics application
NASA Technical Reports Server (NTRS)
Tennille, Geoffrey M.; Overman, Andrea L.; Lambiotte, Jules J.; Streett, Craig L.
1991-01-01
The methodology is described for converting a large, long-running application code that executed on a single processor of a CRAY-2 supercomputer to a version that executes efficiently on multiple processors. Although the conversion of every application is different, a discussion of the types of modification used to achieve gigaflop performance is included to assist others in the parallelization of applications for CRAY computers, especially those that were developed for other computers. An existing application from the discipline of computational fluid dynamics, which had used over 2000 hours of CPU time on the CRAY-2 during the previous year, was chosen as a test case to study the effectiveness of multitasking on a CRAY-2. The nature of the dominant calculations within the application indicated that a sustained computational rate of 1 billion floating-point operations per second, or 1 gigaflop, might be achieved. The code was first analyzed and modified for optimal performance on a single processor in a batch environment. After optimal performance on a single CPU was achieved, the code was modified to use multiple processors in a dedicated environment. The results of these two efforts were merged into a single code that sustained a computational rate of over 1 gigaflop on a CRAY-2. Timings and analysis of performance are given for both single- and multiple-processor runs.
Evaluation of the MOSTAS computer code for predicting dynamic loads in two-bladed wind turbines
NASA Technical Reports Server (NTRS)
Kaza, K. R. V.; Janetzke, D. C.; Sullivan, T. L.
1979-01-01
Calculated dynamic blade loads were compared with measured loads over a range of yaw stiffnesses of the DOE/NASA Mod-0 wind turbine to evaluate the performance of two versions of the MOSTAS computer code. The first version uses a time-averaged-coefficient approximation in conjunction with a multi-blade coordinate transformation for two-bladed rotors to solve the equations of motion by standard eigenanalysis. The second version accounts for periodic coefficients while solving the equations by time-history integration. A hypothetical three-degree-of-freedom dynamic model was investigated. The exact equations of motion of this model were solved using the Floquet-Liapunov method. The equations with time-averaged coefficients were solved by standard eigenanalysis.
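For reference, the Floquet-Liapunov approach assesses the stability of a linear system with periodic coefficients from the eigenvalues of the monodromy matrix, obtained by integrating the system over one coefficient period. A hedged scipy sketch on a toy two-state system (not the MOSTAS equations):

```python
import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi                                 # coefficient period

def A(t):                                     # toy periodic system matrix
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.3 * np.cos(t)), -0.05]])

def rhs(t, phi_flat):                         # d(Phi)/dt = A(t) @ Phi
    return (A(t) @ phi_flat.reshape(2, 2)).ravel()

sol = solve_ivp(rhs, [0.0, T], np.eye(2).ravel(), rtol=1e-9)
multipliers = np.linalg.eigvals(sol.y[:, -1].reshape(2, 2))
stable = np.all(np.abs(multipliers) < 1.0)    # all inside unit circle => stable
```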
FHWA Traffic Noise Model user's guide (version 2.0 addendum).
DOT National Transportation Integrated Search
2002-03-01
In March 1998, the Federal Highway Administration (FHWA) Office of Natural Environment, released the FHWA Traffic Noise Model (FHWA TNM) Version 1.0, a state-of-the-art computer program for highway traffic noise prediction and analysis. Since t...
Carbon monoxide screen for signalized intersections : COSIM, version 4.0 - technical documentation.
DOT National Transportation Integrated Search
2013-06-01
Illinois Carbon Monoxide Screen for Intersection Modeling (COSIM) Version 3.0 is a Windows-based computer program currently used by the Illinois Department of Transportation (IDOT) to estimate worst-case carbon monoxide (CO) concentrations near s...
HNM, heliport noise model : version 2.2 user's guide
DOT National Transportation Integrated Search
1994-02-01
The John A. Volpe National Transportation Systems Center (Volpe Center), in support of the Federal Aviation Administration, Office of Environment and Energy, has developed Version 2.2 of the Heliport Noise Model (HNM). The HNM is a computer progr...
NASA Technical Reports Server (NTRS)
Gladden, Roy E.; Khanampornpan, Teerapat; Fisher, Forest W.
2010-01-01
Version 5.0 of the AutoGen software has been released. Previous versions, variously denoted Autogen and autogen, were reported in two articles: Automated Sequence Generation Process and Software (NPO-30746), Software Tech Briefs (Special Supplement to NASA Tech Briefs), September 2007, page 30, and Autogen Version 2.0 (NPO-41501), NASA Tech Briefs, Vol. 31, No. 10 (October 2007), page 58. To recapitulate: AutoGen (now signifying "automatic sequence generation") automates the generation of sequences of commands in a standard format for uplink to spacecraft. AutoGen requires fewer workers than are needed for older manual sequence-generation processes, and it greatly reduces sequence-generation times. The sequences are embodied in spacecraft activity sequence files (SASFs). AutoGen automates the generation of SASFs by use of another previously reported program called APGEN. AutoGen encodes knowledge of different mission phases and of how the resultant commands must differ among the phases. AutoGen also provides means for customizing sequences through the use of configuration files. The approach followed in developing AutoGen has involved encoding the behaviors of a system into a model and encoding algorithms for context-sensitive customizations of the modeled behaviors. This version of AutoGen addresses the MRO (Mars Reconnaissance Orbiter) primary science phase (PSP); on previous Mars missions this phase has more commonly been referred to as the mapping phase. This version addresses the unique aspects of sequencing orbital operations, and specifically the mission-specific adaptation of orbital operations for MRO. It also includes capabilities for MRO's role in Mars relay support for UHF relay communications with the MER rovers and the Phoenix lander.
Wright-Berryman, Jennifer L; Salyers, Michelle P; O'Halloran, James P; Kemp, Aaron S; Mueser, Kim T; Diazoni, Amanda J
2013-12-01
To explore mental health consumer and provider responses to a computerized version of the Illness Management and Recovery (IMR) program, semistructured interviews were conducted with 6 providers and 12 consumers who participated in a computerized prototype of the IMR program. An inductive, consensus-based approach was used to analyze the interview responses. Qualitative analysis revealed that consumers perceived various personal benefits and appreciated the ease of use afforded by the new technology platform. Consumers also highly valued provider assistance and offered several suggestions to improve the program. The largest perceived barriers to future implementation were lack of computer skills and access to computers. Similarly, IMR providers commented on the program's ease and convenience and on the reduction of time-intensive material preparation. Providers also noted that the use of technology creates more options for the consumer to access treatment. The technology was acceptable, easy to use, and well liked by consumers and providers. Clinician assistance with technology was viewed as helpful for getting clients started with the program, as lack of computer skills and access to computers was a concern. Access to materials between sessions appears to be desired; however, given the perceived barriers of computer skills and computer access, additional supports may be needed for consumers to achieve the full benefits of a computerized version of IMR. PsycINFO Database Record (c) 2013 APA, all rights reserved.
TRASYS - THERMAL RADIATION ANALYZER SYSTEM (DEC VAX VERSION WITH NASADIG)
NASA Technical Reports Server (NTRS)
Anderson, G. E.
1994-01-01
The Thermal Radiation Analyzer System, TRASYS, is a computer software system with generalized capability to solve the radiation related aspects of thermal analysis problems. TRASYS computes the total thermal radiation environment for a spacecraft in orbit. The software calculates internode radiation interchange data as well as incident and absorbed heat rate data originating from environmental radiant heat sources. TRASYS provides data of both types in a format directly usable by such thermal analyzer programs as SINDA/FLUINT (available from COSMIC, program number MSC-21528). One primary feature of TRASYS is that it allows users to write their own driver programs to organize and direct the preprocessor and processor library routines in solving specific thermal radiation problems. The preprocessor first reads and converts the user's geometry input data into the form used by the processor library routines. Then, the preprocessor accepts the user's driving logic, written in the TRASYS modified FORTRAN language. In many cases, the user has a choice of routines to solve a given problem. Users may also provide their own routines where desirable. In particular, the user may write output routines to provide for an interface between TRASYS and any thermal analyzer program using the R-C network concept. Input to the TRASYS program consists of Options and Edit data, Model data, and Logic Flow and Operations data. Options and Edit data provide for basic program control and user edit capability. The Model data describe the problem in terms of geometry and other properties. This information includes surface geometry data, documentation data, nodal data, block coordinate system data, form factor data, and flux data. Logic Flow and Operations data house the user's driver logic, including the sequence of subroutine calls and the subroutine library. Output from TRASYS consists of two basic types of data: internode radiation interchange data, and incident and absorbed heat rate data. The flexible structure of TRASYS allows considerable freedom in the definition and choice of solution method for a thermal radiation problem. The program's flexible structure has also allowed TRASYS to retain the same basic input structure as the authors update it in order to keep up with changing requirements. Among its other important features are the following: 1) up to 3200 node problem size capability with shadowing by intervening opaque or semi-transparent surfaces; 2) choice of diffuse, specular, or diffuse/specular radiant interchange solutions; 3) a restart capability that minimizes recomputing; 4) macroinstructions that automatically provide the executive logic for orbit generation that optimizes the use of previously completed computations; 5) a time variable geometry package that provides automatic pointing of the various parts of an articulated spacecraft and an automatic look-back feature that eliminates redundant form factor calculations; 6) capability to specify submodel names to identify sets of surfaces or components as an entity; and 7) subroutines to perform functions which save and recall the internodal and/or space form factors in subsequent steps for nodes with fixed geometry during a variable geometry run. There are two machine versions of TRASYS v27: a DEC VAX version and a Cray UNICOS version. Both versions require installation of the NASADIG library (MSC-21801 for DEC VAX or COS-10049 for CRAY), which is available from COSMIC either separately or bundled with TRASYS. 
The NASADIG (NASA Device Independent Graphics Library) plot package provides a pictorial representation of input geometry, orbital/orientation parameters, and heating rate output as a function of time. NASADIG supports Tektronix terminals. The CRAY version of TRASYS v27 is written in FORTRAN 77 for batch or interactive execution and has been implemented on CRAY X-MP and CRAY Y-MP series computers running UNICOS. The standard distribution medium for MSC-21959 (CRAY version without NASADIG) is a 1600 BPI 9-track magnetic tape in UNIX tar format. The standard distribution medium for COS-10040 (CRAY version with NASADIG) is a set of two 6250 BPI 9-track magnetic tapes in UNIX tar format. Alternate distribution media and formats are available upon request. The DEC VAX version of TRASYS v27 is written in FORTRAN 77 for batch execution (only the plotting driver program is interactive) and has been implemented on a DEC VAX 8650 computer under VMS. Since the source codes for MSC-21030 and COS-10026 are in VAX/VMS text library files and DEC Command Language files, COSMIC will only provide these programs in the following formats: MSC-21030, TRASYS (DEC VAX version without NASADIG) is available on a 1600 BPI 9-track magnetic tape in VAX BACKUP format (standard distribution medium) or in VAX BACKUP format on a TK50 tape cartridge; COS-10026, TRASYS (DEC VAX version with NASADIG), is available in VAX BACKUP format on a set of three 6250 BPI 9-track magnetic tapes (standard distribution medium) or a set of three TK50 tape cartridges in VAX BACKUP format. TRASYS was last updated in 1993.
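To make the role of the internode radiation interchange data concrete, a thermal analyzer built on the R-C network concept ultimately evaluates net radiant exchanges of the form Q_i = sigma * A_i * sum_j F_ij * (T_j^4 - T_i^4). A small numpy sketch under that assumption (illustrative only; TRASYS itself is FORTRAN, and its file formats are not shown here):

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_radiation(areas, script_f, temps):
    """Net radiant gain of each node: Q_i = sigma * A_i * sum_j
    script_f[i, j] * (T_j**4 - T_i**4), with script_f playing the role of
    TRASYS's internode radiation interchange data."""
    t4 = temps ** 4
    return SIGMA * areas * (script_f @ t4 - script_f.sum(axis=1) * t4)
```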
TRASYS - THERMAL RADIATION ANALYZER SYSTEM (CRAY VERSION WITH NASADIG)
NASA Technical Reports Server (NTRS)
Anderson, G. E.
1994-01-01
The Thermal Radiation Analyzer System, TRASYS, is a computer software system with generalized capability to solve the radiation related aspects of thermal analysis problems. TRASYS computes the total thermal radiation environment for a spacecraft in orbit. The software calculates internode radiation interchange data as well as incident and absorbed heat rate data originating from environmental radiant heat sources. TRASYS provides data of both types in a format directly usable by such thermal analyzer programs as SINDA/FLUINT (available from COSMIC, program number MSC-21528). One primary feature of TRASYS is that it allows users to write their own driver programs to organize and direct the preprocessor and processor library routines in solving specific thermal radiation problems. The preprocessor first reads and converts the user's geometry input data into the form used by the processor library routines. Then, the preprocessor accepts the user's driving logic, written in the TRASYS modified FORTRAN language. In many cases, the user has a choice of routines to solve a given problem. Users may also provide their own routines where desirable. In particular, the user may write output routines to provide for an interface between TRASYS and any thermal analyzer program using the R-C network concept. Input to the TRASYS program consists of Options and Edit data, Model data, and Logic Flow and Operations data. Options and Edit data provide for basic program control and user edit capability. The Model data describe the problem in terms of geometry and other properties. This information includes surface geometry data, documentation data, nodal data, block coordinate system data, form factor data, and flux data. Logic Flow and Operations data house the user's driver logic, including the sequence of subroutine calls and the subroutine library. Output from TRASYS consists of two basic types of data: internode radiation interchange data, and incident and absorbed heat rate data. The flexible structure of TRASYS allows considerable freedom in the definition and choice of solution method for a thermal radiation problem. The program's flexible structure has also allowed TRASYS to retain the same basic input structure as the authors update it in order to keep up with changing requirements. Among its other important features are the following: 1) up to 3200 node problem size capability with shadowing by intervening opaque or semi-transparent surfaces; 2) choice of diffuse, specular, or diffuse/specular radiant interchange solutions; 3) a restart capability that minimizes recomputing; 4) macroinstructions that automatically provide the executive logic for orbit generation that optimizes the use of previously completed computations; 5) a time variable geometry package that provides automatic pointing of the various parts of an articulated spacecraft and an automatic look-back feature that eliminates redundant form factor calculations; 6) capability to specify submodel names to identify sets of surfaces or components as an entity; and 7) subroutines to perform functions which save and recall the internodal and/or space form factors in subsequent steps for nodes with fixed geometry during a variable geometry run. There are two machine versions of TRASYS v27: a DEC VAX version and a Cray UNICOS version. Both versions require installation of the NASADIG library (MSC-21801 for DEC VAX or COS-10049 for CRAY), which is available from COSMIC either separately or bundled with TRASYS. 
The NASADIG (NASA Device Independent Graphics Library) plot package provides a pictorial representation of input geometry, orbital/orientation parameters, and heating rate output as a function of time. NASADIG supports Tektronix terminals. The CRAY version of TRASYS v27 is written in FORTRAN 77 for batch or interactive execution and has been implemented on CRAY X-MP and CRAY Y-MP series computers running UNICOS. The standard distribution medium for MSC-21959 (CRAY version without NASADIG) is a 1600 BPI 9-track magnetic tape in UNIX tar format. The standard distribution medium for COS-10040 (CRAY version with NASADIG) is a set of two 6250 BPI 9-track magnetic tapes in UNIX tar format. Alternate distribution media and formats are available upon request. The DEC VAX version of TRASYS v27 is written in FORTRAN 77 for batch execution (only the plotting driver program is interactive) and has been implemented on a DEC VAX 8650 computer under VMS. Since the source codes for MSC-21030 and COS-10026 are in VAX/VMS text library files and DEC Command Language files, COSMIC will only provide these programs in the following formats: MSC-21030, TRASYS (DEC VAX version without NASADIG) is available on a 1600 BPI 9-track magnetic tape in VAX BACKUP format (standard distribution medium) or in VAX BACKUP format on a TK50 tape cartridge; COS-10026, TRASYS (DEC VAX version with NASADIG), is available in VAX BACKUP format on a set of three 6250 BPI 9-track magnetic tapes (standard distribution medium) or a set of three TK50 tape cartridges in VAX BACKUP format. TRASYS was last updated in 1993.
TRASYS - THERMAL RADIATION ANALYZER SYSTEM (DEC VAX VERSION WITHOUT NASADIG)
NASA Technical Reports Server (NTRS)
Vogt, R. A.
1994-01-01
The Thermal Radiation Analyzer System, TRASYS, is a computer software system with generalized capability to solve the radiation related aspects of thermal analysis problems. TRASYS computes the total thermal radiation environment for a spacecraft in orbit. The software calculates internode radiation interchange data as well as incident and absorbed heat rate data originating from environmental radiant heat sources. TRASYS provides data of both types in a format directly usable by such thermal analyzer programs as SINDA/FLUINT (available from COSMIC, program number MSC-21528). One primary feature of TRASYS is that it allows users to write their own driver programs to organize and direct the preprocessor and processor library routines in solving specific thermal radiation problems. The preprocessor first reads and converts the user's geometry input data into the form used by the processor library routines. Then, the preprocessor accepts the user's driving logic, written in the TRASYS modified FORTRAN language. In many cases, the user has a choice of routines to solve a given problem. Users may also provide their own routines where desirable. In particular, the user may write output routines to provide for an interface between TRASYS and any thermal analyzer program using the R-C network concept. Input to the TRASYS program consists of Options and Edit data, Model data, and Logic Flow and Operations data. Options and Edit data provide for basic program control and user edit capability. The Model data describe the problem in terms of geometry and other properties. This information includes surface geometry data, documentation data, nodal data, block coordinate system data, form factor data, and flux data. Logic Flow and Operations data house the user's driver logic, including the sequence of subroutine calls and the subroutine library. Output from TRASYS consists of two basic types of data: internode radiation interchange data, and incident and absorbed heat rate data. The flexible structure of TRASYS allows considerable freedom in the definition and choice of solution method for a thermal radiation problem. The program's flexible structure has also allowed TRASYS to retain the same basic input structure as the authors update it in order to keep up with changing requirements. Among its other important features are the following: 1) up to 3200 node problem size capability with shadowing by intervening opaque or semi-transparent surfaces; 2) choice of diffuse, specular, or diffuse/specular radiant interchange solutions; 3) a restart capability that minimizes recomputing; 4) macroinstructions that automatically provide the executive logic for orbit generation that optimizes the use of previously completed computations; 5) a time variable geometry package that provides automatic pointing of the various parts of an articulated spacecraft and an automatic look-back feature that eliminates redundant form factor calculations; 6) capability to specify submodel names to identify sets of surfaces or components as an entity; and 7) subroutines to perform functions which save and recall the internodal and/or space form factors in subsequent steps for nodes with fixed geometry during a variable geometry run. There are two machine versions of TRASYS v27: a DEC VAX version and a Cray UNICOS version. Both versions require installation of the NASADIG library (MSC-21801 for DEC VAX or COS-10049 for CRAY), which is available from COSMIC either separately or bundled with TRASYS. 
The NASADIG (NASA Device Independent Graphics Library) plot package provides a pictorial representation of input geometry, orbital/orientation parameters, and heating rate output as a function of time. NASADIG supports Tektronix terminals. The CRAY version of TRASYS v27 is written in FORTRAN 77 for batch or interactive execution and has been implemented on CRAY X-MP and CRAY Y-MP series computers running UNICOS. The standard distribution medium for MSC-21959 (CRAY version without NASADIG) is a 1600 BPI 9-track magnetic tape in UNIX tar format. The standard distribution medium for COS-10040 (CRAY version with NASADIG) is a set of two 6250 BPI 9-track magnetic tapes in UNIX tar format. Alternate distribution media and formats are available upon request. The DEC VAX version of TRASYS v27 is written in FORTRAN 77 for batch execution (only the plotting driver program is interactive) and has been implemented on a DEC VAX 8650 computer under VMS. Since the source codes for MSC-21030 and COS-10026 are in VAX/VMS text library files and DEC Command Language files, COSMIC will only provide these programs in the following formats: MSC-21030, TRASYS (DEC VAX version without NASADIG) is available on a 1600 BPI 9-track magnetic tape in VAX BACKUP format (standard distribution medium) or in VAX BACKUP format on a TK50 tape cartridge; COS-10026, TRASYS (DEC VAX version with NASADIG), is available in VAX BACKUP format on a set of three 6250 BPI 9-track magnetic tapes (standard distribution medium) or a set of three TK50 tape cartridges in VAX BACKUP format. TRASYS was last updated in 1993.
Introduction to Quantum Intelligence
NASA Technical Reports Server (NTRS)
Zak, Michail
1996-01-01
The impact of ideas associated with the concept of a hypothetical quantum computer upon classical computing is analyzed. Two fundamental properties of quantum computing, direct simulation of probabilities and influence between different branches of probabilistic scenarios, as well as their classical versions, are discussed.
Global 30m Height Above the Nearest Drainage
NASA Astrophysics Data System (ADS)
Donchyts, Gennadii; Winsemius, Hessel; Schellekens, Jaap; Erickson, Tyler; Gao, Hongkai; Savenije, Hubert; van de Giesen, Nick
2016-04-01
Variability of the Earth's surface is the primary characteristic affecting the flow of surface and subsurface water. Digital elevation models, usually represented as height maps above some well-defined vertical datum, are widely used to compute hydrologic parameters such as local flow directions, drainage area, drainage network pattern, and many others. Deriving these parameters at a global scale usually requires significant effort. One hydrological characteristic introduced in the last decade is Height Above the Nearest Drainage (HAND): a digital elevation model normalized using the nearest drainage. This parameter has been shown to be useful for many hydrological and more general-purpose applications, such as landscape hazard mapping, landform classification, remote sensing, and rainfall-runoff modeling. One of the essential characteristics of HAND is its ability to capture heterogeneities in local environments that are difficult to measure or model otherwise. While many applications of HAND have been published in the academic literature, no studies have analyzed its variability on a global scale, especially using higher-resolution DEMs, such as the new one-arc-second (approximately 30 m) resolution version of SRTM. In this work, we present the first global version of HAND, computed using a mosaic of two DEMs: 30 m SRTM and the Viewfinderpanorama DEM (90 m). The lower-resolution DEM was used to cover latitudes above 60 degrees north and below 56 degrees south, where SRTM is not available. We compute HAND using the unmodified version of the input DEMs to ensure consistency with the original elevation model. We parallelized processing by generating a homogenized, equal-area version of the HydroBASINS catchments; the resulting catchment boundaries were used to perform processing on the 30 m resolution DEM. To compute HAND, a new version of the D8 local drainage directions as well as the flow accumulation were calculated; the latter was used to estimate river heads by incorporating fixed and variable thresholding methods. The resulting HAND dataset was analyzed in terms of its spatial variability and used to assess the global distribution of the main landform types: valley, ecotone, slope, and plateau. The method used to compute HAND was implemented with the PCRaster software on the Google Compute Engine platform under Ubuntu Linux. Google Earth Engine was used to perform mosaicking and clipping of the original DEMs as well as to provide access to the final product. The effort took about three months of computing time on an eight-core-CPU virtual machine.
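Conceptually, HAND follows the D8 flow directions from every cell until the drainage network is reached, then subtracts that drainage cell's elevation. A minimal Python sketch of this definition (flattened arrays and illustrative names; the actual processing used PCRaster):

```python
import numpy as np

def compute_hand(dem, downstream, is_drainage):
    """dem: cell elevations; downstream[i]: index of the D8 downstream
    neighbor of cell i (-1 at outlets); is_drainage: boolean drainage mask.
    Returns elevation above the nearest drainage cell along the flow path."""
    hand = np.full(dem.shape, np.nan)
    for i in range(dem.size):
        j = i
        while j != -1 and not is_drainage[j]:
            j = downstream[j]                 # walk down the flow path
        if j != -1:
            hand[i] = dem[i] - dem[j]         # normalize by drainage elevation
    return hand
```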
ERIC Educational Resources Information Center
Falleur, David M.
This presentation describes SuperPILOT, an extended version of Apple PILOT, a programming language for developing computer-assisted instruction (CAI) with the Apple II computer that includes the features of its early PILOT (Programmed Inquiry, Learning or Teaching) ancestors together with new features that make use of the Apple computer's advanced…
How to Create, Modify, and Interface Aspen In-House and User Databanks for System Configuration 1:
DOE Office of Scientific and Technical Information (OSTI.GOV)
Camp, D W
2000-10-27
The goal of this document is to provide detailed instructions to create, modify, interface, and test Aspen User and In-House databanks with minimal frustration. The level of instruction is aimed at a novice Aspen Plus simulation user who is neither a programming nor a computer-system expert. The instructions are tailored to Version 10.1 of Aspen Plus and the specific computing configuration summarized in the title of this document and detailed in Section 2. Many details of setting up databanks depend on the specifics of the computing environment, such as the machines, operating systems, command languages, directory structures, inter-computer communications software, the version of the Aspen Engine and Graphical User Interface (GUI), and the directory structure into which these were installed.
NASA Technical Reports Server (NTRS)
Smith, M. E.; Gevins, A.; Brown, H.; Karnik, A.; Du, R.
2001-01-01
Electroencephalographic (EEG) recordings were made while 16 participants performed versions of a personal-computer-based flight simulation task of low, moderate, or high difficulty. As task difficulty increased, frontal midline theta EEG activity increased and alpha band activity decreased. A participant-specific function that combined multiple EEG features to create a single load index was derived from a sample of each participant's data and then applied to new test data from that participant. Index values were computed for every 4 s of task data. Across participants, mean task load index values increased systematically with increasing task difficulty and differed significantly between the different task versions. Actual or potential applications of this research include the use of multivariate EEG-based methods to monitor task loading during naturalistic computer-based work.
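One plausible realization of such a participant-specific index function (the study does not give its exact form; this sketch simply combines band-power features with a logistic model fitted per participant, on simulated placeholder data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# placeholder band-power features per 4-s epoch: [frontal theta, alpha]
x_calib = rng.random((200, 2))
y_calib = rng.integers(0, 2, 200)        # 0 = low-load, 1 = high-load epochs

index_fn = LogisticRegression().fit(x_calib, y_calib)
x_test = rng.random((50, 2))             # new epochs from the same participant
load_index = index_fn.predict_proba(x_test)[:, 1]   # one value per 4-s epoch
```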
NASA Astrophysics Data System (ADS)
Iwasaki, Y.; CP-PACS Collaboration
1998-01-01
The CP-PACS project was a five-year plan, formally started in April 1992 and completed in March 1997, to develop a massively parallel computer for carrying out research in computational physics, with primary emphasis on lattice QCD. The initial version of the CP-PACS computer, with a theoretical peak speed of 307 GFLOPS on 1024 processors, was completed in March 1996. The final version, with a peak speed of 614 GFLOPS on 2048 processors, was completed in September 1996 and has been in full operation since October 1996. We describe the architecture, the final specification, the hardware implementation, and the software of the CP-PACS computer. The CP-PACS has been used for hadron spectroscopy production runs since July 1996. The performance for lattice QCD applications and the LINPACK benchmark is given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tucker, T.C.
1980-06-01
The implementation of a version of the Rutherford Laboratory's magnetostatic computer code GFUN3D on the CDC 7600 at the National Magnetic Fusion Energy Computer Center is reported. A new iteration technique that greatly increases the probability of convergence and reduces computation time by about 30% for calculations with nonlinear, ferromagnetic materials is included. The use of GFUN3D on the NMFE network is discussed, and suggestions for future work are presented. Appendix A consists of revisions to the GFUN3D User Guide (published by Rutherford Laboratory( that are necessary to use this version. Appendix B contains input and output for some samplemore » calculations. Appendix C is a detailed discussion of the old and new iteration techniques.« less
FHWA Traffic Noise Model version 1.1 user's guide (Addendum)
DOT National Transportation Integrated Search
2000-09-30
In March 1998, the Federal Highway Administration (FHWA) Office of Natural Environment, released the FHWA Traffic Noise Model (FHWA TNM) Version 1.0, a state-of-the-art computer program for highway traffic noise prediction and analysis. Since then, t...
FHWA Traffic Noise Model user's guide (version 2.5 addendum)
DOT National Transportation Integrated Search
2004-04-30
In March 1998, the Federal Highway Administration (FHWA), Office of Natural and Human Environment, released the FHWA Traffic Noise Model (TNM), Version 1.0, a state-of-the-art computer model for highway traffic noise prediction and analysis. Since th...
NASA Astrophysics Data System (ADS)
Schimeczek, C.; Engel, D.; Wunner, G.
2012-07-01
Our previously published code for calculating energies and bound-bound transitions of medium-Z elements at neutron star magnetic field strengths [D. Engel, M. Klews, G. Wunner, Comput. Phys. Comm. 180 (2009) 302-311] was based on the adiabatic approximation. It assumes a complete decoupling of the (fast) gyration of the electrons under the action of the magnetic field and the (slow) bound motion along the field under the action of the Coulomb forces. For the single-particle orbitals this implied that each is a product of a Landau state and an (unknown) longitudinal wave function whose B-spline coefficients were determined self-consistently by solving the Hartree-Fock equations for the many-electron problem on a finite-element grid. In the present code we go beyond the adiabatic approximation by allowing the transverse part of each orbital to be a superposition of Landau states, while assuming that the longitudinal part can be approximated by the same wave function in each Landau level. Inserting this ansatz into the energy variational principle leads to a system of coupled equations in which the B-spline coefficients depend on the weights of the individual Landau states, and vice versa, and which therefore has to be solved in a doubly self-consistent manner. The extended ansatz takes into account the back-reaction of the Coulomb motion of the electrons along the field direction on their motion in the plane perpendicular to the field, an effect which cannot be captured by the adiabatic approximation. The new code allows for the inclusion of up to 8 Landau levels. This reduces the relative error of energy values as compared to the adiabatic-approximation results by typically a factor of three (1/3 of the original error), and yields accurate results also in regions of lower neutron star magnetic field strengths where the adiabatic approximation fails. Further improvements in the code are a more sophisticated choice of the initial wave functions, which takes into account the shielding of the core potential for outer electrons by inner electrons, and an optimal finite-element decomposition of each individual longitudinal wave function. These measures largely enhance the convergence properties compared to the previous code, and lead to speed-ups by factors of up to two orders of magnitude compared with the implementation of the Hartree-Fock-Roothaan method used by Engel and Wunner in [D. Engel, G. Wunner, Phys. Rev. A 78 (2008) 032515].
New version program summary
Program title: HFFER II
Catalogue identifier: AECC_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECC_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 55 130
No. of bytes in distributed program, including test data, etc.: 293 700
Distribution format: tar.gz
Programming language: Fortran 95
Computer: Cluster of 1-13 HP Compaq dc5750
Operating system: Linux
Has the code been vectorized or parallelized?: Yes, parallelized using MPI directives.
RAM: 1 GByte per node
Classification: 2.1
External routines: MPI/GFortran, LAPACK, BLAS, FMlib (included in the package)
Catalogue identifier of previous version: AECC_v1_0
Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 302
Does the new version supersede the previous version?: Yes
Nature of problem: Quantitative modelings of features observed in the X-ray spectra of isolated magnetic neutron stars are hampered by the lack of sufficiently large and accurate databases for atoms and ions up to the last fusion product, iron, at strong magnetic field strengths. Our code is intended to provide a powerful tool for calculating energies and oscillator strengths of medium-Z atoms and ions at neutron star magnetic field strengths with sufficient accuracy in a routine way to create such databases.
Solution method: The Slater determinants of the atomic wave functions are constructed from single-particle orbitals $\psi_i$ which are products of a wave function in the $z$ direction (the direction of the magnetic field) and an expansion of the wave function perpendicular to the direction of the magnetic field in terms of Landau states,
$\psi_i(\rho,\varphi,z) = P_i(z) \sum_{n=0}^{N_L} t_{in}\,\phi_n^i(\rho,\varphi)$.
The $t_{in}$ are expansion coefficients, and the expansion is cut off at some maximum Landau level quantum number $n = N_L$. In the previous version of the code only the lowest Landau level was included ($N_L = 0$); in the new version $N_L$ can take values of up to 7. As in the previous version of the code, the longitudinal wave functions are expanded in terms of sixth-order B-splines on finite elements on the $z$ axis, with a combination of equidistant and quadratically widening element borders. Both the B-spline expansion coefficients and the Landau weights $t_{in}$ of all orbitals have to be determined in a doubly self-consistent way: for a given set of Landau weights $t_{in}$, the system of linear equations for the B-spline expansion coefficients, which is equivalent to the Hartree-Fock equations for the longitudinal wave functions, is solved numerically. In the second step, for frozen B-spline coefficients, new Landau weights are determined by minimizing the total energy with respect to the Landau expansion coefficients. Both steps require solving non-linear eigenvalue problems of Roothaan type. The procedure is repeated until convergence of both the B-spline coefficients and the Landau weights is achieved.
Reasons for new version: The former version of the code was restricted to the adiabatic approximation, which assumes the quantum dynamics of the electrons in the plane perpendicular to the magnetic field to be fixed in the lowest Landau level, $n = 0$. This approximation is valid only if the magnetic field strength is large compared to the reference magnetic field $B_Z = Z^2 \cdot 4.70108 \times 10^5\,\mathrm{T}$ for a nuclear charge $Z$.
Summary of revisions: In the new version, the transverse parts of the orbitals are expanded in terms of Landau states up to $n = 7$, and the expansion coefficients are determined, together with the longitudinal wave functions, in a doubly self-consistent way. Thus the back-reaction of the quantum dynamics along the magnetic field direction on the quantum dynamics in the plane perpendicular to it is taken into account. The new ansatz not only increases the accuracy of the results for energy values and transition strengths obtained so far, but also allows their calculation for magnetic field strengths down to $B \gtrsim B_Z$, where the adiabatic approximation fails.
Restrictions: Intense magnetic field strengths are required, since the expansion of the transverse single-particle wave functions using 8 Landau levels will no longer produce accurate results if the scaled magnetic field strength parameter $\beta_Z = B/B_Z$ becomes much smaller than unity.
Unusual features: A huge program speed-up is achieved by making use of pre-calculated binary files. These can be calculated with additional programs provided with this package. Running time: 1-30 min.
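The doubly self-consistent procedure described above is, in outline, a block-coordinate iteration: solve for one set of coefficients with the other frozen, then swap, and repeat until neither changes. The following Python sketch illustrates only that control flow on a toy quadratic energy; the matrices A, B, C, the vector b, and the normalization step are invented stand-ins, not the Hartree-Fock matrix elements of HFFER II.

```python
import numpy as np

# Toy model of the doubly self-consistent loop: c plays the role of the
# B-spline coefficients, t the Landau weights; E(c, t) is a stand-in for
# the Hartree-Fock energy functional (all matrices are hypothetical).
rng = np.random.default_rng(0)
n_spline, n_landau = 12, 8
A = 4 * np.eye(n_spline) + 0.1 * rng.standard_normal((n_spline, n_spline))
A = (A + A.T) / 2                          # symmetric positive definite
B = 0.2 * rng.standard_normal((n_spline, n_landau))
C = 3 * np.eye(n_landau)
b = rng.standard_normal(n_spline)

def energy(c, t):
    return 0.5 * c @ A @ c + c @ B @ t + 0.5 * t @ C @ t - b @ c

c = np.zeros(n_spline)
t = np.zeros(n_landau)
t[0] = 1.0                                 # adiabatic start: lowest Landau level only

for it in range(200):
    c_new = np.linalg.solve(A, b - B @ t)     # step 1: longitudinal coefficients at fixed weights
    t_new = np.linalg.solve(C, -B.T @ c_new)  # step 2: weights minimizing E at fixed c
    t_new /= np.linalg.norm(t_new)            # weights stay normalized, as in the physical problem
    if max(abs(c_new - c).max(), abs(t_new - t).max()) < 1e-12:
        c, t = c_new, t_new
        break
    c, t = c_new, t_new

print(f"stopped after {it + 1} sweeps, E = {energy(c, t):.8f}")
```

In the real code each of the two steps is itself a non-linear Roothaan-type eigenvalue problem rather than a single linear solve, but the outer convergence logic is the same.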
ARC2D - EFFICIENT SOLUTION METHODS FOR THE NAVIER-STOKES EQUATIONS (DEC RISC ULTRIX VERSION)
NASA Technical Reports Server (NTRS)
Biyabani, S. R.
1994-01-01
ARC2D is a computational fluid dynamics program developed at the NASA Ames Research Center specifically for airfoil computations. The program uses implicit finite-difference techniques to solve two-dimensional Euler equations and thin layer Navier-Stokes equations. It is based on the Beam and Warming implicit approximate factorization algorithm in generalized coordinates. The methods are either time accurate or accelerated non-time accurate steady state schemes. The evolution of the solution through time is physically realistic; good solution accuracy is dependent on mesh spacing and boundary conditions. The mathematical development of ARC2D begins with the strong conservation law form of the two-dimensional Navier-Stokes equations in Cartesian coordinates, which admits shock capturing. The Navier-Stokes equations can be transformed from Cartesian coordinates to generalized curvilinear coordinates in a manner that permits one computational code to serve a wide variety of physical geometries and grid systems. ARC2D includes an algebraic mixing length model to approximate the effect of turbulence. In cases of high Reynolds number viscous flows, thin layer approximation can be applied. ARC2D allows for a variety of solutions to stability boundaries, such as those encountered in flows with shocks. The user has considerable flexibility in assigning geometry and developing grid patterns, as well as in assigning boundary conditions. However, the ARC2D model is most appropriate for attached and mildly separated boundary layers; no attempt is made to model wake regions and widely separated flows. The techniques have been successfully used for a variety of inviscid and viscous flowfield calculations. The Cray version of ARC2D is written in FORTRAN 77 for use on Cray series computers and requires approximately 5Mb memory. The program is fully vectorized. The tape includes variations for the COS and UNICOS operating systems. Also included is a sample routine for CONVEX computers to emulate Cray system time calls, which should be easy to modify for other machines as well. The standard distribution media for this version is a 9-track 1600 BPI ASCII Card Image format magnetic tape. The Cray version was developed in 1987. The IBM ES/3090 version is an IBM port of the Cray version. It is written in IBM VS FORTRAN and has the capability of executing in both vector and parallel modes on the MVS/XA operating system and in vector mode on the VM/XA operating system. Various options of the IBM VS FORTRAN compiler provide new features for the ES/3090 version, including 64-bit arithmetic and up to 2 GB of virtual addressability. The IBM ES/3090 version is available only as a 9-track, 1600 BPI IBM IEBCOPY format magnetic tape. The IBM ES/3090 version was developed in 1989. The DEC RISC ULTRIX version is a DEC port of the Cray version. It is written in FORTRAN 77 for RISC-based Digital Equipment platforms. The memory requirement is approximately 7Mb of main memory. It is available in UNIX tar format on TK50 tape cartridge. The port to DEC RISC ULTRIX was done in 1990. COS and UNICOS are trademarks and Cray is a registered trademark of Cray Research, Inc. IBM, ES/3090, VS FORTRAN, MVS/XA, and VM/XA are registered trademarks of International Business Machines. DEC and ULTRIX are registered trademarks of Digital Equipment Corporation.
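The approximate-factorization idea behind the Beam and Warming algorithm is easiest to see on a scalar model problem: the implicit two-dimensional operator is replaced by a product of one-dimensional operators, so each time step reduces to families of tridiagonal solves. The Python sketch below applies such a factored step to a plain diffusion equation; the grid size, time step, and Dirichlet boundaries are illustrative choices, and the scheme is a scalar stand-in, not ARC2D's coupled Navier-Stokes implementation.

```python
import numpy as np
from scipy.linalg import solve_banded

# Factored implicit step for u_t = nu * (u_xx + u_yy) with homogeneous
# Dirichlet boundaries: (I - r/2 dxx)(I - r/2 dyy) replaces the unfactored
# 2-D operator, so each step costs only two families of tridiagonal solves.
nx = ny = 64
h, nu, dt = 1.0 / (nx + 1), 0.1, 1.0e-3
r = nu * dt / h**2

ab = np.zeros((3, nx))                # banded form of (I - r/2 * second difference)
ab[0, 1:] = -r / 2                    # super-diagonal
ab[1, :] = 1 + r                      # main diagonal
ab[2, :-1] = -r / 2                   # sub-diagonal

def explicit_half(u, axis):
    """Apply (I + r/2 * second difference) along one axis (zero boundaries)."""
    u = np.moveaxis(u, axis, 0)
    out = (1 - r) * u
    out[1:] += (r / 2) * u[:-1]
    out[:-1] += (r / 2) * u[1:]
    return np.moveaxis(out, 0, axis)

u = np.zeros((nx, ny))
u[nx // 2, ny // 2] = 1.0             # initial hot spot

for step in range(200):
    rhs = explicit_half(u, axis=1)               # (I + r/2 dyy) u^n
    ustar = solve_banded((1, 1), ab, rhs)        # x-sweep: (I - r/2 dxx) u* = rhs
    rhs = explicit_half(ustar, axis=0)           # (I + r/2 dxx) u*
    u = solve_banded((1, 1), ab, rhs.T).T        # y-sweep: (I - r/2 dyy) u^{n+1} = rhs

print(f"peak after 200 steps: {u.max():.4e}")
```

The factorization error is of the same order as the time-discretization error, which is why the factored scheme retains the accuracy of the unfactored implicit method while avoiding a large sparse 2-D solve.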
ARC2D - EFFICIENT SOLUTION METHODS FOR THE NAVIER-STOKES EQUATIONS (CRAY VERSION)
NASA Technical Reports Server (NTRS)
Pulliam, T. H.
1994-01-01
ARC2D is a computational fluid dynamics program developed at the NASA Ames Research Center specifically for airfoil computations. The program uses implicit finite-difference techniques to solve two-dimensional Euler equations and thin layer Navier-Stokes equations. It is based on the Beam and Warming implicit approximate factorization algorithm in generalized coordinates. The methods are either time accurate or accelerated non-time accurate steady state schemes. The evolution of the solution through time is physically realistic; good solution accuracy is dependent on mesh spacing and boundary conditions. The mathematical development of ARC2D begins with the strong conservation law form of the two-dimensional Navier-Stokes equations in Cartesian coordinates, which admits shock capturing. The Navier-Stokes equations can be transformed from Cartesian coordinates to generalized curvilinear coordinates in a manner that permits one computational code to serve a wide variety of physical geometries and grid systems. ARC2D includes an algebraic mixing length model to approximate the effect of turbulence. In cases of high Reynolds number viscous flows, thin layer approximation can be applied. ARC2D allows for a variety of solutions to stability boundaries, such as those encountered in flows with shocks. The user has considerable flexibility in assigning geometry and developing grid patterns, as well as in assigning boundary conditions. However, the ARC2D model is most appropriate for attached and mildly separated boundary layers; no attempt is made to model wake regions and widely separated flows. The techniques have been successfully used for a variety of inviscid and viscous flowfield calculations. The Cray version of ARC2D is written in FORTRAN 77 for use on Cray series computers and requires approximately 5Mb memory. The program is fully vectorized. The tape includes variations for the COS and UNICOS operating systems. Also included is a sample routine for CONVEX computers to emulate Cray system time calls, which should be easy to modify for other machines as well. The standard distribution media for this version is a 9-track 1600 BPI ASCII Card Image format magnetic tape. The Cray version was developed in 1987. The IBM ES/3090 version is an IBM port of the Cray version. It is written in IBM VS FORTRAN and has the capability of executing in both vector and parallel modes on the MVS/XA operating system and in vector mode on the VM/XA operating system. Various options of the IBM VS FORTRAN compiler provide new features for the ES/3090 version, including 64-bit arithmetic and up to 2 GB of virtual addressability. The IBM ES/3090 version is available only as a 9-track, 1600 BPI IBM IEBCOPY format magnetic tape. The IBM ES/3090 version was developed in 1989. The DEC RISC ULTRIX version is a DEC port of the Cray version. It is written in FORTRAN 77 for RISC-based Digital Equipment platforms. The memory requirement is approximately 7Mb of main memory. It is available in UNIX tar format on TK50 tape cartridge. The port to DEC RISC ULTRIX was done in 1990. COS and UNICOS are trademarks and Cray is a registered trademark of Cray Research, Inc. IBM, ES/3090, VS FORTRAN, MVS/XA, and VM/XA are registered trademarks of International Business Machines. DEC and ULTRIX are registered trademarks of Digital Equipment Corporation.
OpenMP GNU and Intel Fortran programs for solving the time-dependent Gross-Pitaevskii equation
NASA Astrophysics Data System (ADS)
Young-S., Luis E.; Muruganandam, Paulsamy; Adhikari, Sadhan K.; Lončar, Vladimir; Vudragović, Dušan; Balaž, Antun
2017-11-01
We present an Open Multi-Processing (OpenMP) version of Fortran 90 programs for solving the Gross-Pitaevskii (GP) equation for a Bose-Einstein condensate in one, two, and three spatial dimensions, optimized for use with GNU and Intel compilers. We use the split-step Crank-Nicolson algorithm for imaginary- and real-time propagation, which enables efficient calculation of stationary and non-stationary solutions, respectively. The present OpenMP programs are designed for computers with multi-core processors and optimized for compiling with both the commercially licensed Intel Fortran compiler and the popular free open-source GNU Fortran compiler. The programs are easy to use and are elaborated with helpful comments for the users. All input parameters are listed at the beginning of each program. Different output files provide physical quantities such as energy, chemical potential, root-mean-square sizes, densities, etc. We also present speedup test results for the new versions of the programs. Program files doi: http://dx.doi.org/10.17632/y8zk3jgn84.2. Licensing provisions: Apache License 2.0. Programming language: OpenMP GNU and Intel Fortran 90. Computer: Any multi-core personal computer or workstation with the appropriate OpenMP-capable Fortran compiler installed. Number of processors used: All available CPU cores on the executing computer. Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 1888; ibid. 204 (2016) 209. Does the new version supersede the previous version?: Not completely. It does supersede the previous Fortran programs from both references above, but not the OpenMP C programs from Comput. Phys. Commun. 204 (2016) 209. Nature of problem: The present Open Multi-Processing (OpenMP) Fortran programs, optimized for use with the commercially licensed Intel Fortran and free open-source GNU Fortran compilers, solve the time-dependent nonlinear partial differential (GP) equation for a trapped Bose-Einstein condensate in one (1d), two (2d), and three (3d) spatial dimensions for six different trap symmetries: axially and radially symmetric traps in 3d, circularly symmetric traps in 2d, fully isotropic (spherically symmetric) and fully anisotropic traps in 2d and 3d, as well as 1d traps, where no spatial symmetry is considered. Solution method: We employ the split-step Crank-Nicolson algorithm to discretize the time-dependent GP equation in space and time. The discretized equation is then solved by imaginary- or real-time propagation, employing adequately small space and time steps, to yield the solution of stationary and non-stationary problems, respectively. Reasons for the new version: Previously published Fortran programs [1,2] have now become popular tools [3] for solving the GP equation. These programs have been translated to the C programming language [4] and later extended to the more complex scenario of dipolar atoms [5]. Now virtually all computers have multi-core processors, and some have motherboards with more than one physical central processing unit (CPU), which may increase the number of available CPU cores on a single computer to several tens. The C programs have been adapted to run very fast on such modern multi-core computers, on general-purpose graphics processing units (GPGPU) with Nvidia CUDA, and on computer clusters using the Message Passing Interface (MPI) [6]. Nevertheless, previously developed Fortran programs are also commonly used for scientific computation, and most of them use a single CPU core at a time on modern multi-core laptops, desktops, and workstations.
Unless the Fortran programs are made aware of and capable of making efficient use of the available CPU cores, the solution of even a realistic dynamical 1d problem, not to mention the more complicated 2d and 3d problems, could be time consuming. Previously, we published auto-parallel Fortran programs [2] suitable for the Intel (but not GNU) compiler for solving the GP equation. Hence, the need for fully OpenMP versions of the Fortran programs to reduce the execution time cannot be overemphasized. To address this issue, we provide here such OpenMP Fortran programs, optimized for both Intel and GNU Fortran compilers and capable of using all available CPU cores, which can significantly reduce the execution time. Summary of revisions: Previous Fortran programs [1] for solving the time-dependent GP equation in 1d, 2d, and 3d with different trap symmetries have been parallelized using the OpenMP interface to reduce the execution time on multi-core processors. Six different trap symmetries are considered, resulting in six programs for imaginary-time propagation and six for real-time propagation, totaling 12 programs included in the BEC-GP-OMP-FOR software package. All input data (number of atoms, scattering length, harmonic oscillator trap length, trap anisotropy, etc.) are conveniently placed at the beginning of each program, as before [2]. The present programs introduce a new input parameter, Number_of_Threads, which defines the number of CPU cores to be used in the calculation. If this parameter is set to 0, all available CPU cores are used. For the most efficient calculation it is advisable to leave one CPU core unused for the system's background jobs. For example, on a machine with 20 CPU cores, such as the one we used for testing, it is advisable to use up to 19 CPU cores. The total number of used CPU cores can, however, be divided among more than one job. For instance, one can run three simulations simultaneously using 10, 4, and 5 CPU cores, respectively, totaling 19 used CPU cores on a 20-core computer. The Fortran source programs are located in the directory src, and can be compiled by the make command using the makefile in the root directory BEC-GP-OMP-FOR of the software package. Examples of produced output files can be found in the directory output, although some large density files are omitted to save space. The programs calculate the values of the actually used dimensionless nonlinearities from the physical input parameters, where the input parameters correspond to the same nonlinearity values as in the previously published programs [1], so that the output files of the old and new programs can be directly compared. The output files are conveniently named so that their contents can be easily identified, following the naming convention introduced in Ref. [2]. For example, a file named <code>-out.txt, where <code> stands for the name of the individual program, is the general output file containing input data, time and space steps, nonlinearity, energy, and chemical potential; it was named fort.7 in the old Fortran version of the programs [1]. A file named <code>-den.txt is the output file with the condensate density, which had the names fort.3 and fort.4 in the old Fortran version [1] for the imaginary- and real-time propagation programs, respectively.
Other possible density outputs, such as the initial density, are commented out in the programs to have a simpler set of output files, but users can uncomment and re-enable them, if needed. In addition, there are output files for reduced (integrated) 1d and 2d densities for different programs. In the real-time programs there is also an output file reporting the dynamics of evolution of root-mean-square sizes after a perturbation is introduced. The supplied real-time programs solve the stationary GP equation, and then calculate the dynamics. As the imaginary-time programs are more accurate than the real-time programs for the solution of a stationary problem, one can first solve the stationary problem using the imaginary-time programs, adapt the real-time programs to read the pre-calculated wave function and then study the dynamics. In that case the parameter NSTP in the real-time programs should be set to zero and the space mesh and nonlinearity parameters should be identical in both programs. The reader is advised to consult our previous publication where a complete description of the output files is given [2]. A readme.txt file, included in the root directory, explains the procedure to compile and run the programs. We tested our programs on a workstation with two 10-core Intel Xeon E5-2650 v3 CPUs. The parameters used for testing are given in sample input files, provided in the corresponding directory together with the programs. In Table 1 we present wall-clock execution times for runs on 1, 6, and 19 CPU cores for programs compiled using Intel and GNU Fortran compilers. The corresponding columns "Intel speedup" and "GNU speedup" give the ratio of wall-clock execution times of runs on 1 and 19 CPU cores, and denote the actual measured speedup for 19 CPU cores. In all cases and for all numbers of CPU cores, although the GNU Fortran compiler gives excellent results, the Intel Fortran compiler turns out to be slightly faster. Note that during these tests we always ran only a single simulation on a workstation at a time, to avoid any possible interference issues. Therefore, the obtained wall-clock times are more reliable than the ones that could be measured with two or more jobs running simultaneously. We also studied the speedup of the programs as a function of the number of CPU cores used. The performance of the Intel and GNU Fortran compilers is illustrated in Fig. 1, where we plot the speedup and actual wall-clock times as functions of the number of CPU cores for 2d and 3d programs. We see that the speedup increases monotonically with the number of CPU cores in all cases and has large values (between 10 and 14 for 3d programs) for the maximal number of cores. This fully justifies the development of OpenMP programs, which enable much faster and more efficient solving of the GP equation. However, a slow saturation in the speedup with the further increase in the number of CPU cores is observed in all cases, as expected. The speedup tends to increase for programs in higher dimensions, as they become more complex and have to process more data. This is why the speedups of the supplied 2d and 3d programs are larger than those of 1d programs. Also, for a single program the speedup increases with the size of the spatial grid, i.e., with the number of spatial discretization points, since this increases the amount of calculations performed by the program. To demonstrate this, we tested the supplied real2d-th program and varied the number of spatial discretization points NX=NY from 20 to 1000. 
The measured speedup obtained when running this program on 19 CPU cores as a function of the number of discretization points is shown in Fig. 2. The speedup first increases rapidly with the number of discretization points and eventually saturates. Additional comments: Example inputs provided with the programs take less than 30 minutes to run on a workstation with two Intel Xeon E5-2650 v3 processors (2 QPI links, 10 CPU cores, 25 MB cache, 2.3 GHz).
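As a complement to the Fortran package, the essence of the split-step Crank-Nicolson method is shown in the short Python sketch below, which performs imaginary-time propagation of the dimensionless 1d GP equation in a harmonic trap. The grid, the time step, and the nonlinearity g = 10 are illustrative values, and a simple first-order splitting is used rather than the exact arrangement of the published codes.

```python
import numpy as np
from scipy.linalg import solve_banded

# Imaginary-time split-step Crank-Nicolson for the dimensionless 1d GP
# equation u_t = (1/2) u_xx - V(x) u - g |u|^2 u, renormalizing after each
# step because imaginary-time propagation does not conserve the norm.
n, L = 512, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx, dt, g = x[1] - x[0], 1.0e-3, 10.0     # illustrative values
V = 0.5 * x**2                            # harmonic trap

r = dt / (4 * dx**2)                      # Crank-Nicolson for H0 = -(1/2) d2/dx2
ab = np.zeros((3, n))
ab[0, 1:] = -r
ab[1, :] = 1 + 2 * r
ab[2, :-1] = -r

u = np.exp(-(x**2) / 2)                   # Gaussian initial guess
u /= np.sqrt(np.sum(u**2) * dx)

for step in range(5000):
    u = u * np.exp(-dt * (V + g * u**2))  # potential + nonlinear part (pointwise)
    rhs = (1 - 2 * r) * u                 # (I - dt/2 H0) u
    rhs[1:] += r * u[:-1]
    rhs[:-1] += r * u[1:]
    u = solve_banded((1, 1), ab, rhs)     # (I + dt/2 H0) u_new = rhs
    u /= np.sqrt(np.sum(u**2) * dx)       # restore unit norm

du = np.gradient(u, dx)
mu = np.sum(0.5 * du**2 + V * u**2 + g * u**4) * dx
print(f"chemical potential ~ {mu:.3f}")   # close to the Thomas-Fermi value
                                          # (3g/(4*sqrt(2)))**(2/3) ~ 3.0 for g = 10
```

Convergence of the chemical potential from one step to the next signals that the ground state has been reached, which is the same stopping criterion one would apply with the Fortran programs.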
Adjusting for cross-cultural differences in computer-adaptive tests of quality of life.
Gibbons, C J; Skevington, S M
2018-04-01
Previous studies using the WHOQOL measures have demonstrated that the relationship between individual items and the underlying quality of life (QoL) construct may differ between cultures. If unaccounted for, these differing relationships can lead to measurement bias which, in turn, can undermine the reliability of results. We used item response theory (IRT) to assess differential item functioning (DIF) in WHOQOL data from diverse language versions collected in the UK, Zimbabwe, Russia, and India (total N = 1332). Data were fitted to the partial credit 'Rasch' model. We used four item banks previously derived from the WHOQOL-100 measure, which provided excellent measurement for the physical, psychological, social, and environmental quality of life domains (40 items overall). Cross-cultural differential item functioning was assessed using analysis of variance for item residuals and post hoc Tukey tests. Simulated computer-adaptive tests (CATs) were conducted to assess the efficiency and precision of the four item banks. Splitting item parameters by DIF resulted in four linked item banks without DIF or other breaches of IRT model assumptions. Simulated CATs were more precise and efficient than longer paper-based alternatives. Assessing differential item functioning using item response theory can identify violations of measurement invariance between cultures which, if uncontrolled, may undermine accurate comparisons in computer-adaptive testing assessments of QoL. We demonstrate how compensating for DIF using item anchoring allowed data from all four countries to be compared on a common metric, thus facilitating assessments which were both sensitive to cultural nuance and comparable between countries.
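The mechanics of such a simulated CAT are compact: at each step the item with the greatest Fisher information at the current ability estimate is administered, and the estimate is updated from the accumulated responses. The Python sketch below does this for a dichotomous Rasch model rather than the partial credit model used in the study, with an invented item bank and a standard normal prior; it illustrates the idea, not the authors' software.

```python
import numpy as np

# Minimal CAT loop under a dichotomous Rasch model: administer the item with
# maximum information at the current theta, then update a MAP estimate of
# theta (the standard normal prior keeps the Newton update well-defined).
rng = np.random.default_rng(1)
bank = rng.normal(0.0, 1.0, 40)            # hypothetical item difficulties
true_theta = 0.7                           # simulee's latent QoL level

def p(theta, b):                           # Rasch probability of endorsing an item
    return 1.0 / (1.0 + np.exp(-(theta - b)))

theta, asked, resp = 0.0, [], []
for _ in range(15):
    info = p(theta, bank) * (1 - p(theta, bank))   # Fisher information per item
    info[asked] = -np.inf                  # never reuse an item
    j = int(np.argmax(info))
    asked.append(j)
    resp.append(rng.random() < p(true_theta, bank[j]))
    for _ in range(10):                    # Newton steps for the MAP estimate
        pi = p(theta, bank[asked])
        grad = np.sum(np.array(resp) - pi) - theta
        hess = -np.sum(pi * (1 - pi)) - 1.0
        theta -= grad / hess
    se = 1.0 / np.sqrt(np.sum(pi * (1 - pi)) + 1.0)

print(f"theta = {theta:.2f} +/- {se:.2f} after {len(asked)} items (true {true_theta})")
```

The same loop carries over to the partial credit model by replacing the response probability and information formulas with their polytomous counterparts, and item anchoring for DIF simply means using country-specific difficulty parameters for flagged items.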
Maxis-A rezoning and remapping code in two dimensional cylindrical geometry
NASA Astrophysics Data System (ADS)
Lin, Zhiwei; Jiang, Shaoen; Zhang, Lu; Kuang, Longyu; Li, Hang
2018-06-01
This paper presents the new version of our code Maxis (Lin et al., 2011). Maxis is a local rezoning and remapping code in two-dimensional cylindrical geometry, which can be employed to address the grid distortion problem of unstructured meshes. The new version of Maxis is mostly programmed in the C language, which considerably improves its computational efficiency with respect to the former Matlab version. A new algorithm for determining the intersection of two arbitrary convex polygons is also incorporated into the new version. Additional linking functions are further provided for the purpose of combining Maxis with MULTI2D.
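The paper's own intersection algorithm is not reproduced here, but the task it addresses, finding the overlap of two convex cells during remapping, can be sketched with the classical Sutherland-Hodgman clipping method. The Python sketch below assumes counter-clockwise vertex lists; the example polygons are invented.

```python
# Sutherland-Hodgman clipping: a standard way to intersect two convex
# polygons of the kind a remapping step must overlap (sketch only; the
# paper's own algorithm may differ).
def clip(subject, clipper):
    """Intersect two convex polygons given as CCW lists of (x, y) vertices."""
    def inside(p, a, b):        # p on the left of (or on) directed edge a->b
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def cross_point(p, q, a, b):  # segment p->q with the infinite line a->b
        dpx, dpy = q[0] - p[0], q[1] - p[1]
        dcx, dcy = b[0] - a[0], b[1] - a[1]
        t = ((a[0] - p[0]) * dcy - (a[1] - p[1]) * dcx) / (dpx * dcy - dpy * dcx)
        return (p[0] + t * dpx, p[1] + t * dpy)

    out = subject
    for i in range(len(clipper)):            # clip against each edge in turn
        a, b = clipper[i], clipper[(i + 1) % len(clipper)]
        inp, out = out, []
        for j in range(len(inp)):
            p, q = inp[j], inp[(j + 1) % len(inp)]
            if inside(q, a, b):
                if not inside(p, a, b):
                    out.append(cross_point(p, q, a, b))
                out.append(q)
            elif inside(p, a, b):
                out.append(cross_point(p, q, a, b))
        if not out:
            return []                        # the polygons are disjoint
    return out

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
tri = [(1, 1), (3, 1), (3, 3)]
print(clip(square, tri))   # vertices of the triangular overlap region
```

In a remapping code this overlap area is what transfers conserved quantities from an old cell to a new one, so the intersection routine sits in the innermost loop and its efficiency matters, which is one motivation for the C rewrite.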
NASA Technical Reports Server (NTRS)
Cullimore, B.
1994-01-01
SINDA, the Systems Improved Numerical Differencing Analyzer, is a software system for solving lumped parameter representations of physical problems governed by diffusion-type equations. SINDA was originally designed for analyzing thermal systems represented in electrical analog, lumped parameter form, although its use may be extended to include other classes of physical systems which can be modeled in this form. As a thermal analyzer, SINDA can handle such interrelated phenomena as sublimation, diffuse radiation within enclosures, transport delay effects, and sensitivity analysis. FLUINT, the FLUid INTegrator, is an advanced one-dimensional fluid analysis program that solves arbitrary fluid flow networks. The working fluids can be single phase vapor, single phase liquid, or two phase. The SINDA'85/FLUINT system permits the mutual influences of thermal and fluid problems to be analyzed. The SINDA system consists of a programming language, a preprocessor, and a subroutine library. The SINDA language is designed for working with lumped parameter representations and finite difference solution techniques. The preprocessor accepts programs written in the SINDA language and converts them into standard FORTRAN. The SINDA library consists of a large number of FORTRAN subroutines that perform a variety of commonly needed actions. The use of these subroutines can greatly reduce the programming effort required to solve many problems. A complete run of a SINDA'85/FLUINT model is a four step process. First, the user's desired model is run through the preprocessor, which writes out data files for the processor to read and translates the user's program code. Second, the translated code is compiled. The third step requires linking the user's code with the processor library. Finally, the processor is executed. SINDA'85/FLUINT supports models with up to 20,000 nodes, 100,000 conductors, 100 thermal submodels, and 10 fluid submodels. SINDA'85/FLUINT can also model two phase flow, capillary devices, user defined fluids, gravity and acceleration body forces on a fluid, and variable volumes. SINDA'85/FLUINT offers the following numerical solution techniques: the explicit method uses the forward-difference approximation, and the implicit method uses the Crank-Nicolson approximation. The program allows simulation of non-uniform heating and facilitates modeling thin-walled heat exchangers. The ability to model non-equilibrium behavior within two-phase volumes is included. Recent improvements to the program were made in modeling real evaporator-pumps and other capillary-assist evaporators. SINDA'85/FLUINT is available by license for a period of ten (10) years to approved licensees. The licensed program product includes the source code and one copy of the supporting documentation. Additional copies of the documentation may be purchased separately at any time. SINDA'85/FLUINT is written in FORTRAN 77. Version 2.3 has been implemented on Cray series computers running UNICOS, CONVEX computers running CONVEX OS, and DEC RISC computers running ULTRIX. Binaries are included with the Cray version only. The Cray version of SINDA'85/FLUINT also contains SINGE, an additional graphics program developed at Johnson Space Center. Both source and executable code are provided for SINGE. Users wishing to create their own SINGE executable will also need the NASA Device Independent Graphics Library (NASADIG, previously known as SMDDIG; UNIX version, MSC-22001).
The Cray and CONVEX versions of SINDA'85/FLUINT are available on 9-track 1600 BPI UNIX tar format magnetic tapes. The CONVEX version is also available on a .25 inch streaming magnetic tape cartridge in UNIX tar format. The DEC RISC ULTRIX version is available on a TK50 magnetic tape cartridge in UNIX tar format. SINDA was developed in 1971, and first had fluid capability added in 1975. SINDA'85/FLUINT version 2.3 was released in 1990.
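The lumped-parameter formulation that SINDA solves can be illustrated in a few lines: nodes carry capacitances and heat loads, conductors exchange heat between node pairs, and the forward-difference explicit method marches the temperatures with a time step bounded by the smallest C/ΣG in the network. Everything in the Python sketch below (node values, conductances, the fixed boundary node) is an invented three-node example, not SINDA itself.

```python
import numpy as np

# Three-node thermal network: 2 W enters node 0 and flows through two
# conductors to boundary node 2, which is held at 300 K. Forward-difference
# explicit update, the simplest of the solution techniques named above.
C = np.array([10.0, 5.0, 5.0])             # capacitances, J/K (hypothetical)
Q = np.array([2.0, 0.0, 0.0])              # applied heat loads, W
conductors = [(0, 1, 0.5), (1, 2, 0.8)]    # (node i, node j, G in W/K)
T = np.full(3, 300.0)                      # initial temperatures, K

G_sum = np.zeros(3)
for i, j, G in conductors:
    G_sum[i] += G
    G_sum[j] += G
dt = 0.9 * np.min(C / G_sum)               # stability bound for the explicit method

for _ in range(50000):
    flow = Q.copy()                        # net heat flow into each node, W
    for i, j, G in conductors:
        q = G * (T[j] - T[i])
        flow[i] += q
        flow[j] -= q
    T = T + dt * flow / C                  # forward-difference temperature update
    T[2] = 300.0                           # boundary node: temperature prescribed

print(T)   # steady state [306.5, 302.5, 300.0]: each drop is q/G along the chain
```

An implicit Crank-Nicolson update of the same network would replace the explicit march with a linear solve per step, trading work per step for freedom from the stability bound, which is the choice SINDA'85/FLUINT exposes to the user.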
Highway Fuel Consumption Computer Model (Version 1)
DOT National Transportation Integrated Search
1974-04-01
A highway fuel consumption computer model is given. The model allows the computation of fuel consumption of a highway vehicle class as a function of time. The model is of the initial value (in this case initial inventory) and lumped parameter type. P...
REMOTE: Modem Communicator Program for the IBM personal computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGirt, F.
1984-06-01
REMOTE, a Modem Communicator Program, was developed to provide full duplex serial communication with arbitrary remote computers via either dial-up telephone modems or direct lines. The latest version of REMOTE (documented in this report) was developed for the IBM Personal Computer.
Development and Evaluation of a Chinese Version of the Questionnaire on Teacher Interaction (QTI)
ERIC Educational Resources Information Center
Sun, Xiaojing; Mainhard, Tim; Wubbels, Theo
2018-01-01
Teacher-student interpersonal relationships play an important role in education. The Questionnaire on Teacher Interaction (QTI) was designed to measure students' interpersonal perceptions of their teachers. There are two Chinese versions of the QTI for student use, which inherited the weaknesses of the previous English versions, such as items…
FLEXAN (version 2.0) user's guide
NASA Technical Reports Server (NTRS)
Stallcup, Scott S.
1989-01-01
The FLEXAN (Flexible Animation) computer program, Version 2.0, is described. FLEXAN animates 3-D wireframe structural dynamics on the Evans and Sutherland PS300 graphics workstation with a VAX/VMS host computer. Animation options include: unconstrained vibrational modes, mode time histories (multiple modes), delta time histories (modal and/or nonmodal deformations), color time histories (elements of the structure change colors through time), and rotational time histories (parts of the structure rotate through time). Concurrent color, mode, delta, and rotation time history animations are supported. FLEXAN does not model structures or calculate the dynamics of structures; it only animates data from other computer programs. FLEXAN was developed to aid in the study of the structural dynamics of spacecraft.
Computer aided systems human engineering: A hypermedia tool
NASA Technical Reports Server (NTRS)
Boff, Kenneth R.; Monk, Donald L.; Cody, William J.
1992-01-01
The Computer Aided Systems Human Engineering (CASHE) system, Version 1.0, is a multimedia ergonomics database on CD-ROM for the Apple Macintosh II computer, being developed for use by human system designers, educators, and researchers. It will initially be available on CD-ROM and will allow users to access ergonomics data and models stored electronically as text, graphics, and audio. The CASHE CD-ROM, Version 1.0 will contain the Boff and Lincoln (1988) Engineering Data Compendium, MIL-STD-1472D and a unique, interactive simulation capability, the Perception and Performance Prototyper. Its features also include a specialized data retrieval, scaling, and analysis capability and the state of the art in information retrieval, browsing, and navigation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kendon, Viv
2014-12-04
Quantum versions of random walks have diverse applications that are motivating experimental implementations as well as theoretical studies. Recent results showing quantum walks are “universal for quantum computation” relate to algorithms, to be run on quantum computers. We consider whether an experimental implementation of a quantum walk could provide useful computation before we have a universal quantum computer.
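For readers unfamiliar with the object under discussion, a discrete-time coined quantum walk on a line can be simulated classically in a few lines; its position spread grows linearly with the number of steps, in contrast to the square-root spreading of the classical random walk. The Python sketch below is generic and not tied to the paper's analysis.

```python
import numpy as np

# Hadamard-coined quantum walk on a line: the state psi[x, c] ranges over
# position x and a two-level coin c; each step applies the coin operator
# and then a coin-controlled shift.
n = 100
psi = np.zeros((2 * n + 1, 2), dtype=complex)
psi[n] = [1 / np.sqrt(2), 1j / np.sqrt(2)]     # symmetric initial coin state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin

for _ in range(n):
    psi = psi @ H.T                            # coin toss at every position
    shifted = np.zeros_like(psi)
    shifted[1:, 0] = psi[:-1, 0]               # coin |0> moves right
    shifted[:-1, 1] = psi[1:, 1]               # coin |1> moves left
    psi = shifted

x = np.arange(-n, n + 1)
prob = (np.abs(psi) ** 2).sum(axis=1)
sigma = np.sqrt(np.sum(prob * x**2) - np.sum(prob * x) ** 2)
print(f"spread after {n} steps: {sigma:.1f} (classical walk: {np.sqrt(n):.1f})")
```

This ballistic spreading is one of the properties that experimental implementations aim to exhibit, and it underlies several of the algorithmic applications the abstract alludes to.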
Measuring Attitude toward Computers: The Computer Appreciator-Critic Attitude Scales.
ERIC Educational Resources Information Center
Mathews, Walter M.; Wolf, Abraham W.
The purpose of this study was to develop a reliable and valid instrument that conveniently measures a person's attitude toward computers. The final version of the instrument is composed of 40 items on a Likert-type scale which assign scores to subjects on their "appreciative" and "critical" attitude toward computers. The sample…
U.S. Army weapon systems human-computer interface style guide. Version 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avery, L.W.; O'Mara, P.A.; Shepard, A.P.
1997-12-31
A stated goal of the US Army has been the standardization of the human-computer interfaces (HCIs) of its systems. Some of the tools being used to accomplish this standardization are HCI design guidelines and style guides. Currently, the Army is employing a number of HCI design guidance documents. While these style guides provide good guidance for the command, control, communications, computers, and intelligence (C4I) domain, they do not necessarily represent the more unique requirements of the Army's real time and near-real time (RT/NRT) weapon systems. The Office of the Director of Information for Command, Control, Communications, and Computers (DISC4), in conjunction with the Weapon Systems Technical Architecture Working Group (WSTAWG), recognized this need as part of their activities to revise the Army Technical Architecture (ATA), now termed the Joint Technical Architecture-Army (JTA-A). To address this need, DISC4 tasked the Pacific Northwest National Laboratory (PNNL) to develop an Army weapon systems unique HCI style guide, which resulted in the US Army Weapon Systems Human-Computer Interface (WSHCI) Style Guide Version 1. Based on feedback from the user community, DISC4 further tasked PNNL to revise Version 1 and publish Version 2. The intent was to update some of the research and incorporate some enhancements. This document provides that revision. The purpose of this document is to provide HCI design guidance for the RT/NRT Army system domain across the weapon systems subdomains of ground, aviation, missile, and soldier systems. Each subdomain should customize and extend this guidance by developing its own domain-specific style guide, which will be used to guide the development of future systems within that subdomain.
Development of an Aeromedical Scientific Information System for Aviation Safety
2008-01-01
mathematics, engineering, computer hardware, software, and networking, was assembled to glean the most knowledge from the complicated aeromedical... 9, SPlus Enterprise Developer 8, and Insightful Miner version 7. Process flow charts were done with SmartDraw Suite Edition version 7. Static and
NASA Technical Reports Server (NTRS)
Clarke, R.; Lintereur, L.; Bahm, C.
2016-01-01
A desire for more complete documentation of the National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center (AFRC), Edwards, California, legacy code used in the core simulation has led to this effort to fully document the oblate Earth six-degree-of-freedom equations of motion and integration algorithm. The authors of this report have taken much of the earlier work of the simulation engineering group and used it as a jumping-off point for this report. The largest addition this report makes is that each element of the equations of motion is traced back to first principles and at no point is the reader forced to take an equation on faith alone. There are no discoveries of previously unknown principles contained in this report; this report is a collection and presentation of textbook principles. The value of this report is that those textbook principles are herein documented in standard nomenclature that matches the form of the computer code DERIVC. Previous handwritten notes are much of the backbone of this work; however, in almost every area, derivations are explicitly shown to assure the reader that the equations which make up the oblate Earth version of the computer routine, DERIVC, are correct.
GPU-accelerated two dimensional synthetic aperture focusing for photoacoustic microscopy
NASA Astrophysics Data System (ADS)
Liu, Siyu; Feng, Xiaohua; Gao, Fei; Jin, Haoran; Zhang, Ruochong; Luo, Yunqi; Zheng, Yuanjin
2018-02-01
Acoustic resolution photoacoustic microscopy (AR-PAM) generally suffers from a limited depth of focus, which has been extended by synthetic aperture focusing techniques (SAFTs). However, for three-dimensional AR-PAM, the current one-dimensional (1D) SAFT and its improved variants, such as cross-shaped SAFT, do not provide isotropic resolution in the lateral direction, so the full potential of the SAFT remains to be tapped. To this end, a two-dimensional (2D) SAFT with a fast computing architecture is proposed in this work. As explained by geometric modeling and Fourier acoustics theory, 2D-SAFT provides the narrowest post-focusing capability and thus achieves the best lateral resolution. Compared with previous 1D-SAFT techniques, the proposed 2D-SAFT improved the lateral resolution by at least 1.7 times and the signal-to-noise ratio (SNR) by about 10 dB in both simulation and experiments. Moreover, the improved 2D-SAFT algorithm is accelerated by a graphics processing unit, which reduces the long reconstruction time to only a few seconds. The proposed 2D-SAFT is demonstrated to outperform previously reported 1D SAFTs in depth of focus, imaging resolution, and SNR, with fast computational efficiency. This work facilitates future studies on in vivo deeper and high-resolution photoacoustic microscopy beyond several centimeters.
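In outline, delay-and-sum 2D-SAFT sums, for every voxel, the A-lines from a square neighborhood of scan positions at the delays set by the travel time from the voxel to each position. The NumPy sketch below demonstrates that geometry on a synthetic point absorber; the sampling rate, aperture width, and the omission of the virtual-detector correction and GPU acceleration are simplifications relative to the paper.

```python
import numpy as np

# Delay-and-sum 2d synthetic aperture focusing on a toy AR-PAM data set:
# data[jx, jy, it] holds one A-line per scan position on a 2d grid.
nx = ny = 21
nt, c, fs, dx = 300, 1500.0, 50e6, 50e-6   # samples, m/s, Hz, m (assumed values)
data = np.zeros((nx, ny, nt))

z0 = 1.5e-3                                 # point absorber under the grid centre
for jx in range(nx):
    for jy in range(ny):
        r2 = ((jx - 10) * dx) ** 2 + ((jy - 10) * dx) ** 2
        it = int(round(np.sqrt(z0**2 + r2) * fs / c))
        if it < nt:
            data[jx, jy, it] = 1.0

half = 5                                    # synthetic-aperture half-width, scan steps
z = c * np.arange(nt) / fs                  # one-way photoacoustic depth axis
image = np.zeros_like(data)
for ix in range(nx):
    for iy in range(ny):
        for jx in range(max(0, ix - half), min(nx, ix + half + 1)):
            for jy in range(max(0, iy - half), min(ny, iy + half + 1)):
                r2 = ((jx - ix) * dx) ** 2 + ((jy - iy) * dx) ** 2
                idx = np.minimum(np.rint(np.sqrt(z**2 + r2) * fs / c).astype(int),
                                 nt - 1)
                image[ix, iy] += data[jx, jy, idx]   # coherent sum on the delay curve

print(np.unravel_index(image.argmax(), image.shape))  # -> (10, 10, 50): the absorber
```

Because every voxel's sum is independent, the four nested loops map directly onto one GPU thread per voxel, which is the parallelism the paper's accelerated implementation exploits.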
Burghaus, Rolf; Coboeken, Katrin; Gaub, Thomas; Niederalt, Christoph; Sensse, Anke; Siegmund, Hans-Ulrich; Weiss, Wolfgang; Mueck, Wolfgang; Tanigawa, Takahiko; Lippert, Jörg
2014-01-01
The long-lasting anticoagulant effect of vitamin K antagonists can be problematic in cases of adverse drug reactions or when patients are switched to another anticoagulant therapy. The objective of this study was to examine in silico the anticoagulant effect of rivaroxaban, an oral, direct Factor Xa inhibitor, combined with the residual effect of discontinued warfarin. Our simulations were based on the recommended anticoagulant dosing regimen for stroke prevention in patients with atrial fibrillation. The effects of the combination of discontinued warfarin plus rivaroxaban were simulated using an extended version of a previously validated blood coagulation computer model. A strong synergistic effect of the two distinct mechanisms of action was observed in the first 2–3 days after warfarin discontinuation; thereafter, the effect was close to additive. Nomograms for the introduction of rivaroxaban therapy after warfarin discontinuation were derived for Caucasian and Japanese patients using safety and efficacy criteria described previously, together with the coagulation model. The findings of our study provide a mechanistic pharmacologic rationale for dosing schedules during the therapy switch from warfarin to rivaroxaban and support the switching strategies as outlined in the Summary of Product Characteristics and Prescribing Information for rivaroxaban. PMID:25426077
Software fault-tolerance by design diversity DEDIX: A tool for experiments
NASA Technical Reports Server (NTRS)
Avizienis, A.; Gunningberg, P.; Kelly, J. P. J.; Lyu, R. T.; Strigini, L.; Traverse, P. J.; Tso, K. S.; Voges, U.
1986-01-01
The use of multiple versions of a computer program, independently designed from a common specification, to reduce the effects of an error is discussed. If these versions are designed by independent programming teams, it is expected that a fault in one version will not have the same behavior as any fault in the other versions. Since the errors in the output of the versions are different and uncorrelated, it is possible to run the versions concurrently, cross-check their results at prespecified points, and mask errors. A DEsign DIversity eXperiments (DEDIX) testbed was implemented to study the influence of common mode errors which can result in a failure of the entire system. The layered design of DEDIX and its decision algorithm are described.
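The cross-check-and-mask mechanism at the heart of such a testbed reduces to a decision function: collect the versions' answers at a cross-check point, cluster values that agree to within a tolerance, and accept the majority cluster. The toy Python sketch below illustrates only this decision function; DEDIX's layered distributed protocol is far richer.

```python
# Toy majority-vote decision function for an N-version cross-check point.
def decide(results, tol=1e-9):
    """Cluster numerically close results and return (voted value, agreed?)."""
    groups = []
    for r in results:
        for g in groups:
            if abs(r - g[0]) <= tol:       # close enough: same answer
                g.append(r)
                break
        else:
            groups.append([r])             # a new, disagreeing answer
    best = max(groups, key=len)
    return sum(best) / len(best), len(best) > len(results) / 2

# Three independently designed "versions" of the same specification:
versions = [lambda x: x * x,               # design 1: correct
            lambda x: x ** 2,              # design 2: correct, different construction
            lambda x: x * x + 1e-3]        # design 3: harbours a fault
value, agreed = decide([v(3.0) for v in versions])
print(value, agreed)                       # 9.0 True -- the faulty version is masked
```

The scheme only masks errors as long as faults in different versions fail differently; a common-mode error that drives a majority of versions to the same wrong value defeats the vote, which is precisely the failure class the DEDIX experiments were built to study.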
NASA Astrophysics Data System (ADS)
Avellar, J.; Duarte, L. G. S.; da Mota, L. A. C. P.; de Melo, N.; Skea, J. E. F.
2012-09-01
A set of Maple routines is presented, fully compatible with the new releases of Maple (14 and higher). The package deals with the numerical evolution of dynamical systems and provides flexible plotting of the results. The package also provides an initial conditions generator, a numerical solver manager, and a set of focusing routines that allow for better analysis of the graphical display of the results. The novelty of the previous version, an optional C interface, is maintained; this allows for fast numerical integration, even for the totally inexperienced Maple user, without any C expertise being required. Finally, the package provides the routines to calculate the fractal dimension of boundaries (via box counting). New version program summary. Program title: Ndynamics. Licensing provisions: no. Programming language: Maple, C. Computer: Intel(R) Core(TM) i3 CPU M330 @ 2.13 GHz. Operating system: Windows 7. RAM: 3.0 GB. Keywords: Dynamical systems, Box counting, Fractal dimension, Symbolic computation, Differential equations, Maple. Classification: 4.3. Catalogue identifier of previous version: ADKH_v1_0. Journal reference of previous version: Comput. Phys. Commun. 119 (1999) 256. Does the new version supersede the previous version?: Yes. Nature of problem: Computation and plotting of numerical solutions of dynamical systems and the determination of the fractal dimension of the boundaries. Solution method: The default method of integration is a fifth-order Runge-Kutta scheme, but any method of integration present in the Maple system is available via an argument when calling the routine. A box counting [1] method is used to calculate the fractal dimension [2] of the boundaries. Reasons for the new version: The Ndynamics package met a demand of our research community for a flexible and friendly environment for analyzing dynamical systems. All the user has to do is create his/her own Maple session, with the system to be studied, and use the commands in the package to (for instance) calculate the fractal dimension of a certain boundary, without knowing or worrying about a single line of C programming. So the package combines the flexibility and friendliness of Maple with the fast and robust numerical integration of compiled (for example, C) code. The package is old, but the problems it was designed to deal with are still there. Since Maple evolved, the package stopped working, and we felt compelled to produce this version, fully compatible with the latest version of Maple, to make it available to Maple users again. Summary of revisions: Deprecated Maple packages and commands: paraphrasing the Maple built-in help files, "Some Maple commands and packages are deprecated. A command (or package) is deprecated when its functionality has been replaced by an improved implementation. The newer command is said to supersede the older one, and use of the newer command is strongly recommended". We have therefore examined our code for occurrences of deprecated commands that could be dangerous for it. For example, the "readlib" command is unnecessary, and we have removed its occurrences from our code. We have checked and changed all the necessary commands so that the code is safe in this respect. Another change concerns the tools we had implemented for performing the numerical integration in C, externally, via the Maple command "ssystem".
In the past we used the DJGPP system for the external C integration; now we ship the package with the (free) Borland distribution. The compilation commands are slightly changed: for example, to compile only, we had used "gcc -c"; now we use "bcc32 -c", etc. The Borland installation is explained in a "README" file submitted with the package to help the potential user. Restrictions: Besides the inherent restrictions of numerical integration methods, this version of the package only deals with systems of first-order differential equations. Unusual features: This package provides user-friendly software tools for analyzing the character of a dynamical system, whether it displays chaotic behaviour, and so on. Options within the package allow the user to specify characteristics that separate the trajectories into families of curves. In conjunction with the facilities for altering the user's viewpoint, this provides a graphical interface for the speedy and easy identification of regions with interesting dynamics. An unusual characteristic of the package is its interface for performing the numerical integrations in C using a fifth-order Runge-Kutta method (default). This potentially improves the speed of the numerical integration by some orders of magnitude and, in cases where it is necessary to calculate thousands of graphs in regions of difficult integration, this feature is very desirable. Besides that tool, more experienced users can produce their own C integrator and, using the commands available in the package, use it in place of the C integrator provided with the package, as long as the new integrator manages the input and output in the same format as the default one. Running time: This depends strongly on the dynamical system. With an Intel® Core™ i3 CPU M330 @ 2.13 GHz, the integration of 50 graphs, for a system of two first-order equations, typically takes less than a second to run (with the C integration interface). Without the C interface, it takes a few seconds. To calculate the fractal dimension, where we typically use 10,000 points to integrate, the C interface takes 20 to 30 s. Without the C interface the computation becomes impractical, sometimes taking almost an hour for the same case, and for some cases many hours.
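Box counting itself fits in a dozen lines: cover the point set with boxes of side s, count the occupied boxes N(s), and read the dimension off the slope of log N(s) against log(1/s). The Python sketch below applies the method to the Hénon attractor as a stand-in for a basin boundary; it mirrors the algorithm, not the package's Maple/C implementation.

```python
import numpy as np

# Box-counting dimension: the slope of log N(s) versus log(1/s), where N(s)
# is the number of boxes of side s needed to cover the point set.
def box_dimension(points, sizes):
    counts = []
    for s in sizes:
        cells = {tuple(cell) for cell in np.floor(points / s).astype(int)}
        counts.append(len(cells))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sample the Henon attractor as a stand-in for a fractal boundary.
a, b, n = 1.4, 0.3, 20000
pts = np.empty((n, 2))
x = y = 0.0
for i in range(n):
    x, y = 1 - a * x * x + y, b * x
    pts[i] = x, y

# Discard the transient and fit over a range of box sizes.
print(box_dimension(pts[100:], [0.2, 0.1, 0.05, 0.025, 0.0125]))  # ~1.2
```

The cost is dominated by generating the trajectory points, not by the counting, which is why the package pushes the integration into compiled C while the dimension fit itself can stay in Maple.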
Survey of computer vision technology for UAV navigation
NASA Astrophysics Data System (ADS)
Xie, Bo; Fan, Xiang; Li, Sijian
2017-11-01
Navigation based on computer vision technology, which offers strong independence, high precision, and immunity to electrical interference, has attracted more and more attention in the field of UAV navigation research. Early vision-based navigation projects were mainly applied to autonomous ground robots. In recent years, visual navigation systems have been widely applied to unmanned aircraft, deep space probes, and underwater robots, which has further stimulated research on integrated navigation algorithms based on computer vision. In China, with the development of many types of UAV and the start of the third phase of the lunar exploration project, there has been significant progress in the study of visual navigation. The paper reviews the development of vision-based navigation in the field of UAV research and concludes that visual navigation is mainly applied to three aspects, as follows. (1) Acquisition of UAV navigation parameters. The parameters, including UAV attitude, position, and velocity information, can be obtained from the relationship between sensor images and the carrier's attitude, the relationship between instantly matched images and reference images, and the relationship between the carrier's velocity and the characteristics of sequential images. (2) Autonomous obstacle avoidance. There are many ways to achieve obstacle avoidance in UAV navigation; methods based on computer vision, including feature matching, template matching, and image-frame analysis, are mainly introduced. (3) Target tracking and positioning. Using the acquired images, UAV position is calculated with optical flow methods, the MeanShift and CamShift algorithms, Kalman filtering, and particle filter algorithms. The paper then describes three kinds of mainstream visual systems. (1) High-speed visual systems, which use a parallel structure so that image detection and processing are carried out at high speed; such systems are applied in rapid-response settings. (2) Distributed-network visual systems, in which several discrete image sensors at different locations transmit image data to a node processor to increase the sampling rate. (3) Visual systems combined with observers, which pair image sensors with external observers to compensate for the shortcomings of the visual equipment. To some degree, these systems overcome the weaknesses of early visual systems: low frame rate, low processing efficiency, and strong noise. Finally, the difficulties of vision-based navigation in practical applications are briefly discussed: (1) the heavy image-processing workload gives the system poor real-time performance; (2) strong environmental effects give the system poor anti-interference ability; and (3) because such systems work only in particular environments, their adaptability is poor.
ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (SILICON GRAPHICS VERSION)
NASA Technical Reports Server (NTRS)
Walters, D.
1994-01-01
The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely-sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS has the subsequent capability to geographically reference this data to dozens of standard, as well as user created projections. As an integrated image processing system, ELAS offers the user of remotely-sensed data a wide range of capabilities in the areas of land cover analysis and general purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. There are over 230 modules presently available to aid the user in performing a wide range of land cover analyses and manipulation. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options which aid in displaying image data, such as magnification/reduction of the display; true color display; and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules which allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new applications modules to be easily integrated in the future. ELAS has as a standard the flexibility to process data elements exceeding 8 bits in length, including floating point (noninteger) elements and 16 or 32 bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C-LANGUAGE for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15 bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, the disk storage required is approximately 115Mb, and a minimum of 8Mb of RAM is required for execution. The Sun version of ELAS requires either the X-Window System Version 11 Revision 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor. 
The standard distribution medium for the VAX version (ERL10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format. This version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for the Sun version (SSC-00019) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Masscomp version, (SSC-00020) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Silicon Graphics version (SSC-00021) is a .25 inch streaming magnetic IRIS tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and Open Windows are trademarks of Sun Microsystems, Inc. MIT X Window System is licensed by Massachusetts Institute of Technology.
ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (CONCURRENT VERSION)
NASA Technical Reports Server (NTRS)
Pearson, R. W.
1994-01-01
The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely-sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS has the subsequent capability to geographically reference this data to dozens of standard, as well as user created projections. As an integrated image processing system, ELAS offers the user of remotely-sensed data a wide range of capabilities in the areas of land cover analysis and general purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. There are over 230 modules presently available to aid the user in performing a wide range of land cover analyses and manipulation. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options which aid in displaying image data, such as magnification/reduction of the display; true color display; and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules which allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new applications modules to be easily integrated in the future. ELAS has as a standard the flexibility to process data elements exceeding 8 bits in length, including floating point (noninteger) elements and 16 or 32 bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C-LANGUAGE for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15 bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, the disk storage required is approximately 115Mb, and a minimum of 8Mb of RAM is required for execution. The Sun version of ELAS requires either the X-Window System Version 11 Revision 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor. 
The standard distribution medium for the VAX version (ERL10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format. This version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for the Sun version (SSC-00019) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Masscomp version, (SSC-00020) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the Silicon Graphics version (SSC-00021) is a .25 inch streaming magnetic IRIS tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and Open Windows are trademarks of Sun Microsystems, Inc. MIT X Window System is licensed by Massachusetts Institute of Technology.
ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (SUN VERSION)
NASA Technical Reports Server (NTRS)
Walters, D.
1994-01-01
The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS can then geographically reference these data to dozens of standard, as well as user-created, projections. As an integrated image processing system, ELAS offers the user of remotely sensed data a wide range of capabilities in the areas of land cover analysis and general-purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. Over 230 modules are presently available to aid the user in performing a wide range of land cover analyses and manipulations. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files, etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options that aid in displaying image data, such as magnification/reduction of the display, true color display, and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules that allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new application modules to be easily integrated in the future. ELAS has as a standard the flexibility to process data elements exceeding 8 bits in length, including floating-point (noninteger) elements and 16- or 32-bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15-bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, approximately 115Mb of disk storage and a minimum of 8Mb of RAM are required for execution. The Sun version of ELAS requires either the X Window System Version 11 Revision 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor.
The standard distribution medium for the VAX version (ERL-10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format; this version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for each of the Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and OpenWindows are trademarks of Sun Microsystems, Inc. The MIT X Window System is licensed by the Massachusetts Institute of Technology.
ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (MASSCOMP VERSION)
NASA Technical Reports Server (NTRS)
Walters, D.
1994-01-01
The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS can then geographically reference these data to dozens of standard, as well as user-created, projections. As an integrated image processing system, ELAS offers the user of remotely sensed data a wide range of capabilities in the areas of land cover analysis and general-purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. Over 230 modules are presently available to aid the user in performing a wide range of land cover analyses and manipulations. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files, etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options that aid in displaying image data, such as magnification/reduction of the display, true color display, and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules that allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new application modules to be easily integrated in the future. ELAS has as a standard the flexibility to process data elements exceeding 8 bits in length, including floating-point (noninteger) elements and 16- or 32-bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15-bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, approximately 115Mb of disk storage and a minimum of 8Mb of RAM are required for execution. The Sun version of ELAS requires either the X Window System Version 11 Revision 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor.
The standard distribution medium for the VAX version (ERL-10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format; this version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for each of the Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and OpenWindows are trademarks of Sun Microsystems, Inc. The MIT X Window System is licensed by the Massachusetts Institute of Technology.
ELAS - SCIENCE & TECHNOLOGY LABORATORY APPLICATIONS SOFTWARE (DEC VAX VERSION)
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1994-01-01
The Science and Technology Laboratory Applications Software (ELAS) was originally designed to analyze and process digital imagery data, specifically remotely sensed scanner data. This capability includes the processing of Landsat multispectral data; aircraft-acquired scanner data; digitized topographic data; and numerous other ancillary data, such as soil types and rainfall information, that can be stored in digitized form. ELAS can then geographically reference these data to dozens of standard, as well as user-created, projections. As an integrated image processing system, ELAS offers the user of remotely sensed data a wide range of capabilities in the areas of land cover analysis and general-purpose image analysis. ELAS is designed for flexible use and operation and includes its own FORTRAN operating subsystem and an expandable set of FORTRAN application modules. Because all of ELAS resides in one "logical" FORTRAN program, data inputs and outputs, directives, and module switching are convenient for the user. Over 230 modules are presently available to aid the user in performing a wide range of land cover analyses and manipulations. The file management modules enable the user to allocate, define, access, and specify usage for all types of files (ELAS files, subfiles, external files, etc.). Various other modules convert specific types of satellite, aircraft, and vector-polygon data into files that can be used by other ELAS modules. The user also has many module options that aid in displaying image data, such as magnification/reduction of the display, true color display, and several memory functions. Additional modules allow for the building and manipulation of polygonal areas of the image data. Finally, there are modules that allow the user to select and classify the image data. An important feature of the ELAS subsystem is that its structure allows new application modules to be easily integrated in the future. ELAS has as a standard the flexibility to process data elements exceeding 8 bits in length, including floating-point (noninteger) elements and 16- or 32-bit integers. Thus it is able to analyze and process "non-standard" nonimage data. The VAX (ERL-10017) and Concurrent (ERL-10013) versions of ELAS 9.0 are written in FORTRAN and ASSEMBLER for DEC VAX series computers running VMS and Concurrent computers running MTM. The Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions of ELAS 9.0 are written in FORTRAN 77 and C for Sun4 series computers running SunOS, Masscomp computers running UNIX, and Silicon Graphics IRIS computers running IRIX. The Concurrent version requires at least 15-bit addressing and a direct memory access channel. The VAX and Concurrent versions of ELAS both require floating-point hardware, at least 1Mb of RAM, and approximately 70Mb of disk space. Both versions also require a COMTAL display device in order to display images. For the Sun, Masscomp, and Silicon Graphics versions of ELAS, approximately 115Mb of disk storage and a minimum of 8Mb of RAM are required for execution. The Sun version of ELAS requires either the X Window System Version 11 Revision 4 or Sun OpenWindows Version 2. The Masscomp version requires a GA1000 display device and the associated "gp" library. The Silicon Graphics version requires Silicon Graphics' GL library. ELAS display functions will not work with a monochrome monitor.
The standard distribution medium for the VAX version (ERL-10017) is a set of two 9-track 1600 BPI magnetic tapes in DEC VAX BACKUP format; this version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. The standard distribution medium for the Concurrent version (ERL-10013) is a set of two 9-track 1600 BPI magnetic tapes in Concurrent BACKUP format. The standard distribution medium for each of the Sun (SSC-00019), Masscomp (SSC-00020), and Silicon Graphics (SSC-00021) versions is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Version 9.0 was released in 1991. Sun4, SunOS, and OpenWindows are trademarks of Sun Microsystems, Inc. The MIT X Window System is licensed by the Massachusetts Institute of Technology.
Kahlert, Daniela; Schlicht, Wolfgang
2015-08-21
Traffic safety and pedestrian friendliness are considered important conditions for older people's motivation to walk through their environment. This study uses an experimental design with computer-simulated living environments to investigate the effect of micro-scale environmental factors (parking spaces and green verges with trees) on older people's perceptions of these two motivational antecedents (the dependent variables). Seventy-four consecutively recruited older people were randomly assigned to watch one of two scenarios (the independent variable) on a computer screen. The scenarios simulated a stroll on a sidewalk typical of a German city. In version 'A', the subjects take a fictive walk on a sidewalk on which a number of cars are partially parked. In version 'B', cars are in parking spaces separated from the sidewalk by grass verges and trees. Subjects assessed their impressions of both dependent variables. A multivariate analysis of covariance showed that subjects' ratings of perceived traffic safety and pedestrian friendliness were higher for version 'B' than for version 'A'. Cohen's d indicates medium (d = 0.73) and large (d = 1.23) effect sizes for traffic safety and pedestrian friendliness, respectively. The study suggests that elements of the built environment might affect motivational antecedents of older people's walking behavior.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boersma, C.; Mattioda, A. L.; Allamandola, L. J.
A significantly updated version of the NASA Ames PAH IR Spectroscopic Database, the first major revision since its release in 2010, is presented. The current version, version 2.00, contains 700 computational and 75 experimental spectra compared, respectively, with 583 and 60 in the initial release. The spectra span the 2.5-4000 μm (4000-2.5 cm⁻¹) range. New tools are available on the site that allow one to analyze spectra in the database and compare them with imported astronomical spectra, as well as a suite of IDL object classes (a collection of programs utilizing IDL's object-oriented programming capabilities), called the AmesPAHdbIDLSuite, that permits offline analysis. Most noteworthy among the additions are the extension of the computational spectroscopic database to include a number of significantly larger polycyclic aromatic hydrocarbons (PAHs), the ability to visualize the molecular atomic motions corresponding to each vibrational mode, and a new tool that allows one to perform a non-negative least-squares fit of an imported astronomical spectrum with PAH spectra in the computational database. Finally, a methodology is described in the Appendix, and implemented using the AmesPAHdbIDLSuite, that allows the user to enforce charge balance during the fitting procedure.
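The non-negative least-squares fit mentioned above has a standard form: the astronomical spectrum is modeled as a non-negative combination of library PAH spectra resampled onto a common wavelength grid. A minimal sketch in Python (the array names and synthetic data are illustrative, not the AmesPAHdbIDLSuite API):

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical inputs: `library` is an (n_wavelengths, n_pahs) matrix whose
# columns are PAH emission spectra resampled onto the wavelength grid of the
# observation; `observed` is the astronomical spectrum on that same grid.
rng = np.random.default_rng(0)
library = rng.random((200, 25))
observed = library @ rng.random(25)          # synthetic "observation"

# Non-negative least squares: weights w >= 0 minimizing ||library @ w - observed||_2.
weights, residual_norm = nnls(library, observed)

fit = library @ weights                      # best-fit model spectrum
print(f"residual 2-norm: {residual_norm:.3e}")
print(f"PAHs with non-zero weight: {np.count_nonzero(weights)}")
```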
SRT Status and Plans for Version-7
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena; Kouvaris, Louis
2015-01-01
The AIRS Science Team Version-6 retrieval algorithm is currently producing level-3 Climate Data Records (CDRs) from AIRS that have proven useful to scientists in understanding climate processes. CDRs are gridded level-3 products which include all cases passing AIRS Climate QC. SRT has made significant further improvements to AIRS Version-6, and research is continuing at SRT toward the development of AIRS Version-7. At the last Science Team Meeting, we described results using SRT AIRS Version-6.19; SRT Version-6.19 is now an official build at JPL called 6.2. SRT's latest version is AIRS Version-6.22. We have also adapted AIRS Version-6.22 to run with CrIS/ATMS. AIRS Version-6.22 and CrIS Version-6.22 both now run on JPL computers, but are not yet official builds. The main reason for finalizing Version-7, and using it in the relatively near future for the processing and reprocessing of old AIRS data, is to produce even better CDRs for use by climate scientists. For this reason, all results shown in this talk use only AIRS Climate QC.
Benifla, J L; Goffinet, F; Darai, E; Madelenat, P
1994-12-01
Transabdominal amnioinfusion can be used to facilitate external cephalic version. Our technique involves filling the uterine cavity with 700 or 900 mL of 37 °C saline under continuous echographic monitoring; external cephalic version is performed the next morning. We have used this procedure in six women, all of whom had had previous unsuccessful attempts at external cephalic version. After amnioinfusion, all six patients were converted to cephalic presentation and delivered normally, without obstetric or neonatal complications.
A Cohomological Perspective on Algebraic Quantum Field Theory
NASA Astrophysics Data System (ADS)
Hawkins, Eli
2018-05-01
Algebraic quantum field theory is considered from the perspective of the Hochschild cohomology bicomplex. This is a framework for studying deformations and symmetries. Deformation is a possible approach to the fundamental challenge of constructing interacting QFT models. Symmetry is the primary tool for understanding the structure and properties of a QFT model. This perspective leads to a generalization of the algebraic quantum field theory framework, as well as a more general definition of symmetry. This means that some models may have symmetries that were not previously recognized or exploited. To first order, a deformation of a QFT model is described by a Hochschild cohomology class. A deformation could, for example, correspond to adding an interaction term to a Lagrangian. The cohomology class for such an interaction is computed here. However, the result is more general and does not require the undeformed model to be constructed from a Lagrangian. This computation leads to a more concrete version of the construction of perturbative algebraic quantum field theory.
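For readers unfamiliar with the formalism, the first-order statement can be made concrete in the standard algebraic setting (a schematic rendering, not the paper's axiomatic framework):

```latex
% First-order deformation of an associative product m(a,b) = ab by \mu_1:
%   a \star b = ab + t\,\mu_1(a,b) + O(t^2).
% Associativity of \star at order t forces \mu_1 to be a Hochschild 2-cocycle:
\[
  (\delta \mu_1)(a,b,c) \;=\; a\,\mu_1(b,c) \;-\; \mu_1(ab,c) \;+\; \mu_1(a,bc) \;-\; \mu_1(a,b)\,c \;=\; 0,
\]
% so first-order deformations, modulo trivial ones induced by a change of
% variables, are classified by the Hochschild cohomology group HH^2(A, A).
```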
ON states as resource units for universal quantum computation with photonic architectures
NASA Astrophysics Data System (ADS)
Sabapathy, Krishna Kumar; Weedbrook, Christian
2018-06-01
Universal quantum computation using photonic systems requires gates whose Hamiltonians are of order greater than quadratic in the quadrature operators. We first review previous proposals to implement such gates, in which specific non-Gaussian states are used as resources in conjunction with entangling gates such as the continuous-variable versions of the controlled-phase and controlled-not gates. We then propose ON states, which are superpositions of the vacuum and the Nth Fock state, for use as non-Gaussian resource states. We show that ON states can be used to implement the cubic and higher-order quadrature phase gates to first order in gate strength. There are several advantages to this method, such as a reduced number of superpositions in the resource-state preparation and greater control over the final gate. We also introduce useful figures of merit to characterize gate performance. Utilizing a supply of on-demand resource states, one can potentially scale up the implementation to greater accuracy by repeated application of the basic circuit.
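A minimal numerical illustration of an ON state in a truncated Fock basis, assuming the normalization (|0⟩ + a|N⟩)/√(1 + |a|²) implied by the abstract (function and parameter names are hypothetical):

```python
import numpy as np

def on_state(N: int, a: complex, dim: int) -> np.ndarray:
    """ON state (|0> + a|N>) / sqrt(1 + |a|^2) in a Fock basis truncated at `dim`."""
    assert dim > N, "truncation must include the Nth Fock state"
    psi = np.zeros(dim, dtype=complex)
    psi[0] = 1.0
    psi[N] = a
    return psi / np.sqrt(1.0 + abs(a) ** 2)

psi = on_state(N=3, a=0.5, dim=10)
print(np.vdot(psi, psi).real)        # 1.0 -> state is normalized
print(np.abs(psi[[0, 3]]) ** 2)      # populations of |0> and |3>
```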
Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.
Ćwik, Michał; Józefczyk, Jerzy
2018-01-01
An uncertain version of the permutation flow shop with unlimited buffers and the makespan as a criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times, and the maximum regret is used for the evaluation of uncertainty. Consequently, the minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First, a greedy procedure is used for calculating the criterion's value, as this calculation is itself an NP-hard problem. Moreover, the lower bound is used instead of solving the internal deterministic flow shop. A constructive heuristic algorithm is applied to the relaxed optimization problem and compared with previously developed heuristic algorithms based on the evolutionary and middle-interval approaches. The computational experiments showed the advantage of the constructive heuristic algorithm with regard to both the criterion value and computation time; the Wilcoxon matched-pairs signed-rank test confirmed this conclusion.
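The deterministic kernel of the problem, computing the makespan of a permutation flow shop, follows a standard recurrence; the sketch below also evaluates a permutation on the midpoint scenario of the processing-time intervals, a common relaxation standing in for the paper's greedy regret evaluation (an assumption, not the authors' exact procedure):

```python
import numpy as np

def makespan(perm, p):
    """Makespan of a permutation flow shop with unlimited buffers.

    p[j, m] = processing time of job j on machine m; `perm` is the job order.
    Standard recurrence: C[j][m] = max(C[j-1][m], C[j][m-1]) + p[perm[j]][m].
    """
    n, m = len(perm), p.shape[1]
    C = np.zeros((n, m))
    for j, job in enumerate(perm):
        for k in range(m):
            earlier = max(C[j - 1][k] if j else 0.0, C[j][k - 1] if k else 0.0)
            C[j][k] = earlier + p[job][k]
    return C[-1][-1]

# Interval processing times [lo, hi]; evaluate a candidate order on the
# midpoint scenario of the intervals.
lo = np.array([[2, 4], [3, 1], [1, 2]], dtype=float)
hi = np.array([[4, 6], [5, 3], [2, 4]], dtype=float)
print(makespan([0, 1, 2], (lo + hi) / 2))
```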
Prediction of Success in External Cephalic Version under Tocolysis: Still a Challenge.
Vaz de Macedo, Carolina; Clode, Nuno; Mendes da Graça, Luís
2015-01-01
External cephalic version is a procedure of fetal rotation to a cephalic presentation through manoeuvres applied to the maternal abdomen. Several prognostic factors for external cephalic version success have been described in the literature and prediction scores have been proposed, but their true implication for clinical practice is controversial. We aim to identify possible factors that could contribute to the success of an external cephalic version attempt in our population. We retrospectively examined 207 consecutive external cephalic version attempts under tocolysis conducted between January 1997 and July 2012. We consulted the department's database for the following variables: race, age, parity, maternal body mass index, gestational age, estimated fetal weight, breech category, placental location, and amniotic fluid index. We performed descriptive and analytical statistics for each variable and binary logistic regression. External cephalic version was successful in 46.9% of cases (97/207). None of the included variables was associated with the outcome of external cephalic version attempts after adjustment for confounding factors. We present a success rate similar to what has previously been described in the literature. However, in contrast to previous authors, we could not associate any of the analysed variables with the success of the external cephalic version attempt. We believe this discrepancy is partly related to the type of statistical analysis performed. Even though numerous prognostic factors for success in external cephalic version have been identified, care must be taken when counselling and selecting patients for this procedure. The data obtained suggest that external cephalic version should continue to be offered to all eligible patients regardless of prognostic factors for success.
On Certain Topological Indices of Boron Triangular Nanotubes
NASA Astrophysics Data System (ADS)
Aslam, Adnan; Ahmad, Safyan; Gao, Wei
2017-08-01
A topological index gives information about the whole structure of a chemical graph; degree-based topological indices are especially useful. Boron triangular nanotubes are now replacing the usual carbon nanotubes due to their excellent properties. We have computed the general Randić (Rα), first Zagreb (M1), second Zagreb (M2), atom-bond connectivity (ABC), and geometric-arithmetic (GA) indices of boron triangular nanotubes. We have also computed the fourth version of the atom-bond connectivity index (ABC4) and the fifth version of the geometric-arithmetic index (GA5) for these nanotubes.
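The named degree-based indices have closed forms over the edge set of a graph, so they can be computed generically; a sketch using networkx (the boron triangular nanotube graph itself is not constructed here; a small hexagonal lattice stands in):

```python
import math
import networkx as nx

def degree_indices(G, alpha=-0.5):
    """General Randic (R_alpha), first/second Zagreb (M1, M2), atom-bond
    connectivity (ABC), and geometric-arithmetic (GA) indices of a graph."""
    deg = dict(G.degree())
    M1 = sum(d * d for d in deg.values())          # M1 = sum_v deg(v)^2
    R = M2 = ABC = GA = 0.0
    for u, v in G.edges():
        du, dv = deg[u], deg[v]
        R += (du * dv) ** alpha                    # R_alpha = sum_uv (du*dv)^alpha
        M2 += du * dv                              # M2 = sum_uv du*dv
        ABC += math.sqrt((du + dv - 2) / (du * dv))
        GA += 2.0 * math.sqrt(du * dv) / (du + dv)
    return {"R_alpha": R, "M1": M1, "M2": M2, "ABC": ABC, "GA": GA}

print(degree_indices(nx.hexagonal_lattice_graph(2, 2)))
```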
Computer-automated opponent for manned air-to-air combat simulations
NASA Technical Reports Server (NTRS)
Hankins, W. W., III
1979-01-01
Two versions of a real-time digital-computer program that operates a fighter airplane interactively against a human pilot in simulated air combat were evaluated. They function by replacing one of two pilots in the Langley differential maneuvering simulator. Both versions make maneuvering decisions from identical information and logic; they differ essentially in the aerodynamic models that they control. One is very complete, but the other is much simpler, primarily characterizing the airplane's performance (lift, drag, and thrust). Both models competed extremely well against highly trained U.S. fighter pilots.
NASA Technical Reports Server (NTRS)
Mcgaw, Michael A.; Saltsman, James F.
1993-01-01
A recently developed high-temperature fatigue life prediction computer code is presented and an example of its usage given. The code discussed is based on the Total Strain version of Strainrange Partitioning (TS-SRP). Included in this code are procedures for characterizing the creep-fatigue durability behavior of an alloy according to TS-SRP guidelines and predicting cyclic life for complex cycle types for both isothermal and thermomechanical conditions. A reasonably extensive materials properties database is included with the code.
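Schematically, total-strain life relations of the kind TS-SRP builds on combine elastic and inelastic strain-range versus life lines with the SRP interaction damage rule (a generic form, not the code's full characterization):

```latex
% Total strain range split into elastic and inelastic power-law life lines:
\[
  \Delta\varepsilon_{T} \;=\; \Delta\varepsilon_{el} + \Delta\varepsilon_{in}
  \;=\; B\,N_f^{\,b} \;+\; C'\,N_f^{\,c},
\]
% with the interaction damage rule combining the pure SRP life lines:
\[
  \frac{1}{N_f} \;=\; \sum_{ij} \frac{F_{ij}}{N_{ij}},
  \qquad ij \in \{pp,\;pc,\;cp,\;cc\},
\]
% where F_ij are the partitioned strain-range fractions and N_ij the lives
% of the pure plastic/creep strain-range components.
```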
Leonardi, Rosalia; Maiorana, Francesco; Giordano, Daniela
2008-06-01
Many of us use and maintain files on more than one computer--a desktop part of the time, and a notebook, a palmtop, or removable devices at other times. It can be easy to forget which device contains the latest version of a particular file, and time-consuming searches often ensue. One way to solve this problem is to use software that synchronizes the files, allowing users to maintain updated versions of the same file in several locations.
USSAERO version D computer program development using ANSI standard FORTRAN 77 and DI-3000 graphics
NASA Technical Reports Server (NTRS)
Wiese, M. R.
1986-01-01
The D version of the Unified Subsonic Supersonic Aerodynamic Analysis (USSAERO) program is the result of numerous modifications and enhancements to the B01 version. These changes include conversion to ANSI standard FORTRAN 77; use of the DI-3000 graphics package; removal of the overlay structure; a revised input format; the addition of an input data analysis routine; and an increase in the number of aeronautical components allowed.
JEFI: a cash flow analysis program (Version 3.0 for Windows). [Computer program].
Bruce Hansen; Jeff Palmer
1998-01-01
JEFFI/3 is a Windows version of JEFFI/2. The differences between the two versions are the new interface, an investment term of 1 to 30 years (instead of 4 to 30), and a rich set of detailed online help documents. JEFFI/3 still retains a number of unique features of JEFFI/2 related to the treatment of the final-year cash flows, depreciation, working capital, and derivation...
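The abstract does not detail JEFFI's computations, but the core of any cash flow analysis program is discounting; a minimal sketch (function name and figures hypothetical):

```python
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value of yearly cash flows; cash_flows[0] is year 0
    (e.g., the initial outlay, entered as a negative number)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical 5-year investment: outlay of 10,000, then yearly net revenues,
# discounted at 8%.
flows = [-10_000, 2_500, 3_000, 3_500, 4_000, 4_500]
print(round(npv(0.08, flows), 2))
```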
Cross-cultural validation of the National Eye Institute Visual Function Questionnaire.
Mollazadegan, Kaziwe; Huang, Jinhai; Khadka, Jyoti; Wang, Qinmei; Yang, Feng; Gao, Rongrong; Pesudovs, Konrad
2014-05-01
To assess the native and the previously Rasch-modified National Eye Institute Visual Function Questionnaire (NEI VFQ) scales in a Chinese population. Eye Hospital of Wenzhou Medical University, Wenzhou, China. Questionnaire development. Patients on the waiting list for cataract surgery completed the 39-item NEI VFQ (NEI VFQ-39). Rasch analysis was performed in three steps: (1) assess the psychometric properties of the original NEI VFQ; (2) reassess the Rasch-modified NEI VFQ scales previously proposed by Pesudovs et al. (2010) in a Chinese population; and (3) compare the scores of the previously recommended NEI VFQ scales with new Rasch-modified scales of the same questionnaire using Bland-Altman plots. Four hundred thirty-five patients (median age 70 years; range 35 to 90 years) completed the NEI VFQ-39. Response categories for 4 question types were dysfunctional and were therefore repaired. The original NEI VFQ-39 and NEI VFQ-25 showed good measurement precision; however, both versions showed multidimensionality, misfitting items, suboptimum targeting, and nonfunctioning subscales. Using the previously proposed Rasch-modified scales of the NEI VFQ yielded valid measurement of each construct in the 39-item and 25-item questionnaires. Comparison between the earlier proposed NEI VFQ scales and the new versions developed in this population showed good agreement. The original NEI VFQ was once again found to be flawed; the previously proposed Rasch-analyzed versions of the NEI VFQ and the new Chinese versions showed good agreement.
Automated Instructional Management Systems (AIMS) Version III, Users Manual.
ERIC Educational Resources Information Center
New York Inst. of Tech., Old Westbury.
This document sets forth the procedures necessary to utilize and understand the operating characteristics of the Automated Instructional Management System - Version III, a computer-based system for management of educational processes. Directions for initialization, including internal and user files; system and operational input requirements;…
USERS MANUAL: LANDFILL GAS EMISSIONS MODEL - VERSION 2.0
The document is a user's guide for a computer model, Version 2.0 of the Landfill Gas Emissions Model (LandGEM), for estimating air pollution emissions from municipal solid waste (MSW) landfills. The model can be used to estimate emission rates for methane, carbon dioxide, nonmet...
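LandGEM-style estimates rest on a first-order decay model; a sketch of that general form (parameter values illustrative, not the model's regulatory defaults):

```python
import math

def first_order_landfill_gas(k, L0, waste_by_year, t_end):
    """Annual methane generation at year `t_end` from a first-order decay
    model of the kind LandGEM implements:
        Q(t) = sum_i k * L0 * M_i * exp(-k * (t - t_i))
    over waste masses M_i placed in years t_i <= t.
    k: decay rate (1/yr); L0: methane generation potential (m^3/Mg)."""
    return sum(
        k * L0 * M * math.exp(-k * (t_end - t_i))
        for t_i, M in enumerate(waste_by_year)
        if t_i <= t_end
    )

# Illustrative: 10 years of waste acceptance at 1000 Mg/yr, evaluated
# 15 years after the landfill opened.
print(round(first_order_landfill_gas(0.05, 170.0, [1000.0] * 10, 15), 1))
```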
AIRPOL-4A : an introduction and user's guide.
DOT National Transportation Integrated Search
1976-01-01
This report details the mechanics of implementing the computer program AIRPOL-4A. AIRPOL-4A supersedes AIRPOL-4. The upgrade from version 4 to version 4A is a result of the new emissions guidelines contained in Supplement 5 to AP-42, April 1975, by t...
NASA Astrophysics Data System (ADS)
Efthimiou, George C.; Kovalets, Ivan V.; Venetsanos, Alexandros; Andronopoulos, Spyros; Argyropoulos, Christos D.; Kakosimos, Konstantinos
2017-12-01
An improved inverse modelling method to estimate the location and the emission rate of an unknown stationary point source of passive atmospheric pollutant in a complex urban geometry is incorporated in the Computational Fluid Dynamics code ADREA-HF and presented in this paper. The key improvement over the previous version of the method lies in a two-step segregated approach. First, only the source coordinates are analysed, using a correlation function of measured and calculated concentrations. In the second step, the source rate is identified by minimizing a quadratic cost function. The validation of the new algorithm is performed by simulating the MUST wind tunnel experiment. A grid-independent flow field solution is first attained by applying successive refinements of the computational mesh, and the final wind flow is validated against the measurements quantitatively and qualitatively. The old and new versions of the source term estimation method are tested on a coarse and a fine mesh. The new method appeared to be more robust, giving satisfactory estimates of source location and emission rate on both grids. The performance of the old version of the method varied between failure and success and appeared to be sensitive to the selection of the model error magnitude that must be inserted in its quadratic cost function. The performance of the method also depends on the number and the placement of the sensors constituting the measurement network. Of significant interest for the practical application of the method in urban settings is the number of concentration sensors required to obtain a "satisfactory" determination of the source. The probability of obtaining a satisfactory solution, according to specified criteria, by the new method has been assessed as a function of the number of sensors that constitute the measurement network.
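The two-step logic lends itself to a compact sketch: rank candidate source locations by the correlation between measured and unit-rate modeled concentrations, then obtain the emission rate in closed form from the quadratic cost (a schematic reconstruction, not the ADREA-HF implementation):

```python
import numpy as np

def locate_and_rate(unit_conc, measured):
    """Two-step source estimation sketch.

    unit_conc[s, i]: concentration at sensor i predicted for candidate
    source s emitting at unit rate (e.g., from precomputed CFD runs);
    measured[i]: observed concentration at sensor i.
    """
    # Step 1: candidate location whose pattern best correlates with the data.
    corr = [np.corrcoef(c, measured)[0, 1] for c in unit_conc]
    best = int(np.argmax(corr))

    # Step 2: concentrations are linear in the rate q, so the quadratic cost
    # sum_i (q*c_i - m_i)^2 is minimized in closed form.
    c = unit_conc[best]
    q = float(c @ measured) / float(c @ c)
    return best, q, corr[best]

rng = np.random.default_rng(1)
unit = rng.random((50, 8))                    # 50 candidate cells, 8 sensors
meas = 3.2 * unit[17] + 0.01 * rng.random(8)  # true source: cell 17, rate 3.2
print(locate_and_rate(unit, meas))
```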
DOE Office of Scientific and Technical Information (OSTI.GOV)
Müller, C.; Hughes, E. D.; Niederauer, G. F.
1998-10-01
Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code, as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures, with or without combustion. A major application of GASFLOW is predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion-dominated flows, and during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays; sample problems are included. Volume III contains some of the assessments performed by LANL and FzK.
NASA Astrophysics Data System (ADS)
Freitas, Saulo R.; Panetta, Jairo; Longo, Karla M.; Rodrigues, Luiz F.; Moreira, Demerval S.; Rosário, Nilton E.; Silva Dias, Pedro L.; Silva Dias, Maria A. F.; Souza, Enio P.; Freitas, Edmilson D.; Longo, Marcos; Frassoni, Ariane; Fazenda, Alvaro L.; Silva, Cláudio M. Santos e.; Pavani, Cláudio A. B.; Eiras, Denis; França, Daniela A.; Massaru, Daniel; Silva, Fernanda B.; Santos, Fernando C.; Pereira, Gabriel; Camponogara, Gláuber; Ferrada, Gonzalo A.; Campos Velho, Haroldo F.; Menezes, Isilda; Freire, Julliana L.; Alonso, Marcelo F.; Gácita, Madeleine S.; Zarzur, Maurício; Fonseca, Rafael M.; Lima, Rafael S.; Siqueira, Ricardo A.; Braz, Rodrigo; Tomita, Simone; Oliveira, Valter; Martins, Leila D.
2017-01-01
We present a new version of the Brazilian developments on the Regional Atmospheric Modeling System (BRAMS), in which different previous versions for weather, chemistry, and carbon cycle were unified in a single integrated modeling system software. This new version also has a new set of state-of-the-art physical parameterizations and greater computational parallel and memory usage efficiency. The description of the main model features includes several examples illustrating the quality of the transport scheme for scalars, radiative fluxes on surface, and model simulation of rainfall systems over South America at different spatial resolutions using a scale aware convective parameterization. Additionally, the simulation of the diurnal cycle of the convection and carbon dioxide concentration over the Amazon Basin, as well as carbon dioxide fluxes from biogenic processes over a large portion of South America, are shown. Atmospheric chemistry examples show the model performance in simulating near-surface carbon monoxide and ozone in the Amazon Basin and the megacity of Rio de Janeiro. For tracer transport and dispersion, the model capabilities to simulate the volcanic ash 3-D redistribution associated with the eruption of a Chilean volcano are demonstrated. The gain of computational efficiency is described in some detail. BRAMS has been applied for research and operational forecasting mainly in South America. Model results from the operational weather forecast of BRAMS on 5 km grid spacing in the Center for Weather Forecasting and Climate Studies, INPE/Brazil, since 2013 are used to quantify the model skill of near-surface variables and rainfall. The scores show the reliability of BRAMS for the tropical and subtropical areas of South America. Requirements for keeping this modeling system competitive regarding both its functionalities and skills are discussed. Finally, we highlight the relevant contribution of this work to building a South American community of model developers.
NASA Technical Reports Server (NTRS)
Freitas, Saulo R.; Panetta, Jairo; Longo, Karla M.; Rodrigues, Luiz F.; Moreira, Demerval S.; Rosario, Nilton E.; Silva Dias, Pedro L.; Silva Dias, Maria A. F.; Souza, Enio P.; Freitas, Edmilson D.;
2017-01-01
We present a new version of the Brazilian developments on the Regional Atmospheric Modeling System (BRAMS), in which different previous versions for weather, chemistry, and carbon cycle were unified in a single integrated software system. The new version also has a new set of state-of-the-art physical parameterizations and greater computational parallel and memory usage efficiency. Together with the description of the main features, examples are given of the quality of the transport scheme for scalars, radiative fluxes at the surface, and model simulation of rainfall systems over South America at different spatial resolutions using a scale-aware convective parameterization. In addition, the simulation of the diurnal cycle of convection and carbon dioxide concentration over the Amazon Basin, as well as carbon dioxide fluxes from biogenic processes over a large portion of South America, are shown. Atmospheric chemistry examples present model performance in simulating near-surface carbon monoxide and ozone in the Amazon Basin and the Rio de Janeiro megacity. For tracer transport and dispersion, the model's capability to simulate the 3-D redistribution of volcanic ash associated with the eruption of a Chilean volcano is demonstrated. The gain in computational efficiency is then described in some detail. BRAMS has been applied for research and operational forecasting, mainly in South America. Model results from the operational weather forecast of BRAMS on 5 km grid spacing at the Center for Weather Forecasting and Climate Studies, INPE/Brazil, since 2013 are used to quantify the model skill of near-surface variables and rainfall. The scores show the reliability of BRAMS for the tropical and subtropical areas of South America. Requirements for keeping this modeling system competitive regarding both its functionalities and skills are discussed. Finally, we highlight the relevant contribution of this work to building a South American community of model developers.
Field verification of reconstructed dam-break flood, Laurel Run, Pennsylvania
Chen, Cheng-lung; Armbruster, Jeffrey T.
1979-01-01
A one-dimensional dam-break flood routing model is verified using observed data on the flash flood resulting from the failure of Laurel Run Reservoir Dam near Johnstown, Pennsylvania. The model is based on an explicit scheme of the method of characteristics with specified time intervals. It combines one of the characteristic equations with the Rankine-Hugoniot shock equations to trace the corresponding characteristic backward to the known state, solving for the depth and velocity of flow at the wave front. Experience with the previous version of the model called for a modification of the method of solution to overcome computational difficulties at the narrow breach and at geomorphological constraints where channel geometry changes rapidly. A large reduction in computational inaccuracies and oscillations was achieved by introducing the actual "storage width" in the equation of continuity and an imaginary "conveyance width" in the equation of motion. Close agreement between observed and computed peak stages at several stations downstream of the dam strongly suggests the validity and applicability of the model. However, small numerical noise appearing in the computed stage and discharge hydrographs at the dam site, as well as a discrepancy in attenuated peaks in the discharge hydrographs, indicates the need for further model improvement.
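The storage-width/conveyance-width modification amounts to using different effective widths in the two governing equations; in schematic one-dimensional form (notation mine, not the report's):

```latex
% Continuity with the actual storage width b_s (water that is stored but
% not necessarily conveyed):
\[
  b_s\,\frac{\partial h}{\partial t} \;+\; \frac{\partial Q}{\partial x} \;=\; 0,
\]
% Momentum written with an effective conveyance width b_c through A = b_c h:
\[
  \frac{\partial Q}{\partial t}
  + \frac{\partial}{\partial x}\!\left(\frac{Q^{2}}{A}\right)
  + gA\,\frac{\partial h}{\partial x}
  \;=\; gA\,(S_0 - S_f),
\]
% with S_0 the bed slope and S_f the friction slope.
```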
Engineering and programming manual: Two-dimensional kinetic reference computer program (TDK)
NASA Technical Reports Server (NTRS)
Nickerson, G. R.; Dang, L. D.; Coats, D. E.
1985-01-01
The Two Dimensional Kinetics (TDK) computer program is a primary tool in applying the JANNAF liquid rocket thrust chamber performance prediction methodology. The development of a methodology that includes all aspects of rocket engine performance from analytical calculation to test measurements, that is physically accurate and consistent, and that serves as an industry and government reference is presented. Recent interest in rocket engines that operate at high expansion ratio, such as most Orbit Transfer Vehicle (OTV) engine designs, has required an extension of the analytical methods used by the TDK computer program. Thus, the version of TDK that is described in this manual is in many respects different from the 1973 version of the program. This new material reflects the new capabilities of the TDK computer program, the most important of which are described.
The Development and Validation of a Bilingual Version of the Vocabulary Size Test
ERIC Educational Resources Information Center
Karami, Hossein
2012-01-01
This paper reports an attempt to develop and validate a bilingual Persian version of the Vocabulary Size Test (VST). Due to the particular educational system in Iran, there is a dire need for a test that can effectively estimate English learners' vocabulary sizes. Previous research (Nguyen and Nation, 2011) has indicated that bilingual versions of…
ERIC Educational Resources Information Center
Sexton, Kathryn A.; Dugas, Michel J.
2009-01-01
This study examined the factor structure of the English version of the Intolerance of Uncertainty Scale (IUS; French version: M. H. Freeston, J. Rheaume, H. Letarte, M. J. Dugas, & R. Ladouceur, 1994; English version: K. Buhr & M. J. Dugas, 2002) using a substantially larger sample than has been used in previous studies. Nonclinical…
Embedding methods for the steady Euler equations
NASA Technical Reports Server (NTRS)
Chang, S. H.; Johnson, G. M.
1983-01-01
An approach to the numerical solution of the steady Euler equations is to embed the first-order Euler system in a second-order system and then to recapture the original solution by imposing additional boundary conditions. Initial development of this approach and computational experimentation with it were previously based on heuristic physical reasoning. This has led to the construction of a relaxation procedure for the solution of two-dimensional steady flow problems. The theoretical justification for the embedding approach is addressed. It is proven that, with the appropriate choice of embedding operator and additional boundary conditions, the solution to the embedded system is exactly the one to the original Euler equations. Hence, solving the embedded version of the Euler equations will not produce extraneous solutions.
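The embedding idea can be stated schematically (notation mine, not the paper's):

```latex
% The first-order steady system with boundary operator B,
\[
  L\,u \;=\; 0 \quad \text{in } \Omega, \qquad B\,u \;=\; g \quad \text{on } \partial\Omega,
\]
% is embedded in a second-order system, e.g. one of the form
\[
  L^{*}L\,u \;=\; 0 \quad \text{in } \Omega,
\]
% whose solution set is larger; imposing the additional boundary condition
% L u = 0 on \partial\Omega (besides B u = g) selects exactly the solution
% of the original first-order problem, so no extraneous solutions survive.
```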
ERIC Educational Resources Information Center
Singh, Gurmukh
2012-01-01
The present article is primarily targeted at advanced college/university undergraduate students of chemistry/physics education, computational physics/chemistry, and computer science. The most recent software system, MS Visual Studio .NET version 2010, is employed to perform computer simulations for modeling Bohr's quantum theory of…
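The abstract does not give the simulation details, but the underlying Bohr-model relations are standard; a minimal sketch (in Python rather than the article's Visual Studio environment):

```python
# Bohr model of hydrogen: E_n = -13.6 eV / n^2, r_n = n^2 * a_0.
RYDBERG_EV = 13.605693     # Rydberg energy in eV
BOHR_RADIUS_NM = 0.052918  # Bohr radius in nm

for n in range(1, 5):
    E_n = -RYDBERG_EV / n**2
    r_n = BOHR_RADIUS_NM * n**2
    print(f"n={n}: E = {E_n:8.3f} eV, r = {r_n:6.3f} nm")

# Wavelength of the n=3 -> n=2 (H-alpha) transition via lambda = hc / dE:
hc_ev_nm = 1239.842
print(f"H-alpha: {hc_ev_nm / (RYDBERG_EV * (1/4 - 1/9)):.1f} nm")  # ~656 nm
```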
Students' Mathematics Word Problem-Solving Achievement in a Computer-Based Story
ERIC Educational Resources Information Center
Gunbas, N.
2015-01-01
The purpose of this study was to investigate the effect of a computer-based story, which was designed in anchored instruction framework, on sixth-grade students' mathematics word problem-solving achievement. Problems were embedded in a story presented on a computer as computer story, and then compared with the paper-based version of the same story…
Author Correction: The genetic prehistory of the Baltic Sea region.
Mittnik, Alissa; Wang, Chuan-Chao; Pfrengle, Saskia; Daubaras, Mantas; Zariņa, Gunita; Hallgren, Fredrik; Allmäe, Raili; Khartanovich, Valery; Moiseyev, Vyacheslav; Tõrv, Mari; Furtwängler, Anja; Valtueña, Aida Andrades; Feldman, Michal; Economou, Christos; Oinonen, Markku; Vasks, Andrejs; Balanovska, Elena; Reich, David; Jankauskas, Rimantas; Haak, Wolfgang; Schiffels, Stephan; Krause, Johannes
2018-04-11
The original version of this Article omitted references to previous work, which are detailed in the associated Author Correction. These omissions have been corrected in both the PDF and HTML versions of the Article.
xdamp Version 6 : an IDL-based data and image manipulation program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballard, William Parker
2012-04-01
The original DAMP (DAta Manipulation Program) was written by Mark Hedemann of Sandia National Laboratories and used the CA-DISSPLA™ graphics package (available from Computer Associates International, Inc., Garden City, NY) as its engine. It was used to plot, modify, and otherwise manipulate the one-dimensional data waveforms (data vs. time) from a wide variety of accelerators. With the waning of CA-DISSPLA and the increasing popularity of Unix®-based workstations, a replacement was needed. This package uses the IDL® software, available from Research Systems Incorporated, a Xerox company, in Boulder, Colorado, as the engine, and creates a set of widgets to manipulate the data in a manner similar to the original DAMP and earlier versions of xdamp. IDL is currently supported on a wide variety of Unix platforms such as IBM® workstations, Hewlett-Packard workstations, SUN® workstations, Microsoft® Windows™ computers, Macintosh® computers, and Digital Equipment Corporation VMS® and Alpha® systems. Thus, xdamp is portable across many platforms. We have verified operation, albeit with some minor IDL bugs, on personal computers using Windows 7 and Windows Vista, on Unix platforms, and on Macintosh computers. Version 6 is an update that uses the IDL Virtual Machine to resolve the need for licensing IDL.
A new modified conjugate gradient coefficient for solving system of linear equations
NASA Astrophysics Data System (ADS)
Hajar, N.; ‘Aini, N.; Shapiee, N.; Abidin, Z. Z.; Khadijah, W.; Rivaie, M.; Mamat, M.
2017-09-01
The conjugate gradient (CG) method is an evolution of computational methods for solving unconstrained optimization problems. The approach is easy to implement due to its simplicity and has been proven effective in solving real-life applications. Although this field has received a copious amount of attention in recent years, some of the newer CG variants cannot surpass the efficiency of previous versions. Therefore, in this paper, a new CG coefficient which retains the sufficient descent and global convergence properties of the original CG methods is proposed. The new CG method is tested on a set of test functions under exact line search, and its performance is compared to that of some well-known previous CG methods in terms of number of iterations and CPU time. The results show that the new CG algorithm has the best efficiency among all the methods tested. This paper also includes an application of the new CG algorithm to solving a large system of linear equations.
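The abstract does not state the new coefficient, so as context the sketch below shows the classic linear CG baseline that such coefficients modify, with the Fletcher-Reeves form of β as a stand-in (not the paper's coefficient):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Classic linear CG for a symmetric positive definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x                      # residual (negative gradient)
    d = r.copy()                       # initial search direction
    rs_old = r @ r
    for k in range(max_iter):
        Ad = A @ d
        alpha = rs_old / (d @ Ad)      # exact line search for the quadratic
        x += alpha * d
        r -= alpha * Ad
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, k + 1
        beta = rs_new / rs_old         # Fletcher-Reeves form of the CG coefficient
        d = r + beta * d
        rs_old = rs_new
    return x, max_iter

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, iters = conjugate_gradient(A, b)
print(x, iters, np.allclose(A @ x, b))
```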
Fast access to the CMS detector condition data employing HTML5 technologies
NASA Astrophysics Data System (ADS)
Pierro, Giuseppe Antonio; Cavallari, Francesca; Di Guida, Salvatore; Innocente, Vincenzo
2011-12-01
This paper focuses on using HTML version 5 (HTML5) for accessing condition data for the CMS experiment, evaluating the benefits and risks posed by the use of this technology. According to the authors of HTML5, this technology attempts to solve issues found in previous iterations of HTML and addresses the needs of web applications, an area previously not adequately covered by HTML. We demonstrate that employing HTML5 brings important benefits in terms of access performance to the CMS condition data. The combined use of web storage and web sockets increases performance and reduces costs in terms of computation power, memory usage, and network bandwidth for client and server. Above all, web workers allow creating different scripts that can be executed in multi-threaded mode, exploiting multi-core microprocessors. Web workers have been employed to substantially decrease the web page rendering time needed to display the condition data stored in the CMS condition database.
SAGE FOR WINDOWS (WSAGE) VERSION 1.0 SOLVENT ALTERNATIVES GUIDE - USER'S GUIDE
The guide provides instructions for using the Solvent Alternatives Guide (SAGE) for Windows, version 1.0. The guide assumes that the user is familiar with the fundamentals of operating Windows 3.1 (or higher) on a personal computer under the DOS 5.0 (or higher) operating system. ...
SAGE FOR MACINTOSH (MSAGE) VERSION 1.0 SOLVENT ALTERNATIVES GUIDE - USER'S GUIDE
The guide provides instructions for using the Solvent Alternatives Guide (SAGE) for Macintosh, version 1.0. The guide assumes that the user is familiar with the fundamentals of operating a Macintosh personal computer under the System 7.0 (or higher) operating system. SAGE for ...
What's New with MS Office Suites
ERIC Educational Resources Information Center
Goldsborough, Reid
2012-01-01
If one buys a new PC, laptop, or netbook computer today, it probably comes preloaded with Microsoft Office 2010 Starter Edition. This is a significantly limited, advertising-laden version of Microsoft's suite of productivity programs, Microsoft Office. This continues the trend of PC makers providing ever more crippled versions of Microsoft's…
MEASUREMENT AND PRECISION, EXPERIMENTAL VERSION.
ERIC Educational Resources Information Center
Harvard Univ., Cambridge, MA. Harvard Project Physics.
This document is an experimental version of a programed text on measurement and precision. Part I contains 24 frames dealing with precision and significant figures encountered in various mathematical computations and measurements. Part II begins with a brief section on experimental data, covering such points as (1) establishing the zero point, (2)…
IMS Version 3 Student Data Base Maintenance Program.
ERIC Educational Resources Information Center
Brown, John R.
Computer routines that update the Instructional Management System (IMS) Version 3 student data base which supports the Southwest Regional Laboratory's (SWRL) student monitoring system are described. Written in IBM System 360 FORTRAN IV, the program updates the data base by adding, changing and deleting records, as well as adding and deleting…
MULTIPLE PROJECTIONS SYSTEM (MPS) - USER'S MANUAL VERSION 1.0
The report is a user's manual for version 1.0 of the Multiple Projections System (MPS), a computer system that can perform "what if" scenario analysis and report the final results (i.e., Rate of Further Progress (ROP) inventories) to EPA (i.e., the Aerometric Information Retri...
USSAERO computer program development, versions B and C
NASA Technical Reports Server (NTRS)
Woodward, F. A.
1980-01-01
Versions B and C of the unified subsonic and supersonic aerodynamic analysis program, USSAERO, are described. Version B incorporates a new symmetrical singularity method to provide improved surface pressure distributions on wings in subsonic flow. Version C extends the range of application of the program to include the analysis of multiple engine nacelles or finned external stores. In addition, nonlinear compressibility effects in high subsonic and supersonic flows are approximated using a correction based on the local Mach number at panel control points. Several examples are presented comparing the results of these programs with other panel methods and experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Regnier, D.; Dubray, N.; Verriere, M.
The time-dependent generator coordinate method (TDGCM) is a powerful method to study the large-amplitude collective motion of quantum many-body systems such as atomic nuclei. Under the Gaussian Overlap Approximation (GOA), the TDGCM leads to a local, time-dependent Schrödinger equation in a multi-dimensional collective space. In this study, we present version 2.0 of the code FELIX, which solves the collective Schrödinger equation in a finite element basis. This new version features: (i) the ability to solve a generalized TDGCM+GOA equation with a metric term in the collective Hamiltonian, (ii) support for new kinds of finite elements and different types of quadrature to compute the discretized Hamiltonian and overlap matrices, (iii) the possibility to leverage the spectral element scheme, (iv) an explicit Krylov approximation of the time propagator for time integration instead of the implicit Crank-Nicolson method implemented in the first version, and (v) an entirely redesigned workflow. We benchmark this release on an analytic problem as well as on realistic two-dimensional calculations of the low-energy fission of 240Pu and 256Fm. Low- to moderate-precision calculations are most efficiently performed with simplex elements with a degree 2 polynomial basis; higher-precision calculations should instead use the spectral element method with a degree 4 polynomial basis. Finally, we emphasize that in a realistic calculation of fission mass distributions of 240Pu, FELIX-2.0 is about 20 times faster than its previous release (within a numerical precision of a few percent).
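The equation FELIX solves has the generic local form below (schematic; the version 2.0 metric-term generalization adds further factors not shown):

```latex
% Generic TDGCM+GOA collective Schrodinger equation,
% with q = (q_1, ..., q_N) the collective coordinates:
\[
  i\hbar\,\frac{\partial g(q,t)}{\partial t}
  \;=\;
  \left[
    -\frac{\hbar^{2}}{2}\sum_{ij}
      \frac{\partial}{\partial q_{i}}\,B_{ij}(q)\,\frac{\partial}{\partial q_{j}}
    \;+\; V(q)
  \right] g(q,t),
\]
% where B_ij(q) is the collective inertia tensor, V(q) the collective
% potential, and g(q,t) encodes the probability amplitude in the
% collective space.
```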
Castellví, Pere; Forero, Carlos G; Codony, Miquel; Vilagut, Gemma; Brugulat, Pilar; Medina, Antonia; Gabilondo, Andrea; Mompart, Anna; Colom, Joan; Tresserras, Ricard; Ferrer, Montse; Stewart-Brown, Sarah; Alonso, Jordi
2014-04-01
Mental well-being has aroused interest in Europe as an indicator of population health. The Warwick-Edinburgh Mental Well-Being Scale (WEMWBS) was developed in the United Kingdom, showing good face validity, and has previously been adapted into Spanish. The aim of this study is to assess the validity and reliability of the Spanish version of the WEMWBS in the general population. A cross-sectional home face-to-face interview survey with computer-assisted personal interviewing was administered with the 2011 Catalan Health Interview Survey Wave 3, which is representative of the non-institutionalized general population of Catalonia, Spain. A total of 1,900 participants 15+ years of age were interviewed. The Spanish version of the WEMWBS was administered together with socioeconomic and health-related variables with a hypothesized level of association. Similar to the original, confirmatory factor analysis fits a one-factor model adequately (CFI = 0.974; TLI = 0.970; RMSEA = 0.059; χ² = 584.82; df = 77; p < .001), and the scale has high internal consistency (Cronbach's alpha = 0.930; Guttman's lambda-2 = 0.932). The WEMWBS discriminated between population groups on all health-related and socioeconomic variables except gender (p = 0.119), with a magnitude similar to that hypothesized. Overall, mental well-being was higher for the general population of Catalonia (average and whole distribution) than for the Scottish general population. The Spanish version of the WEMWBS showed good psychometric properties similar to those of the original UK scale. Whether better mental well-being in Catalonia is due to methodological factors or to substantive cultural, social, or environmental factors should be further researched.
Regnier, D.; Dubray, N.; Verriere, M.; ...
2017-12-20
The time-dependent generator coordinate method (TDGCM) is a powerful method to study the large-amplitude collective motion of quantum many-body systems such as atomic nuclei. Under the Gaussian Overlap Approximation (GOA), the TDGCM leads to a local, time-dependent Schrödinger equation in a multi-dimensional collective space. In this study, we present version 2.0 of the code FELIX, which solves the collective Schrödinger equation in a finite element basis. This new version features: (i) the ability to solve a generalized TDGCM+GOA equation with a metric term in the collective Hamiltonian, (ii) support for new kinds of finite elements and different types of quadrature to compute the discretized Hamiltonian and overlap matrices, (iii) the possibility to leverage the spectral element scheme, (iv) an explicit Krylov approximation of the time propagator for time integration instead of the implicit Crank–Nicolson method implemented in the first version, and (v) an entirely redesigned workflow. We benchmark this release on an analytic problem as well as on realistic two-dimensional calculations of the low-energy fission of 240Pu and 256Fm. Low- to moderate-precision calculations are most efficiently performed with simplex elements with a degree-2 polynomial basis. Higher-precision calculations should instead use the spectral element method with a degree-4 polynomial basis. Finally, we emphasize that in a realistic calculation of fission mass distributions of 240Pu, FELIX-2.0 is about 20 times faster than its previous release (within a numerical precision of a few percent).
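As an illustration of item (iv), the explicit propagator advances the collective wave function by applying exp(−iΔtH/ħ) directly to the state vector rather than solving the implicit Crank–Nicolson system. A minimal sketch, using a toy 1-D grid Hamiltonian in place of FELIX's finite-element discretization and SciPy's matrix-exponential action routine as a stand-in for the Krylov propagator (all names and parameters below are illustrative assumptions, not FELIX's API):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

# Toy 1-D collective Hamiltonian on a uniform grid (hbar = 1, unit inertia);
# a hypothetical stand-in for the finite-element Hamiltonian FELIX assembles.
n, dq = 200, 0.05
q = np.arange(n) * dq
V = 0.5 * (q - q.mean()) ** 2                        # placeholder potential
lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dq**2
H = (-0.5 * lap + sp.diags(V)).tocsc()

# Gaussian initial packet; one explicit step g -> exp(-1j*dt*H) @ g.
g = np.exp(-10.0 * (q - q.mean()) ** 2).astype(complex)
dt = 1e-3
g = expm_multiply(-1j * dt * H, g)                   # action of the matrix exponential
```

The appeal of this family of schemes is that only matrix-vector products with H are needed, which suits sparse finite-element matrices well.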
NASA Astrophysics Data System (ADS)
Pugnaghi, Sergio; Guerrieri, Lorenzo; Corradini, Stefano; Merucci, Luca
2016-07-01
Volcanic plume removal (VPR) is a procedure developed to retrieve the ash optical depth, effective radius, and mass, as well as the sulfur dioxide mass, contained in a volcanic cloud from the thermal radiance at 8.7, 11, and 12 µm. It is based on the estimation of a virtual image representing what the sensor would have seen in a multispectral thermal image if the volcanic cloud were not present. Ash and sulfur dioxide were retrieved by the first version of the VPR using a very simple atmospheric model that ignored the layer above the volcanic cloud. The new version takes into account the layer of atmosphere above the cloud as well as thermal radiance scattered along the line of sight of the sensor. In addition to improved results, the new version also offers easier and faster preliminary preparation and includes other types of volcanic particles (andesite, obsidian, pumice, ice crystals, and water droplets). As in the previous version, a set of parameters regarding the volcanic area, particle types, and sensor is required to run the procedure; however, in the new version, only the mean plume temperature is required as input data. In this work, a set of parameters is presented for computing the volcanic cloud transmittance in the three bands quoted above, for all the aforementioned particle types, for both the Mt. Etna (Italy) and Eyjafjallajökull (Iceland) volcanoes, and for the Terra and Aqua MODIS instruments. Three types of tests are carried out to verify the results of the improved VPR. The first uses all the radiative transfer simulations performed to estimate the above-mentioned parameters. The second makes use of two synthetic images, one each for the Mt. Etna and Eyjafjallajökull volcanoes. The third compares VPR and look-up table (LUT) retrievals by analyzing a real image of the Eyjafjallajökull volcanic cloud acquired by MODIS aboard the Aqua satellite on 11 May 2010 at 14:05 GMT.
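For context, the simple atmospheric model of the first VPR version can be sketched, in our own notation, as a single homogeneous plume layer between the clear-sky scene and the sensor:

L_{meas}(\lambda) \approx t_c(\lambda)\, L_{clear}(\lambda) + \left[1 - t_c(\lambda)\right] B(\lambda, T_p),

where t_c(\lambda) is the plume transmittance in each of the three bands, L_{clear} the estimated cloud-free ("virtual") radiance, B the Planck function, and T_p the mean plume temperature. The new version augments this picture with the atmospheric layer above the cloud and with radiance scattered along the sensor's line of sight; this sketch is only the common baseline model, not the paper's full formulation.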
An Interactive Version of MULR04 With Enhanced Graphic Capability
ERIC Educational Resources Information Center
Burkholder, Joel H.
1978-01-01
An existing program for computing multiple regression analyses is made interactive to alleviate core-storage requirements. Some improvements to the graphics aspects of the program are also included. (JKS)
Investigating AI with Basic and Logo. Teaching Your Computer to Be Intelligent.
ERIC Educational Resources Information Center
Mandell, Alan; Lucking, Robert
1988-01-01
Discusses artificial intelligence, its definitions, and potential applications. Provides listings of Logo and BASIC versions of the programs, along with the REM statements needed to modify them for use with Apple computers. (RT)
Efficient Preconditioning for the p-Version Finite Element Method in Two Dimensions
1989-10-01
...paper, we study fast parallel preconditioners for systems of equations arising from the p-version finite element method. The p-version finite element ... computations and the solution of a relatively small global auxiliary problem. We study two different methods. In the first (Section 3), the global ... [20], will be studied in the next section. Problem (3.12) is obviously much more easily solved than the original problem, and the procedure is highly ...
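The structure hinted at in this fragment, independent local solves plus a small global auxiliary problem, matches the generic two-level additive Schwarz form of preconditioner (our gloss; we are assuming this is the family of methods the report studies):

M^{-1} = R_0^{T} A_0^{-1} R_0 + \sum_{i=1}^{N} R_i^{T} A_i^{-1} R_i,

where the A_i = R_i A R_i^{T} are small local blocks that can be inverted in parallel and A_0 = R_0 A R_0^{T} is the small global coarse problem.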
Modal analysis and dynamic stresses for acoustically excited shuttle insulation tiles
NASA Technical Reports Server (NTRS)
Ojalvo, I. U.; Ogilvie, P. L.
1975-01-01
Improvements and extensions to the RESIST computer program, developed for determining the normalized modal stress response of shuttle insulation tiles, are described. The new version of RESIST can accommodate primary-structure panels with closed-cell stringers, in addition to the existing capability for treating open-cell stringers. In addition, the present version of RESIST numerically solves vibration problems several times faster than its predecessor. A new digital computer program, titled ARREST (Acoustic Response of Reusable Shuttle Tiles), is also described. Starting with modal information contained on output tapes from RESIST computer runs, ARREST determines RMS stresses, deflections, and accelerations of shuttle panels with reusable surface insulation tiles. Both programs are applicable to stringer-stiffened structural panels with or without reusable surface insulation tiles.
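For orientation, the kind of quantity ARREST computes can be illustrated with the classical Miles single-mode approximation for a lightly damped panel under broadband acoustic excitation (a textbook estimate, not necessarily the formula ARREST implements):

\sigma_{rms} \approx \sigma_{0} \sqrt{\frac{\pi}{2}\, f_n\, Q\, W(f_n)},

where f_n is a panel natural frequency (here supplied by the RESIST modal analysis), Q the dynamic amplification factor, W(f_n) the excitation power spectral density at resonance, and \sigma_0 the modal stress per unit load.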
2013-01-01
Background Gene expression data are likely to be of significant help in the development of efficient cancer diagnosis and classification platforms. Recently, many researchers have analyzed gene expression data using diverse computational intelligence methods to select a small subset of informative genes for cancer classification. Many computational methods face difficulties in selecting such small subsets because of the small number of samples compared with the huge number of genes (high dimensionality), and because of irrelevant and noisy genes. Methods We propose an enhanced binary particle swarm optimization to perform the selection of small subsets of informative genes, which is significant for cancer classification. Particle speed, a new rule, and a modified sigmoid function are introduced in the proposed method to increase the probability that the bits in a particle's position are zero. The method was empirically applied to a suite of ten well-known benchmark gene expression data sets. Results The performance of the proposed method proved superior to previous related works, including the conventional version of binary particle swarm optimization (BPSO), in terms of classification accuracy and the number of selected genes. The proposed method also requires less computational time than BPSO. PMID:23617960
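A minimal sketch of the bit-update step in a binary PSO of this kind, with a hypothetical zero-biasing modification of the sigmoid (our illustrative variant under stated assumptions; the paper's exact rule may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def update_position(velocity, zero_bias=0.3):
    """Sample a binary gene-selection mask from particle velocities.

    Shrinking the sigmoid output lowers P(bit = 1), so fewer genes end up
    selected -- the zero-biasing idea the abstract describes. zero_bias is
    a hypothetical parameter of this sketch, not the paper's.
    """
    p_one = sigmoid(velocity) * (1.0 - zero_bias)
    return (rng.random(velocity.shape) < p_one).astype(int)

# Example: velocities for a 10-gene toy problem -> a sparse selection mask.
v = rng.normal(size=10)
print(update_position(v))
```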
XTALOPT version r11: An open-source evolutionary algorithm for crystal structure prediction
NASA Astrophysics Data System (ADS)
Avery, Patrick; Falls, Zackary; Zurek, Eva
2018-01-01
Version 11 of XTALOPT, an evolutionary algorithm for crystal structure prediction, has now been made available for download from the CPC library or the XTALOPT website, http://xtalopt.github.io. Whereas previous versions of XTALOPT were published under the GNU General Public License (GPL), the current version is made available under the 3-Clause BSD License, an open-source license recognized by the Open Source Initiative. Importantly, the new version can be executed via a command-line interface (i.e., it does not require the use of a graphical user interface). Moreover, the new version is written as a stand-alone program, rather than as an extension to AVOGADRO.
Package-X 2.0: A Mathematica package for the analytic calculation of one-loop integrals
NASA Astrophysics Data System (ADS)
Patel, Hiren H.
2017-09-01
This article summarizes new features and enhancements of the first major update of Package-X. Package-X 2.0 can now generate analytic expressions for arbitrarily high-rank dimensionally regulated tensor integrals with up to four distinct propagators, each with arbitrary integer weight, near an arbitrary even number of spacetime dimensions, giving UV-divergent, IR-divergent, and finite parts at (almost) any real-valued kinematic point. Additionally, it can generate multivariable Taylor series expansions of these integrals around any non-singular kinematic point to arbitrary order. All special functions and abbreviations output by Package-X 2.0 support Mathematica's arbitrary-precision evaluation capabilities to deal with issues of numerical stability. Finally, the tensor algebraic routines of Package-X have been polished and extended to support open fermion chains both on and off shell. The documentation (equivalent to over 100 printed pages) is accessed through Mathematica's Wolfram Documentation Center and contains information on all Package-X symbols, with over 300 basic usage examples, 3 project-scale tutorials, and instructions on linking to FEYNCALC and LOOPTOOLS.
Program files doi: http://dx.doi.org/10.17632/yfkwrd4d5t.1
Licensing provisions: CC BY 4.0
Programming language: Mathematica (Wolfram Language)
Journal reference of previous version: H. H. Patel, Comput. Phys. Commun. 197 (2015) 276
Does the new version supersede the previous version?: Yes
Summary of revisions: Extension to four-point one-loop integrals with higher powers of denominator factors, separate extraction of UV- and IR-divergent parts, testing for power IR divergences, construction of Taylor series expansions of one-loop integrals, numerical evaluation with arbitrary-precision arithmetic, manipulation of fermion chains, improved tensor algebraic routines, and much expanded documentation.
Nature of problem: Analytic calculation of one-loop integrals in relativistic quantum field theory.
Solution method: Passarino–Veltman reduction formula, Denner–Dittmaier reduction formulae, and additional algorithms described in the manuscript.
Restrictions: One-loop integrals are limited to those involving no more than four denominator factors.
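As a flavor of the solution method, the simplest Passarino–Veltman reduction expresses the rank-one two-point tensor integral in terms of the scalar functions A_0 and B_0 (written here in one common convention; signs and normalizations vary between references):

B^{\mu}(p; m_0, m_1) = p^{\mu} B_1, \qquad B_1 = \frac{1}{2p^2}\left[A_0(m_0) - A_0(m_1) - \left(p^2 + m_0^2 - m_1^2\right) B_0(p^2; m_0, m_1)\right].

Higher-rank and higher-point integrals, which Package-X 2.0 now handles with arbitrary propagator weights, follow from systematic generalizations of this contraction technique.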
NASA Technical Reports Server (NTRS)
Chanteur, G.; Khanfir, R.
1995-01-01
We have designed a fully compressible MHD code working on unstructured meshes in order to accurately compute sharp structures embedded in large-scale simulations. The code is based on a finite volume method making use of a kinetic flux splitting. A two-dimensional version of the code has been used to simulate the interaction of a moving interstellar medium, magnetized or unmagnetized, with a rotating and magnetized heliospheric plasma source. Being aware that these computations are not realistic due to the restriction to two dimensions, we present them to demonstrate the ability of this new code to handle this problem. An axisymmetric version, now under development, will be operational in a few months. Ultimately we plan to run a full 3D version.
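Schematically, in our own notation for a generic finite volume scheme of the kind described, each cell average is advanced by summing numerical fluxes over the faces of the unstructured cell:

\mathbf{u}_i^{n+1} = \mathbf{u}_i^{n} - \frac{\Delta t}{|\Omega_i|} \sum_{f \in \partial \Omega_i} \mathbf{F}^{*}_{f}\, S_f,

where |\Omega_i| is the cell volume, S_f the face area, and \mathbf{F}^{*}_{f} the interface flux, here supplied by the kinetic flux splitting.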
Spacecraft Orbit Design and Analysis (SODA). Version 2.0: User's guide
NASA Technical Reports Server (NTRS)
Stallcup, Scott S.; Davis, John S.; Zsoldos, Jeffrey S.
1991-01-01
The Spacecraft Orbit Design and Analysis (SODA) computer program, Version 2.0, is discussed. SODA is a spaceflight mission planning system that consists of six program modules integrated around a common database and user interface. SODA runs on a VAX/VMS computer with an Evans and Sutherland PS300 graphics workstation. In the current version, three program modules produce an interactive three-dimensional animation of one or more satellites in planetary orbit. Satellite visibility and sensor-coverage capabilities are also provided. Circular and rectangular, off-nadir, fixed and scanning sensors are supported. One module produces an interactive three-dimensional animation of the solar system. Another module calculates cumulative satellite sensor coverage and revisit time for one or more satellites. Currently, the Earth, Moon, and Mars systems are supported for all modules except the solar system module.
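As an illustration of the geometry behind such coverage and visibility computations (a standard spherical-Earth relation, not necessarily SODA's internal formulation), the Earth central half-angle \lambda covered by a satellite at altitude h, for a minimum elevation angle \varepsilon at the ground, is

\lambda = \arccos\left(\frac{R_E}{R_E + h}\cos\varepsilon\right) - \varepsilon,

with R_E the Earth radius; cumulative coverage and revisit times follow by sweeping this footprint along the orbit ground track.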
How is Version 6 different from earlier versions?
Atmospheric Science Data Center
2015-10-28
... integrated a priori CO profile. Second, the diagnostic 'Water Vapor Climatology Content' has been deleted. This diagnostic was included in previous products because of a data quality issue with the NCEP water vapor profiles. MERRA-based water vapor ...
76 FR 44965 - Notice of Revision of Standard Forms 39 and 39-A
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-27
... the outdated reference to the Federal Personnel Manual; and to include the current citation of 5 CFR....opm.gov/forms/html.sf.asp for agency use. These versions supersede all previous versions. Please...
UTM TCL 2.0 Software Version Description (SVD) Document
NASA Technical Reports Server (NTRS)
Mcguirk, Patrick
2017-01-01
This is the Unmanned Aircraft Systems (UAS) Traffic Management (UTM) Technical Capability Level (TCL) 2.0 Software Version Description (SVD) document. This UTM TCL 2.0 SVD describes the following four topics:
1. Software Release Contents: a listing of the files comprising this release
2. Installation Instructions: how to install the release and get it running
3. Changes Since Previous Release: general updates since the previous UTM release
4. Known Issues: known issues and limitations in this release
ImSET: Impact of Sector Energy Technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roop, Joseph M.; Scott, Michael J.; Schultz, Robert W.
2005-07-19
This version of the Impact of Sector Energy Technologies (ImSET) model represents the 'next generation' of the previously developed Visual Basic model (ImBUILD 2.0), which was developed in 2003 to estimate the macroeconomic impacts of energy-efficient technology in buildings. Specifically, a special-purpose version of the 1997 benchmark national Input-Output (I-O) model was designed to estimate the national employment and income effects of the deployment of energy-saving technologies developed by the Office of Energy Efficiency and Renewable Energy (EERE). In comparison with previous versions of the model, this version allows more complete and automated analysis of the essential features of energy efficiency investments in buildings, industry, transportation, and the electric power sectors. This version also incorporates improvements in the treatment of operations and maintenance costs and improves the treatment of financing of investment options. ImSET is also easier to use than extant macroeconomic simulation models and incorporates information developed by each of the EERE offices as part of the requirements of the Government Performance and Results Act.
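The computational core of any such I-O model is the Leontief relation x = (I − A)⁻¹ d linking a change in final demand to the total sectoral output required to satisfy it. A minimal sketch with a made-up three-sector coefficient matrix (all numbers are hypothetical and purely illustrative of the framework, not ImSET data):

```python
import numpy as np

# Hypothetical 3-sector technical-coefficient matrix A: A[i, j] is the input
# from sector i needed per unit of sector j's output (illustrative values).
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])

# Hypothetical final-demand shift, e.g. an efficiency investment moving
# spending between sectors.
d = np.array([1.0, -0.5, 0.0])

# Leontief solution: total output change across all sectors, including
# indirect (supply-chain) effects; employment and income effects follow by
# applying per-unit-output coefficients to x.
x = np.linalg.solve(np.eye(3) - A, d)
print(x)
```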