NASA Astrophysics Data System (ADS)
Kuipers, J.; Ueda, T.; Vermaseren, J. A. M.; Vollinga, J.
2013-05-01
We present version 4.0 of the symbolic manipulation system FORM. The most important new features are manipulation of rational polynomials and the factorization of expressions. Many other new functions and commands are also added; some of them are very general, while others are designed for building specific high level packages, such as one for Gröbner bases. Also new is the checkpoint facility, which allows for periodic backups during long calculations. Finally, FORM 4.0 has become available as open source under the GNU General Public License version 3. Program summaryProgram title: FORM. Catalogue identifier: AEOT_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 151599 No. of bytes in distributed program, including test data, etc.: 1 078 748 Distribution format: tar.gz Programming language: The FORM language. FORM itself is programmed in a mixture of C and C++. Computer: All. Operating system: UNIX, LINUX, Mac OS, Windows. Classification: 5. Nature of problem: FORM defines a symbolic manipulation language in which the emphasis lies on fast processing of very large formulas. It has been used successfully for many calculations in Quantum Field Theory and mathematics. In speed and size of formulas that can be handled it outperforms other systems typically by an order of magnitude. Special in this version: Version 4.0 contains many new features. Most important are factorization and rational arithmetic. The program has also become open source under the GPL. The code in CPC is for reference. You are encouraged to download the most recent sources from www.nikhef.nl/form/formcvs.php because of frequent bug fixes. Solution method: See "Nature of Problem", above. Additional comments: NOTE: The code in CPC is for reference. You are encouraged to download the most recent sources from www.nikhef.nl/form/formcvs.php because of frequent bug fixes.
QDENSITY—A Mathematica quantum computer simulation
NASA Astrophysics Data System (ADS)
Juliá-Díaz, Bruno; Burdis, Joseph M.; Tabakin, Frank
2009-03-01
This Mathematica 6.0 package is a simulation of a Quantum Computer. The program provides a modular, instructive approach for generating the basic elements that make up a quantum circuit. The main emphasis is on using the density matrix, although an approach using state vectors is also implemented in the package. The package commands are defined in Qdensity.m which contains the tools needed in quantum circuits, e.g., multiqubit kets, projectors, gates, etc. New version program summaryProgram title: QDENSITY 2.0 Catalogue identifier: ADXH_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADXH_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 26 055 No. of bytes in distributed program, including test data, etc.: 227 540 Distribution format: tar.gz Programming language: Mathematica 6.0 Operating system: Any which supports Mathematica; tested under Microsoft Windows XP, Macintosh OS X, and Linux FC4 Catalogue identifier of previous version: ADXH_v1_0 Journal reference of previous version: Comput. Phys. Comm. 174 (2006) 914 Classification: 4.15 Does the new version supersede the previous version?: Offers an alternative, more up to date, implementation Nature of problem: Analysis and design of quantum circuits, quantum algorithms and quantum clusters. Solution method: A Mathematica package is provided which contains commands to create and analyze quantum circuits. Several Mathematica notebooks containing relevant examples: Teleportation, Shor's Algorithm and Grover's search are explained in detail. A tutorial, Tutorial.nb is also enclosed. Reasons for new version: The package has been updated to make it fully compatible with Mathematica 6.0 Summary of revisions: The package has been updated to make it fully compatible with Mathematica 6.0 Running time: Most examples included in the package, e.g., the tutorial, Shor's examples, Teleportation examples and Grover's search, run in less than a minute on a Pentium 4 processor (2.6 GHz). The running time for a quantum computation depends crucially on the number of qubits employed.
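The density-matrix approach described above can be illustrated outside Mathematica as well. The following minimal NumPy sketch (not QDENSITY, and not using its Qdensity.m commands) builds a Bell state from a Hadamard and a CNOT gate and reads out the computational-basis probabilities from the diagonal of the density matrix; all names are illustrative.

```python
# Minimal density-matrix sketch (NumPy): H on qubit 0, then CNOT, starting from |00>.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                    # control = qubit 0, target = qubit 1

ket00 = np.zeros(4); ket00[0] = 1.0
rho = np.outer(ket00, ket00)                       # |00><00| as a density matrix

U = CNOT @ np.kron(H, I2)                          # full two-qubit circuit unitary
rho = U @ rho @ U.conj().T                         # rho -> U rho U^dagger

probs = np.real(np.diag(rho))                      # measurement probabilities
for label, p in zip(["00", "01", "10", "11"], probs):
    print(f"P(|{label}>) = {p:.3f}")               # expect 0.5, 0, 0, 0.5
```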
A new version of Visual tool for estimating the fractal dimension of images
NASA Astrophysics Data System (ADS)
Grossu, I. V.; Felea, D.; Besliu, C.; Jipa, Al.; Bordeianu, C. C.; Stan, E.; Esanu, T.
2010-04-01
This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images (Grossu et al., 2009 [1]). The earlier version was limited to bi-dimensional sets of points, stored in bitmap files. The application has been extended to also work with comma separated values files and three-dimensional images. New version program summaryProgram title: Fractal Analysis v02 Catalogue identifier: AEEG_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEG_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 9999 No. of bytes in distributed program, including test data, etc.: 4 366 783 Distribution format: tar.gz Programming language: MS Visual Basic 6.0 Computer: PC Operating system: MS Windows 98 or later RAM: 30 M Classification: 14 Catalogue identifier of previous version: AEEG_v1_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1999 Does the new version supersede the previous version?: Yes Nature of problem: Estimating the fractal dimension of 2D and 3D images. Solution method: Optimized implementation of the box-counting algorithm. Reasons for new version: The previous version was limited to bitmap image files. The new application was extended in order to work with objects stored in comma separated values (csv) files. The main advantages are: Easier integration with other applications (csv is a widely used, simple text file format); Fewer resources consumed and improved performance (only the information of interest, the "black points", is stored); Higher resolution (the point coordinates are loaded into Visual Basic double variables [2]); Possibility of storing three-dimensional objects (e.g. the 3D Sierpinski gasket). In this version the optimized box-counting algorithm [1] was extended to the three-dimensional case. Summary of revisions: The application interface was changed from SDI (single document interface) to MDI (multi-document interface). One form was added in order to provide a graphical user interface for the new functionalities (fractal analysis of 2D and 3D images stored in csv files). Additional comments: User friendly graphical interface; Easy deployment mechanism. Running time: To a first approximation, the algorithm is linear. References: [1] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Comput. Phys. Comm. 180 (2009) 1999-2001. [2] F. Balena, Programming Microsoft Visual Basic 6.0, Microsoft Press, US, 1999.
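As a minimal sketch of the box-counting idea behind this program (in Python/NumPy rather than Visual Basic, and for a 2D bitmap only), the snippet below generates a Sierpinski-triangle bitmap with the bitwise trick (i & j) == 0 and fits the slope of log N(s) versus log(1/s); the estimated dimension should be close to ln 3/ln 2. The image size and box sizes are illustrative choices.

```python
import numpy as np

N = 512                                           # image side, a power of two
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
img = (i & j) == 0                                # True = "black" pixel (Sierpinski pattern)

sizes = [2 ** k for k in range(1, 8)]             # box edges: 2, 4, ..., 128 pixels
counts = []
for s in sizes:
    # split into (N/s) x (N/s) blocks of s x s pixels and count non-empty blocks
    blocks = img.reshape(N // s, s, N // s, s).any(axis=(1, 3))
    counts.append(blocks.sum())

slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
print("box-counting dimension estimate:", slope)
print("theoretical value ln3/ln2      :", np.log(3) / np.log(2))
```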
Program package for multicanonical simulations of U(1) lattice gauge theory-Second version
NASA Astrophysics Data System (ADS)
Bazavov, Alexei; Berg, Bernd A.
2013-03-01
A new version STMCMUCA_V1_1 of our program package is available. It eliminates compatibility problems of our Fortran 77 code, originally developed for the g77 compiler, with Fortran 90 and 95 compilers. New version program summaryProgram title: STMC_U1MUCA_v1_1 Catalogue identifier: AEET_v1_1 Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html Programming language: Fortran 77 compatible with Fortran 90 and 95 Computers: Any capable of compiling and executing Fortran code Operating systems: Any capable of compiling and executing Fortran code RAM: 10 MB and up depending on lattice size used No. of lines in distributed program, including test data, etc.: 15059 No. of bytes in distributed program, including test data, etc.: 215733 Keywords: Markov chain Monte Carlo, multicanonical, Wang-Landau recursion, Fortran, lattice gauge theory, U(1) gauge group, phase transitions of continuous systems Classification: 11.5 Catalogue identifier of previous version: AEET_v1_0 Journal reference of previous version: Computer Physics Communications 180 (2009) 2339-2347 Does the new version supersede the previous version?: Yes Nature of problem: Efficient Markov chain Monte Carlo simulation of U(1) lattice gauge theory (or other continuous systems) close to its phase transition. Measurements and analysis of the action per plaquette, the specific heat, Polyakov loops and their structure factors. Solution method: Multicanonical simulations with an initial Wang-Landau recursion to determine suitable weight factors. Reweighting to physical values using logarithmic coding and calculating jackknife error bars. Reasons for the new version: The previous version was developed as Fortran 77 code for the g77 compiler. Compiler errors were encountered with Fortran 90 and Fortran 95 compilers (specified below). Summary of revisions: epsilon=one/10**10 is replaced by epsilon=one/10.0D10 in the parameter statements of the subroutines u1_bmha.f, u1_mucabmha.f, u1wl_backup.f, u1wlread_backup.f of the folder Libs/U1_par. For the tested compilers, script files are added in the folder ExampleRuns and readme.txt files are now provided in all subfolders of ExampleRuns. The gnuplot driver files produced by the routine hist_gnu.f of Libs/Fortran are adapted to the syntax required by gnuplot version 4.0 and higher. Restrictions: Due to the use of explicit real*8 initialization the conversion into real*4 will require extra changes besides replacing the implicit.sta file by its real*4 version. Unusual features: The programs have to be compiled and run using the script files like those contained in the folder ExampleRuns, as explained in the original paper. Running time: The prepared test runs took up to 74 minutes to execute on a 2 GHz PC.
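The Wang-Landau recursion used here to determine multicanonical weights can be illustrated on a much smaller system than U(1) lattice gauge theory. The Python sketch below (not the Fortran package) runs the recursion on a periodic 1D Ising chain, where the exact density of states g(E = -N + 2k) = 2 C(N, k) for an even number k of unsatisfied bonds provides a check; the chain length, flatness criterion and stopping value of ln f are illustrative choices.

```python
import math
import random

N = 12
spins = [1] * N
E = -N                                   # energy of the all-up configuration
lng = {}                                 # running estimate of ln g(E)
hist = {}
lnf = 1.0

def flat(h):
    v = list(h.values())
    return min(v) > 0.8 * (sum(v) / len(v))

random.seed(1)
while lnf > 1e-3:
    for _ in range(10000):
        i = random.randrange(N)
        dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
        Enew = E + dE
        # accept with probability min(1, g(E)/g(Enew))
        if random.random() < math.exp(lng.get(E, 0.0) - lng.get(Enew, 0.0)):
            spins[i] *= -1
            E = Enew
        lng[E] = lng.get(E, 0.0) + lnf
        hist[E] = hist.get(E, 0) + 1
    if flat(hist):
        lnf *= 0.5                       # refine the modification factor
        hist = {}

# normalise so that the ground state has degeneracy 2, compare with 2*C(N,k)
shift = math.log(2.0) - lng[-N]
for E in sorted(lng):
    k = (E + N) // 2
    print(E, f"{math.exp(lng[E] + shift):10.1f}", 2 * math.comb(N, k))
```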
TIM, a ray-tracing program for METATOY research and its dissemination
NASA Astrophysics Data System (ADS)
Lambert, Dean; Hamilton, Alasdair C.; Constable, George; Snehanshu, Harsh; Talati, Sharvil; Courtial, Johannes
2012-03-01
TIM (The Interactive METATOY) is a ray-tracing program specifically tailored towards our research in METATOYs, which are optical components that appear to be able to create wave-optically forbidden light-ray fields. For this reason, TIM possesses features not found in other ray-tracing programs. TIM can either be used interactively or by modifying the openly available source code; in both cases, it can easily be run as an applet embedded in a web page. Here we describe the basic structure of TIM's source code and how to extend it, and we give examples of how we have used TIM in our own research. Program summaryProgram title: TIM Catalogue identifier: AEKY_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKY_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License No. of lines in distributed program, including test data, etc.: 124 478 No. of bytes in distributed program, including test data, etc.: 4 120 052 Distribution format: tar.gz Programming language: Java Computer: Any computer capable of running the Java Virtual Machine (JVM) 1.6 Operating system: Any; developed under Mac OS X Version 10.6 RAM: Typically 145 MB (interactive version running under Mac OS X Version 10.6) Classification: 14, 18 External routines: JAMA [1] (source code included) Nature of problem: Visualisation of scenes that include scene objects that create wave-optically forbidden light-ray fields. Solution method: Ray tracing. Unusual features: Specifically designed to visualise wave-optically forbidden light-ray fields; can visualise ray trajectories; can visualise geometric optic transformations; can create anaglyphs (for viewing with coloured "3D glasses") and random-dot autostereograms of the scene; integrable into web pages. Running time: Problem-dependent; typically seconds for a simple scene.
The orbifolder: A tool to study the low-energy effective theory of heterotic orbifolds
NASA Astrophysics Data System (ADS)
Nilles, H. P.; Ramos-Sánchez, S.; Vaudrevange, P. K. S.; Wingerter, A.
2012-06-01
The orbifolder is a program developed in C++ that computes and analyzes the low-energy effective theory of heterotic orbifold compactifications. The program includes routines to compute the massless spectrum, to identify the allowed couplings in the superpotential, to automatically generate large sets of orbifold models, to identify phenomenologically interesting models (e.g. MSSM-like models) and to analyze their vacuum configurations. Program summaryProgram title: orbifolder Catalogue identifier: AELR_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELR_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 145 572 No. of bytes in distributed program, including test data, etc.: 930 517 Distribution format: tar.gz Programming language:C++ Computer: Personal computer Operating system: Tested on Linux (Fedora 15, Ubuntu 11, SuSE 11) Word size: 32 bits or 64 bits Classification: 11.1 External routines: Boost (http://www.boost.org/), GSL (http://www.gnu.org/software/gsl/) Nature of problem: Calculating the low-energy spectrum of heterotic orbifold compactifications. Solution method: Quadratic equations on a lattice; representation theory; polynomial algebra. Running time: Less than a second per model.
A general spectral method for the numerical simulation of one-dimensional interacting fermions
NASA Astrophysics Data System (ADS)
Clason, Christian; von Winckel, Gregory
2012-08-01
This software implements a general framework for the direct numerical simulation of systems of interacting fermions in one spatial dimension. The approach is based on a specially adapted nodal spectral Galerkin method, where the basis functions are constructed to obey the antisymmetry relations of fermionic wave functions. An efficient Matlab program for the assembly of the stiffness and potential matrices is presented, which exploits the combinatorial structure of the sparsity pattern arising from this discretization to achieve optimal run-time complexity. This program allows the accurate discretization of systems with multiple fermions subject to arbitrary potentials, e.g., for verifying the accuracy of multi-particle approximations such as Hartree-Fock in the few-particle limit. It can be used for eigenvalue computations or numerical solutions of the time-dependent Schrödinger equation. The new version includes a Python implementation of the presented approach. New version program summaryProgram title: assembleFermiMatrix Catalogue identifier: AEKO_v1_1 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKO_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 332 No. of bytes in distributed program, including test data, etc.: 5418 Distribution format: tar.gz Programming language: MATLAB/GNU Octave, Python Computer: Any architecture supported by MATLAB, GNU Octave or Python Operating system: Any supported by MATLAB, GNU Octave or Python RAM: Depends on the data Classification: 4.3, 2.2. External routines: Python 2.7+, NumPy 1.3+, SciPy 0.10+ Catalogue identifier of previous version: AEKO_v1_0 Journal reference of previous version: Comput. Phys. Commun. 183 (2012) 405 Does the new version supersede the previous version?: Yes Nature of problem: The direct numerical solution of the multi-particle one-dimensional Schrödinger equation in a quantum well is challenging due to the exponential growth in the number of degrees of freedom with increasing particles. Solution method: A nodal spectral Galerkin scheme is used where the basis functions are constructed to obey the antisymmetry relations of the fermionic wave function. The assembly of these matrices is performed efficiently by exploiting the combinatorial structure of the sparsity patterns. Reasons for new version: A Python implementation is now included. Summary of revisions: Added a Python implementation; small documentation fixes in Matlab implementation. No change in features of the package. Restrictions: Only one-dimensional computational domains with homogeneous Dirichlet or periodic boundary conditions are supported. Running time: Seconds to minutes.
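The core idea, a spectral basis built to be antisymmetric under particle exchange, can be sketched briefly in Python/NumPy (this is not the distributed assembly code, and it assumes a particle-in-a-box sine basis with hbar = m = 1). The snippet checks antisymmetry and orthonormality of the two-fermion basis functions by quadrature and lists the non-interacting two-fermion levels E_i + E_j with i < j.

```python
import numpy as np
from itertools import combinations

M, nmax = 400, 5
x = np.linspace(0.0, 1.0, M)
phi = np.array([np.sqrt(2.0) * np.sin(n * np.pi * x) for n in range(1, nmax + 1)])
E1 = np.array([0.5 * (n * np.pi) ** 2 for n in range(1, nmax + 1)])

def slater(i, j):
    # Phi_ij(x1, x2) = (phi_i(x1) phi_j(x2) - phi_j(x1) phi_i(x2)) / sqrt(2)
    return (np.outer(phi[i], phi[j]) - np.outer(phi[j], phi[i])) / np.sqrt(2.0)

pairs = list(combinations(range(nmax), 2))        # only i < j survive antisymmetry

P01 = slater(0, 1)
print("antisymmetric:", np.allclose(P01, -P01.T))
norm = np.trapz(np.trapz(P01 * P01, x, axis=1), x)
overlap = np.trapz(np.trapz(P01 * slater(0, 2), x, axis=1), x)
print("norm ~ 1:", norm, "  overlap ~ 0:", overlap)

# for non-interacting fermions the levels are simply sums of 1D eigenvalues
print("lowest two-fermion levels:", sorted(E1[i] + E1[j] for i, j in pairs)[:3])
```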
Hyper-Fractal Analysis: A visual tool for estimating the fractal dimension of 4D objects
NASA Astrophysics Data System (ADS)
Grossu, I. V.; Grossu, I.; Felea, D.; Besliu, C.; Jipa, Al.; Esanu, T.; Bordeianu, C. C.; Stan, E.
2013-04-01
This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images and 3D objects (Grossu et al. (2010) [1]). The program has been extended to work with four-dimensional objects stored in comma separated values files. This might be of interest in biomedicine, for analyzing the evolution in time of three-dimensional images. New version program summaryProgram title: Hyper-Fractal Analysis (Fractal Analysis v03) Catalogue identifier: AEEG_v3_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEG_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 745761 No. of bytes in distributed program, including test data, etc.: 12544491 Distribution format: tar.gz Programming language: MS Visual Basic 6.0 Computer: PC Operating system: MS Windows 98 or later RAM: 100M Classification: 14 Catalogue identifier of previous version: AEEG_v2_0 Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 831-832 Does the new version supersede the previous version? Yes Nature of problem: Estimating the fractal dimension of 4D images. Solution method: Optimized implementation of the 4D box-counting algorithm. Reasons for new version: Inspired by existing applications of 3D fractals in biomedicine [3], we extended the optimized version of the box-counting algorithm [1, 2] to the four-dimensional case. This might be of interest in analyzing the evolution in time of 3D images. The box-counting algorithm was extended in order to support 4D objects, stored in comma separated values files. A new form was added for generating 2D, 3D, and 4D test data. The application was tested on 4D objects with known dimension, e.g. the Sierpinski hypertetrahedron gasket, Df=ln(5)/ln(2) (Fig. 1). The algorithm could be extended, with minimum effort, to a higher number of dimensions. Easy integration with other applications is achieved by using the very simple comma separated values file format for storing multi-dimensional images. A χ2 test was implemented as a criterion for deciding whether an object is fractal or not. A user friendly graphical interface is provided. Fig. 1. Hyper-Fractal Analysis test on the Sierpinski hypertetrahedron 4D gasket (Df=ln(5)/ln(2)≅2.32). Running time: To a first approximation, the algorithm is linear [2]. References: [1] I.V. Grossu, D. Felea, C. Besliu, Al. Jipa, C.C. Bordeianu, E. Stan, T. Esanu, Comput. Phys. Comm. 181 (2010) 831-832. [2] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Comput. Phys. Comm. 180 (2009) 1999-2001. [3] J. Ruiz de Miras, J. Navas, P. Villoslada, F.J. Esteban, Computer Methods and Programs in Biomedicine 104 (3) (2011) 452-460.
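The 4D test case mentioned above can be reproduced conceptually with a short point-cloud sketch in Python/NumPy (not the Visual Basic program): points of a Sierpinski hypertetrahedron gasket are generated by a chaos game on 5 affinely independent vertices in 4D, the boxes are counted at several edge lengths, and a log-log fit yields both the dimension estimate and a simple linearity residual. The vertex choice, point count and box sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
verts = np.vstack([np.zeros(4), np.eye(4)])        # 5 vertices of a 4-simplex
pts = np.zeros((200000, 4))
p = np.full(4, 0.2)
for n in range(len(pts)):
    p = 0.5 * (p + verts[rng.integers(5)])         # jump halfway to a random vertex
    pts[n] = p

eps_list = [2.0 ** -k for k in range(2, 7)]        # box edge lengths
counts = [len(np.unique(np.floor(pts / eps), axis=0)) for eps in eps_list]

logx, logy = np.log(1.0 / np.array(eps_list)), np.log(counts)
slope, intercept = np.polyfit(logx, logy, 1)
residual = np.sum((logy - (slope * logx + intercept)) ** 2)   # crude linearity check
print("dimension estimate:", slope, " expected ln5/ln2:", np.log(5) / np.log(2))
print("sum of squared log-log residuals:", residual)
```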
CIF2Cell: Generating geometries for electronic structure programs
NASA Astrophysics Data System (ADS)
Björkman, Torbjörn
2011-05-01
The CIF2Cell program generates the geometrical setup for a number of electronic structure programs based on the crystallographic information in a Crystallographic Information Framework (CIF) file. The program will retrieve the space group number, Wyckoff positions and crystallographic parameters, make a sensible choice for Bravais lattice vectors (primitive or principal cell) and generate all atomic positions. Supercells can be generated and alloys are handled gracefully. The code currently has output interfaces to the electronic structure programs ABINIT, CASTEP, CPMD, Crystal, Elk, Exciting, EMTO, Fleur, RSPt, Siesta and VASP. Program summaryProgram title: CIF2Cell Catalogue identifier: AEIM_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIM_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU GPL version 3 No. of lines in distributed program, including test data, etc.: 12 691 No. of bytes in distributed program, including test data, etc.: 74 933 Distribution format: tar.gz Programming language: Python (versions 2.4-2.7) Computer: Any computer that can run Python (versions 2.4-2.7) Operating system: Any operating system that can run Python (versions 2.4-2.7) Classification: 7.3, 7.8, 8 External routines: PyCIFRW [1] Nature of problem: Generate the geometrical setup of a crystallographic cell for a variety of electronic structure programs from data contained in a CIF file. Solution method: The CIF file is parsed using routines contained in the library PyCIFRW [1], and crystallographic as well as bibliographic information is extracted. The program then generates the principal cell from symmetry information, crystal parameters, space group number and Wyckoff sites. Reduction to a primitive cell is then performed, and the resulting cell is output to suitably named files along with documentation of the information source generated from any bibliographic information contained in the CIF file. If the space group symmetries are not present in the CIF file, the program will fall back on internal tables, so only the minimal input of space group, crystal parameters and Wyckoff positions is required. Additional key features are handling of alloys and supercell generation. Additional comments: Currently implements support for the following general purpose electronic structure programs: ABINIT [2,3], CASTEP [4], CPMD [5], Crystal [6], Elk [7], exciting [8], EMTO [9], Fleur [10], RSPt [11], Siesta [12] and VASP [13-16]. Running time: The examples provided in the distribution take only seconds to run.
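The step "generate all atomic positions from symmetry operators and a Wyckoff site" can be sketched compactly in Python; the snippet below is not CIF2Cell and does not parse a CIF file with PyCIFRW, it simply applies a hand-written set of operators (the general positions of space group P2_1/c, No. 14) to a fractional coordinate and removes duplicates after wrapping into the unit cell.

```python
import numpy as np

ops = [
    (np.eye(3),              np.zeros(3)),
    (np.diag([-1,  1, -1]),  np.array([0.0, 0.5, 0.5])),   # -x, y+1/2, -z+1/2
    (np.diag([-1, -1, -1]),  np.zeros(3)),                  # -x, -y, -z
    (np.diag([ 1, -1,  1]),  np.array([0.0, 0.5, 0.5])),   #  x, -y+1/2, z+1/2
]

def equivalent_positions(site, ops, tol=1e-6):
    out = []
    for R, t in ops:
        pos = (R @ np.asarray(site) + t) % 1.0              # wrap into the unit cell
        if not any(np.allclose(pos, q, atol=tol) for q in out):
            out.append(pos)
    return out

site = np.array([0.13, 0.25, 0.40])                         # a general fractional position
for p in equivalent_positions(site, ops):
    print(np.round(p, 4))                                   # expect 4 distinct positions
```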
Modular reweighting software for statistical mechanical analysis of biased equilibrium data
NASA Astrophysics Data System (ADS)
Sindhikara, Daniel J.
2012-07-01
Here a simple, useful, modular approach and software suite designed for statistical reweighting and analysis of equilibrium ensembles is presented. Statistical reweighting is useful and sometimes necessary for analysis of equilibrium enhanced sampling methods, such as umbrella sampling or replica exchange, and also in experimental cases where biasing factors are explicitly known. Essentially, statistical reweighting allows extrapolation of data from one or more equilibrium ensembles to another. Here, the fundamental separable steps of statistical reweighting are broken up into modules, allowing for application to the general case and avoiding the black-box nature of some “all-inclusive” reweighting programs. Additionally, the programs included are, by design, written with few dependencies. The compilers required are either pre-installed on most systems or freely available for download with minimal trouble. Examples of the use of this suite applied to umbrella sampling and replica exchange molecular dynamics simulations will be shown along with advice on how to apply it in the general case. New version program summaryProgram title: Modular reweighting version 2 Catalogue identifier: AEJH_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEJH_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 179 118 No. of bytes in distributed program, including test data, etc.: 8 518 178 Distribution format: tar.gz Programming language: C++, Python 2.6+, Perl 5+ Computer: Any Operating system: Any RAM: 50-500 MB Supplementary material: An updated version of the original manuscript (Comput. Phys. Commun. 182 (2011) 2227) is available Classification: 4.13 Catalogue identifier of previous version: AEJH_v1_0 Journal reference of previous version: Comput. Phys. Commun. 182 (2011) 2227 Does the new version supersede the previous version?: Yes Nature of problem: While equilibrium reweighting is ubiquitous, there are no public programs available to perform the reweighting in the general case. Further, specific programs often suffer from many library dependencies and numerical instability. Solution method: This package is written in a modular format that allows for easy applicability of reweighting in the general case. Modules are small, numerically stable, and require minimal libraries. Reasons for new version: Some minor bugs were fixed, some upgrades were needed, and error analysis was added. analyzeweight.py/analyzeweight.py2 has been replaced by “multihist.py”. This new program performs all the functions of its predecessor while being versatile enough to handle other types of histograms and probability analysis. “bootstrap.py” was added. This script performs basic bootstrap resampling allowing for error analysis of data. “avg_dev_distribution.py” was added. This program computes the averages and standard deviations of multiple distributions, making error analysis (e.g. from bootstrap resampling) easier to visualize. WRE.cpp was slightly modified purely for cosmetic reasons. The manual was updated for clarity and to reflect version updates. Examples were removed from the manual in favor of online tutorials (packaged examples remain). Examples were updated to reflect the new format. An additional example is included to demonstrate error analysis. Running time: Preprocessing scripts 1-5 minutes, WHAM engine <1 minute, postprocess script ∼1-5 minutes.
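The basic "extrapolate data from one equilibrium ensemble to another" step can be shown in a few lines of Python/NumPy; this is not the packaged WHAM modules, just single-ensemble temperature reweighting of a toy harmonic system U = x^2/2, for which the exact target average <x^2> = 1/beta is known. Sample counts and temperatures are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
beta1, beta2 = 1.0, 1.3
U = lambda x: 0.5 * x**2

x = rng.normal(scale=1.0 / np.sqrt(beta1), size=200000)   # exact samples at beta1

w = np.exp(-(beta2 - beta1) * U(x))                        # reweighting factors
x2_reweighted = np.sum(w * x**2) / np.sum(w)

print("reweighted <x^2> at beta2:", x2_reweighted)
print("exact      <x^2> at beta2:", 1.0 / beta2)
print("effective sample size    :", np.sum(w)**2 / np.sum(w**2))
```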
Critic: a new program for the topological analysis of solid-state electron densities
NASA Astrophysics Data System (ADS)
Otero-de-la-Roza, A.; Blanco, M. A.; Pendás, A. Martín; Luaña, Víctor
2009-01-01
In this paper we introduce CRITIC, a new program for the topological analysis of the electron densities of crystalline solids. Two different versions of the code are provided, one adapted to the LAPW (Linear Augmented Plane Wave) density calculated by the WIEN2K package and the other to the ab initio Perturbed Ion (aiPI) density calculated with the PI7 code. Using the converged ground state densities, CRITIC can locate their critical points, determine atomic basins and integrate properties within them, and generate several graphical representations which include topological atomic basins and primary bundles, contour maps of ρ and ∇ρ, vector maps of ∇ρ, chemical graphs, etc. Program summaryProgram title: CRITIC Catalogue identifier: AECB_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AECB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL, version 3 No. of lines in distributed program, including test data, etc.: 1 206 843 No. of bytes in distributed program, including test data, etc.: 12 648 065 Distribution format: tar.gz Programming language: FORTRAN 77 and 90 Computer: Any computer capable of compiling Fortran Operating system: Unix, GNU/Linux Classification: 7.3 Nature of problem: Topological analysis of the electron density in periodic solids. Solution method: The automatic localization of the electron density critical points is based on a recursive partitioning of the Wigner-Seitz cell into tetrahedra followed by a Newton search from significant points on each tetrahedron. Plotting of and integration on the atomic basins is currently based on a new implementation of Keith's promega algorithm. Running time: Variable, depending on the task. From seconds to a few minutes for the localization of critical points. Hours to days for the determination of the atomic basins' shape and properties. Times correspond to a typical 2007 PC.
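The Newton-search ingredient can be illustrated with a toy promolecular density built from two Gaussians (Python/NumPy, finite-difference derivatives); this is not CRITIC and does not use LAPW or aiPI densities or the recursive tetrahedral partitioning. Starting between the two "atoms", the iteration should land on the bond critical point at the midpoint, whose Hessian signature is (3,-1).

```python
import numpy as np

A = np.array([0.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.4])
rho = lambda r: np.exp(-2.0 * np.sum((r - A)**2)) + np.exp(-2.0 * np.sum((r - B)**2))

def grad(f, r, h=1e-5):
    g = np.zeros(3)
    for k in range(3):
        e = np.zeros(3); e[k] = h
        g[k] = (f(r + e) - f(r - e)) / (2 * h)
    return g

def hessian(f, r, h=1e-4):
    H = np.zeros((3, 3))
    for k in range(3):
        e = np.zeros(3); e[k] = h
        H[:, k] = (grad(f, r + e) - grad(f, r - e)) / (2 * h)
    return H

r = np.array([0.05, -0.03, 0.60])                 # start between the two "atoms"
for _ in range(30):
    g = grad(rho, r)
    if np.linalg.norm(g) < 1e-8:
        break
    r = r - np.linalg.solve(hessian(rho, r), g)   # Newton step on grad(rho) = 0

eigs = np.linalg.eigvalsh(hessian(rho, r))
print("critical point:", np.round(r, 6))          # expect the midpoint (0, 0, 0.7)
print("Hessian eigenvalue signs (bond CP: -,-,+):", np.sign(eigs))
```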
Multithreaded transactions in scientific computing. The Growth06_v2 program
NASA Astrophysics Data System (ADS)
Daniluk, Andrzej
2009-07-01
Writing a concurrent program can be more difficult than writing a sequential program. A programmer needs to think about synchronization, race conditions and shared variables. Transactions help reduce the inconvenience of using threads. A transaction is an abstraction that allows programmers to group a sequence of actions in the program into a logical, higher-level computation unit. This paper presents a new version of the GROWTHGr and GROWTH06 programs. New version program summaryProgram title: GROWTH06_v2 Catalogue identifier: ADVL_v2_1 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADVL_v2_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 65 255 No. of bytes in distributed program, including test data, etc.: 865 985 Distribution format: tar.gz Programming language: Object Pascal Computer: Pentium-based PC Operating system: Windows 9x, XP, NT, Vista RAM: more than 1 MB Classification: 4.3, 7.2, 6.2, 8, 14 Catalogue identifier of previous version: ADVL_v2_0 Journal reference of previous version: Comput. Phys. Comm. 175 (2006) 678 Does the new version supersede the previous version?: Yes Nature of problem: The programs compute the RHEED intensities during the growth of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The computations are based on the use of kinematical diffraction theory. Solution method: Epitaxial growth of thin films is modelled by a set of non-linear differential equations [1]. The Runge-Kutta method with adaptive stepsize control was used for solving the initial value problem for non-linear differential equations [2]. Reasons for new version: According to the users' suggestions, the functionality of the program has been improved. Moreover, new use cases have been added which make the handling of the program easier and more efficient than the previous ones [3]. Summary of revisions: The design pattern (See Fig. 2 of Ref. [3]) has been modified according to the scheme shown in Fig. 1. A graphical user interface (GUI) for the program has been reconstructed. Fig. 2 presents a hybrid diagram of a GUI that shows how onscreen objects connect to use cases. The program has been compiled with English/USA regional and language options. Note: The figures mentioned above are contained in the program distribution file. Unusual features: The program is distributed in the form of source project GROWTH06_v2.dpr with associated files, and should be compiled using Borland Delphi compilers versions 6 or later (including Borland Developer Studio 2006 and CodeGear compilers for Delphi). Additional comments: Two figures are included in the program distribution file. These are captioned "Static classes model for the Transaction design pattern" and "A model of a window that shows how onscreen objects connect to use cases". Running time: The typical running time is machine and user-parameter dependent. References: [1] A. Daniluk, Comput. Phys. Comm. 170 (2005) 265. [2] W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, Numerical Recipes in Pascal: The Art of Scientific Computing, first ed., Cambridge University Press, 1989. [3] M. Brzuszek, A. Daniluk, Comput. Phys. Comm. 175 (2006) 678.
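The numerical ingredient named in the solution method, adaptive-stepsize Runge-Kutta integration of a small non-linear system, can be sketched in Python/SciPy (not in Object Pascal, and the toy rate equations below are illustrative, not the actual GROWTH06 growth model of Ref. [1]).

```python
import numpy as np
from scipy.integrate import solve_ivp

F, k_nuc, k_att = 1.0, 0.5, 2.0        # deposition flux and toy rate constants

def rhs(t, y):
    n1, n_isl = y                      # adatom density, island density (arbitrary units)
    dn1 = F - 2.0 * k_nuc * n1**2 - k_att * n1 * n_isl
    dni = k_nuc * n1**2
    return [dn1, dni]

sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0], method="RK45",
                rtol=1e-8, atol=1e-10, dense_output=True)

print("adaptive steps taken:", len(sol.t) - 1)
print("final state:", sol.y[:, -1])
print("state at t = 5:", sol.sol(5.0))
```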
xPerm: fast index canonicalization for tensor computer algebra
NASA Astrophysics Data System (ADS)
Martín-García, José M.
2008-10-01
We present a very fast implementation of the Butler-Portugal algorithm for index canonicalization with respect to permutation symmetries. It is called xPerm, and has been written as a combination of a Mathematica package and a C subroutine. The latter performs the most demanding parts of the computations and can be linked from any other program or computer algebra system. We demonstrate with tests and timings the effectively polynomial performance of the Butler-Portugal algorithm with respect to the number of indices, though we also show a case in which it is exponential. Our implementation handles generic tensorial expressions with several dozen indices in hundredths of a second, or one hundred indices in a few seconds, clearly outperforming all other current canonicalizers. The code has already been under intensive testing for several years and has been essential in recent investigations in large-scale tensor computer algebra. Program summaryProgram title: xPerm Catalogue identifier: AEBH_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEBH_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 93 582 No. of bytes in distributed program, including test data, etc.: 1 537 832 Distribution format: tar.gz Programming language: C and Mathematica (version 5.0 or higher) Computer: Any computer running C and Mathematica (version 5.0 or higher) Operating system: Linux, Unix, Windows XP, MacOS RAM: 20 Mbyte Word size: 64 or 32 bits Classification: 1.5, 5 Nature of problem: Canonicalization of indexed expressions with respect to permutation symmetries. Solution method: The Butler-Portugal algorithm. Restrictions: Multiterm symmetries are not considered. Running time: A few seconds with generic expressions of up to 100 indices. The xPermDoc.nb notebook supplied with the distribution takes approximately one and a half hours to execute in full.
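What "canonicalization with respect to permutation symmetries" means can be shown with a deliberately naive Python sketch: the monoterm symmetries of a Riemann tensor R_abcd are closed into a group of signed slot permutations and the lexicographically smallest image is taken as the canonical form. This is NOT the Butler-Portugal algorithm (which avoids enumerating the group and therefore scales to many indices); it is a brute-force illustration feasible only for a handful of slots.

```python
from itertools import product

# generators as (slot permutation, sign): R_abcd = -R_bacd = -R_abdc = +R_cdab
gens = [((1, 0, 2, 3), -1), ((0, 1, 3, 2), -1), ((2, 3, 0, 1), +1)]

def compose(p, q):
    # acting with q first and then p on an index tuple L, where "apply p" means L[p[i]]
    return tuple(q[p[i]] for i in range(len(p)))

def close_group(gens):
    group = {((0, 1, 2, 3), +1)}
    frontier = set(gens)
    while frontier:
        new = set()
        for (p, s), (q, t) in product(frontier | group, gens):
            g = (compose(p, q), s * t)
            if g not in group and g not in frontier:
                new.add(g)
        group |= frontier
        frontier = new
    return group

def canonicalize(indices, group):
    return min((tuple(indices[i] for i in p), s) for p, s in group)

group = close_group(gens)
print("group size:", len(group))                       # 8 monoterm symmetries
print(canonicalize(("b", "a", "d", "c"), group))       # -> (('a','b','c','d'), 1)
print(canonicalize(("c", "a", "d", "b"), group))
```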
MinFinder v2.0: An improved version of MinFinder
NASA Astrophysics Data System (ADS)
Tsoulos, Ioannis G.; Lagaris, Isaac E.
2008-10-01
A new version of the "MinFinder" program is presented that offers an augmented linking procedure for Fortran-77 subprograms, two additional stopping rules and a new start-point rejection mechanism that saves a significant portion of gradient and function evaluations. The method is applied to a set of standard test functions and the results are reported. New version program summaryProgram title: MinFinder v2.0 Catalogue identifier: ADWU_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADWU_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC Licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 14 150 No. of bytes in distributed program, including test data, etc.: 218 144 Distribution format: tar.gz Programming language used: GNU C++, GNU FORTRAN, GNU C Computer: The program is designed to be portable in all systems running the GNU C++ compiler Operating system: Linux, Solaris, FreeBSD RAM: 200 000 bytes Classification: 4.9 Catalogue identifier of previous version: ADWU_v1_0 Journal reference of previous version: Computer Physics Communications 174 (2006) 166-179 Does the new version supersede the previous version?: Yes Nature of problem: A multitude of problems in science and engineering are often reduced to minimizing a function of many variables. There are instances where a local optimum does not correspond to the desired physical solution and hence the search for a better solution is required. Local optimization techniques can be trapped in any local minimum. Global optimization is then the appropriate tool. For example, when solving a non-linear system of equations via optimization, one may encounter many local minima that do not correspond to solutions, i.e. minima that are far from zero. Solution method: Using a uniform pdf, points are sampled from a rectangular domain. A clustering technique, based on a typical distance and a gradient criterion, is used to decide from which points a local search should be started. Further searching is terminated when all the local minima inside the search domain are thought to have been found. This is accomplished via three stopping rules: the "double-box" stopping rule, the "observables" stopping rule and the "expected minimizers" stopping rule. Reasons for the new version: The link procedure for source code in Fortran 77 is enhanced, two additional stopping rules are implemented and a new criterion for accepting start points, which economizes on function and gradient calls, is introduced. Summary of revisions: Addition of command-line parameters to the utility program make_program. Augmentation of the link process for Fortran 77 subprograms, by linking the final executable with the g2c library. Addition of two probabilistic stopping rules. Introduction of a rejection mechanism to the Checking step of the original method, which reduces the number of gradient evaluations. Additional comments: A technical report describing the revisions, experiments and test runs is packaged with the source code. Running time: Depending on the objective function.
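A rough sketch of the multistart-with-rejection idea (Python/SciPy, not the MinFinder code): uniform start points, a fixed-radius rejection test that skips starts close to minima already found and thereby saves local searches, and BFGS local minimization. The rejection radius is a hand-picked constant here, whereas MinFinder uses an adaptive typical distance and a gradient criterion, and the six-hump camel test function is an illustrative choice.

```python
import numpy as np
from scipy.optimize import minimize

def camel6(v):                                   # six-hump camel back function
    x, y = v
    return (4 - 2.1*x**2 + x**4/3)*x**2 + x*y + (-4 + 4*y**2)*y**2

rng = np.random.default_rng(1)
lo, hi = np.array([-3.0, -2.0]), np.array([3.0, 2.0])
minima, radius, skipped = [], 0.4, 0

for _ in range(200):
    start = lo + rng.random(2) * (hi - lo)
    if any(np.linalg.norm(start - m) < radius for m in minima):
        skipped += 1                             # start point rejected: no local search
        continue
    res = minimize(camel6, start, method="BFGS")
    if not any(np.linalg.norm(res.x - m) < 1e-3 for m in minima):
        minima.append(res.x)

print("local minima found:", len(minima))        # the camel function has 6
for m in sorted(minima, key=camel6):
    print(np.round(m, 4), round(camel6(m), 4))
print("local searches skipped:", skipped)
```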
A numerical differentiation library exploiting parallel architectures
NASA Astrophysics Data System (ADS)
Voglis, C.; Hadjidoukas, P. E.; Lagaris, I. E.; Papageorgiou, D. G.
2009-08-01
We present a software library for numerically estimating first and second order partial derivatives of a function by finite differencing. Various truncation schemes are offered, resulting in corresponding formulas that are accurate to order O(h), O(h²), and O(h⁴), h being the differencing step. The derivatives are calculated via forward, backward and central differences. Care has been taken that only feasible points are used in the case where bound constraints are imposed on the variables. The Hessian may be approximated either from function or from gradient values. There are three versions of the software: a sequential version, an OpenMP version for shared memory architectures and an MPI version for distributed systems (clusters). The parallel versions exploit the multiprocessing capability offered by computer clusters, as well as modern multi-core systems, and due to the independent character of the derivative computation, the speedup scales almost linearly with the number of available processors/cores. Program summaryProgram title: NDL (Numerical Differentiation Library) Catalogue identifier: AEDG_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 73 030 No. of bytes in distributed program, including test data, etc.: 630 876 Distribution format: tar.gz Programming language: ANSI FORTRAN-77, ANSI C, MPI, OPENMP Computer: Distributed systems (clusters), shared memory systems Operating system: Linux, Solaris Has the code been vectorised or parallelized?: Yes RAM: The library uses O(N) internal storage, N being the dimension of the problem Classification: 4.9, 4.14, 6.5 Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, etc. The parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems. Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries. Restrictions: The library uses only double precision arithmetic. Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and given the level of the desired accuracy, the proper formula is automatically employed. Running time: Running time depends on the function's complexity. The test run took 15 ms for the serial distribution, 0.6 s for the OpenMP and 4.2 s for the MPI parallel distribution on 2 processors.
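The two ideas described above, central differences for the gradient with a fallback to one-sided formulas when a bound would be violated, and a Hessian assembled from gradient values, can be sketched in Python/NumPy as follows. This is not the NDL library: the step sizes are fixed illustrative values rather than the automatically balanced steps the library chooses, and the higher-order O(h⁴) formulas are not reproduced.

```python
import numpy as np

def grad_fd(f, x, lower, upper, h=1e-6):
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        if x[i] + h <= upper[i] and x[i] - h >= lower[i]:
            g[i] = (f(x + e) - f(x - e)) / (2 * h)          # central, O(h^2)
        elif x[i] + h <= upper[i]:
            g[i] = (f(x + e) - f(x)) / h                    # forward, O(h)
        else:
            g[i] = (f(x) - f(x - e)) / h                    # backward, O(h)
    return g

def hessian_from_gradients(f, x, lower, upper, h=1e-4):
    n = len(x)
    H = np.zeros((n, n))
    for j in range(n):
        e = np.zeros_like(x); e[j] = h
        if x[j] + h <= upper[j] and x[j] - h >= lower[j]:
            H[:, j] = (grad_fd(f, x + e, lower, upper) - grad_fd(f, x - e, lower, upper)) / (2 * h)
        else:
            step = e if x[j] + h <= upper[j] else -e        # one-sided at a bound
            H[:, j] = (grad_fd(f, x + step, lower, upper) - grad_fd(f, x, lower, upper)) / step[j]
    return 0.5 * (H + H.T)                                  # symmetrise

rosen = lambda v: 100.0 * (v[1] - v[0]**2)**2 + (1.0 - v[0])**2
x = np.array([1.0, 1.0])
lower, upper = np.array([-2.0, -2.0]), np.array([2.0, 1.0])  # upper bound active in x[1]
print("gradient:", grad_fd(rosen, x, lower, upper))
print("Hessian :\n", hessian_from_gradients(rosen, x, lower, upper))
```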
Numerical evaluation of multi-loop integrals for arbitrary kinematics with SecDec 2.0
NASA Astrophysics Data System (ADS)
Borowka, Sophia; Carter, Jonathon; Heinrich, Gudrun
2013-02-01
We present the program SecDec 2.0, which contains various new features. First, it allows the numerical evaluation of multi-loop integrals with no restriction on the kinematics. Dimensionally regulated ultraviolet and infrared singularities are isolated via sector decomposition, while threshold singularities are handled by a deformation of the integration contour in the complex plane. As an application, we present numerical results for various massive two-loop four-point diagrams. SecDec 2.0 also contains new useful features for the calculation of more general parameter integrals, related for example to phase space integrals. Program summaryProgram title: SecDec 2.0 Catalogue identifier: AEIR_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIR_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 156829 No. of bytes in distributed program, including test data, etc.: 2137907 Distribution format: tar.gz Programming language: Wolfram Mathematica, Perl, Fortran/C++. Computer: From a single PC to a cluster, depending on the problem. Operating system: Unix, Linux. RAM: Depending on the complexity of the problem Classification: 4.4, 5, 11.1. Catalogue identifier of previous version: AEIR_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182(2011)1566 Does the new version supersede the previous version?: Yes Nature of problem: Extraction of ultraviolet and infrared singularities from parametric integrals appearing in higher order perturbative calculations in gauge theories. Numerical integration in the presence of integrable singularities (e.g., kinematic thresholds). Solution method: Algebraic extraction of singularities in dimensional regularization using iterated sector decomposition. This leads to a Laurent series in the dimensional regularization parameter ɛ, where the coefficients are finite integrals over the unit hypercube. Those integrals are evaluated numerically by Monte Carlo integration. The integrable singularities are handled by choosing a suitable integration contour in the complex plane, in an automated way. Reasons for new version: In the previous version the calculation of multi-scale integrals was restricted to the Euclidean region. Now multi-loop integrals with arbitrary physical kinematics can be evaluated. Another major improvement is the possibility of full parallelization. Summary of revisions: No restriction on the kinematics for multi-loop integrals. The integrand can be constructed from the topological cuts of the diagram. Possibility of full parallelization. Numerical integration of multi-loop integrals written in C++ rather than Fortran. Possibility to loop over ranges of parameters. Restrictions: Depending on the complexity of the problem, limited by memory and CPU time. The restriction that multi-scale integrals could only be evaluated at Euclidean points is superseded in version 2.0. Running time: Between a few minutes and several days, depending on the complexity of the problem. Test runs provided take only seconds.
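The sector-decomposition step can be illustrated on a textbook toy integral in Python/SciPy (this is not the SecDec package, which performs the decomposition algebraically in Mathematica and integrates the finite coefficients by Monte Carlo in C++). For I(ε) = ∫₀¹∫₀¹ dx dy (x+y)^(ε-2), splitting into the sectors x>y and y>x and substituting y = x t (resp. x = y t) factorises the endpoint singularity into ∫₀¹ x^(ε-1) dx = 1/ε times a finite integral, giving I(ε) = (2/ε) ∫₀¹ (1+t)^(ε-2) dt = 1/ε + (1 - ln 2) + O(ε).

```python
import numpy as np
from scipy.integrate import quad

def finite_part_integrand(t, eps):
    return (1.0 + t) ** (eps - 2.0)          # remaining integrand after decomposition

for eps in (0.2, 0.1, 0.05, 0.01):
    finite, _ = quad(finite_part_integrand, 0.0, 1.0, args=(eps,))
    I = 2.0 / eps * finite                   # pole factor 1/eps made explicit
    print(f"eps={eps:5.2f}  eps*I = {eps * I:.6f}  I - 1/eps = {I - 1.0/eps:.6f}")

print("expected pole coefficient: 1")
print("expected finite part 1 - ln 2 =", 1.0 - np.log(2.0))
```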
NASA Astrophysics Data System (ADS)
Lundberg, J.; Conrad, J.; Rolke, W.; Lopez, A.
2010-03-01
A C++ class was written for the calculation of frequentist confidence intervals using the profile likelihood method. Seven combinations of Poissonian, Gaussian and Binomial uncertainties are implemented. The package provides routines for the calculation of upper and lower limits, sensitivity and related properties. It also supports hypothesis tests which take uncertainties into account. It can be used in compiled C++ code, in Python or interactively via the ROOT analysis framework. Program summaryProgram title: TRolke version 2.0 Catalogue identifier: AEFT_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: MIT license No. of lines in distributed program, including test data, etc.: 3431 No. of bytes in distributed program, including test data, etc.: 21 789 Distribution format: tar.gz Programming language: ISO C++. Computer: Unix, GNU/Linux, Mac. Operating system: Linux 2.6 (Scientific Linux 4 and 5, Ubuntu 8.10), Darwin 9.0 (Mac OS X 10.5.8). RAM: ~20 MB Classification: 14.13. External routines: ROOT (http://root.cern.ch/drupal/) Nature of problem: The problem is to calculate a frequentist confidence interval on the parameter of a Poisson process with statistical or systematic uncertainties in signal efficiency or background. Solution method: Profile likelihood method, analytical. Running time: <10 seconds per extracted limit.
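The profile likelihood construction can be sketched numerically in Python/SciPy for one of the model combinations, a Poisson count with a Gaussian-uncertain background; this is not the TRolke class (which treats the profiling analytically), the observed count and background numbers are made-up inputs, and a simple grid scan replaces root finding.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

n_obs, b0, sigma_b = 12, 4.0, 1.0        # observed counts, background estimate and error

def neg_loglike(s, b):
    mu = s + b
    return -(n_obs * np.log(mu) - mu) + 0.5 * ((b - b0) / sigma_b) ** 2

def profile(s):
    # minimise over the nuisance parameter b for fixed signal s
    res = minimize_scalar(lambda b: neg_loglike(s, b), bounds=(1e-6, 50.0),
                          method="bounded")
    return res.fun

s_grid = np.linspace(0.0, 25.0, 501)
nll = np.array([profile(s) for s in s_grid])
q = 2.0 * (nll - nll.min())              # -2 ln(profile likelihood ratio)

crit = chi2.ppf(0.90, df=1)              # 90% CL, two-sided
inside = s_grid[q <= crit]
print("best-fit signal:", s_grid[np.argmin(nll)])
print("90% CL interval: [%.2f, %.2f]" % (inside.min(), inside.max()))
```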
Milne, a routine for the numerical solution of Milne's problem
NASA Astrophysics Data System (ADS)
Rawat, Ajay; Mohankumar, N.
2010-11-01
The routine Milne provides accurate numerical values for the classical Milne problem of neutron transport in the planar, one-speed, isotropic scattering case. The solution is based on the Case eigenfunction formalism. The relevant X functions are evaluated accurately by the Double Exponential quadrature. The calculated quantities are the extrapolation distance and the scalar and angular fluxes. Also, the H function needed in astrophysical calculations is evaluated as a byproduct. Program summaryProgram title: Milne Catalogue identifier: AEGS_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGS_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 701 No. of bytes in distributed program, including test data, etc.: 6845 Distribution format: tar.gz Programming language: Fortran 77 Computer: PC under Linux or Windows Operating system: Ubuntu 8.04 (Kernel version 2.6.24-16-generic), Windows XP Classification: 4.11, 21.1, 21.2 Nature of problem: The X functions are integral expressions. The convergence of these regular and Cauchy Principal Value integrals is impaired by the singularities of the integrand in the complex plane. The DE quadrature scheme tackles these singularities in a robust manner compared to the standard Gauss quadrature. Running time: The test included in the distribution takes a few seconds to run.
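The Double Exponential (tanh-sinh) rule itself is short enough to sketch in Python/NumPy; the snippet below is not the Fortran routine and uses a generic endpoint-singular test integrand, ∫₀¹ x^(-1/2) dx = 2, rather than the X functions. The step size and truncation are illustrative choices, with the nodes kept strictly inside (a, b).

```python
import numpy as np

def de_quad(f, a, b, h=0.05, N=60):
    k = np.arange(-N, N + 1)
    t = h * k
    x = 0.5 * (b + a) + 0.5 * (b - a) * np.tanh(0.5 * np.pi * np.sinh(t))
    w = 0.5 * (b - a) * 0.5 * np.pi * np.cosh(t) / np.cosh(0.5 * np.pi * np.sinh(t)) ** 2
    return h * np.sum(w * f(x))

f = lambda x: 1.0 / np.sqrt(x)
approx = de_quad(f, 0.0, 1.0)
print("DE quadrature:", approx, "  error:", abs(approx - 2.0))
```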
Lambert W function for applications in physics
NASA Astrophysics Data System (ADS)
Veberič, Darko
2012-12-01
The Lambert W(x) function and its possible applications in physics are presented. The actual numerical implementation in C++ consists of Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued-logarithm recursion. Program summaryProgram title: LambertW Catalogue identifier: AENC_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 1335 No. of bytes in distributed program, including test data, etc.: 25 283 Distribution format: tar.gz Programming language: C++ (with suitable wrappers it can be called from C, Fortran, etc.); the supplied command-line utility is suitable for use from scripting languages like sh, csh, awk, perl, etc. Computer: All systems with a C++ compiler. Operating system: All Unix flavors, Windows. It might work with others. RAM: Small memory footprint, less than 1 MB Classification: 1.1, 4.7, 11.3, 11.9. Nature of problem: Find a fast and accurate numerical implementation of the Lambert W function. Solution method: Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued logarithm recursion. Additional comments: The distribution file contains the command-line utility lambert-w, Doxygen comments included in the source files, and a Makefile. Running time: The tests provided take only a few seconds to run.
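Halley's iteration for w e^w = x is simple enough to sketch in Python; the snippet below covers only the principal branch W0 for x >= 0 with crude logarithmic starting values, and does not reproduce the branch-point series, rational fits or Fritsch's iteration of the C++ implementation.

```python
import math

def lambert_w0(x, tol=1e-14, maxit=50):
    if x == 0.0:
        return 0.0
    # crude but convergent initial guess for x >= 0
    w = math.log1p(x) if x < math.e else math.log(x) - math.log(math.log(x))
    for _ in range(maxit):
        e = math.exp(w)
        f = w * e - x                                   # we want f(w) = 0
        # Halley update for f(w) = w*exp(w) - x
        dw = f / (e * (w + 1.0) - (w + 2.0) * f / (2.0 * w + 2.0))
        w -= dw
        if abs(dw) < tol * (1.0 + abs(w)):
            break
    return w

for x in (1e-3, 0.5, 1.0, math.e, 10.0, 1e6):
    w = lambert_w0(x)
    print(f"W0({x:g}) = {w:.15g}   residual w*e^w - x = {w*math.exp(w) - x:.2e}")
```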
The Invar tensor package: Differential invariants of Riemann
NASA Astrophysics Data System (ADS)
Martín-García, J. M.; Yllanes, D.; Portugal, R.
2008-10-01
The long standing problem of the relations among the scalar invariants of the Riemann tensor is computationally solved for all 6·10^23 objects with up to 12 derivatives of the metric. This covers cases ranging from products of up to 6 undifferentiated Riemann tensors to cases with up to 10 covariant derivatives of a single Riemann tensor. We extend our computer algebra system Invar to produce within seconds a canonical form for any of those objects in terms of a basis. The process is as follows: (1) an invariant is converted in real time into a canonical form with respect to the permutation symmetries of the Riemann tensor; (2) Invar reads a database of more than 6·10^5 relations and applies those coming from the cyclic symmetry of the Riemann tensor; (3) it then applies the relations coming from the Bianchi identity, (4) the relations coming from commutations of covariant derivatives, (5) the dimensionally-dependent identities for dimension 4, and finally (6) simplifies invariants that can be expressed as products of dual invariants. Invar runs on top of the tensor computer algebra systems xTensor (for Mathematica) and Canon (for Maple). Program summaryProgram title: Invar Tensor Package v2.0 Catalogue identifier: ADZK_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZK_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3 243 249 No. of bytes in distributed program, including test data, etc.: 939 Distribution format: tar.gz Programming language: Mathematica and Maple Computer: Any computer running Mathematica versions 5.0 to 6.0 or Maple versions 9 and 11 Operating system: Linux, Unix, Windows XP, MacOS RAM: 100 Mb Word size: 64 or 32 bits Supplementary material: The new database of relations is much larger than that for the previous version and therefore has not been included in the distribution. The Mathematica and Maple database files can be obtained separately. Classification: 1.5, 5 Does the new version supersede the previous version?: Yes. The previous version (1.0) only handled algebraic invariants. The current version (2.0) has been extended to cover differential invariants as well. Nature of problem: Manipulation and simplification of scalar polynomial expressions formed from the Riemann tensor and its covariant derivatives. Solution method: Algorithms of computational group theory to simplify expressions with tensors that obey permutation symmetries. Tables of syzygies of the scalar invariants of the Riemann tensor. Reasons for new version: With this new version, the user can manipulate differential invariants of the Riemann tensor. Differential invariants are required in many physical problems in classical and quantum gravity. Summary of revisions: The database of syzygies has been expanded by a factor of 30. New commands were added in order to deal with the enlarged database and to manipulate the covariant derivative. Restrictions: The present version only handles scalars, and not expressions with free indices. Additional comments: The distribution file for this program is over 53 Mbytes and therefore is not delivered directly when download or Email is requested. Instead an HTML file giving details of how the program can be obtained is sent. Running time: One second to fully reduce any monomial of the Riemann tensor up to degree 7 or order 10 in terms of independent invariants. 
The Mathematica notebook included in the distribution takes approximately 5 minutes to run.
NASA Astrophysics Data System (ADS)
Chang, Chao-Hsi; Wang, Jian-Xiong; Wu, Xing-Gang
2010-06-01
An upgraded (second) version of the package GENXICC (A Generator for Hadronic Production of the Double Heavy Baryons Ξcc, Ξbc and Ξbb by C.H. Chang, J.X. Wang and X.G. Wu [its first version in: Comput. Phys. Comm. 177 (2007) 467]) is presented. With this version implemented in PYTHIA (using a GNU C compiler), users may conveniently simulate full events of these processes in various experimental environments. In comparison with the previous version, in order to implement it in PYTHIA properly, a subprogram for the fragmentation of the produced double heavy diquark to the relevant baryon is supplied and the interface of the generator to PYTHIA is changed accordingly. In the subprogram, with explanation, certain necessary assumptions (approximations) are made in order to conserve the momenta and the QCD 'color' flow for the fragmentation. Program summaryProgram title: GENXICC2.0 Catalogue identifier: ADZJ_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZJ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 102 482 No. of bytes in distributed program, including test data, etc.: 1 469 519 Distribution format: tar.gz Programming language: Fortran 77/90 Computer: Any Linux-based PC with FORTRAN 77 or FORTRAN 90 and a GNU C compiler Operating system: Linux RAM: About 2.0 MByte Classification: 11.2 Catalogue identifier of previous version: ADZJ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 177 (2007) 467 Does the new version supersede the previous version?: No Nature of problem: Hadronic production of double heavy baryons Ξcc, Ξbc and Ξbb Solution method: The code is based on the NRQCD framework. With proper options, it can generate weighted and un-weighted events of hadronic double heavy baryon production. When the hadronizations of the produced jets and double heavy diquark are taken into account in the production, the upgraded version with a proper interface to PYTHIA can generate full events. Reasons for new version: Responding to the feedback from users, we improve the generator mainly by carefully completing the 'final non-perturbative process', i.e. the formation of the double heavy baryon from the relevant intermediate diquark. In the present version, the information about momentum flow and color flow for the fragmentation, which is necessary for PYTHIA to generate full events, is retained, although reasonable approximations are made. In comparison with the original version, the upgraded one can be implemented in PYTHIA properly to perform full event simulation of double heavy baryon production. Summary of revisions: We explain the treatment of the momentum distribution of the process more clearly than in the original version, and show precisely how the final baryon is generated through the typical intermediate diquark. We present the color flow of the involved processes precisely; the corresponding changes to the program are made and are explained in the paper. Restrictions: The color flow, particularly in the piece of code handling the fragmentation of the produced colored double heavy diquark into the relevant double heavy baryon, is treated carefully so that it can be implemented in PYTHIA properly. 
Running time: It depends on which option is chosen to configure PYTHIA when generating full events and also on which mechanism is chosen to generate the events. Typically, for the most complicated case with the gluon-gluon fusion mechanism to generate the mixed events via the intermediate diquark in (cc)[³S₁] and (cc)[¹S₀] states, under the option IDWTUP=1, generating 1000 events takes about 20 hours on a 1.8 GHz Intel P4-processor machine, whereas under the option IDWTUP=3, even generating 10^6 events takes about 40 minutes on the same machine.
Efficient self-consistency for magnetic tight binding
NASA Astrophysics Data System (ADS)
Soin, Preetma; Horsfield, A. P.; Nguyen-Manh, D.
2011-06-01
Tight binding can be extended to magnetic systems by including an exchange interaction on an atomic site that favours net spin polarisation. We have used a published model, extended to include long-ranged Coulomb interactions, to study defects in iron. We have found that achieving self-consistency using conventional techniques was either unstable or very slow. By formulating the problem of achieving charge and spin self-consistency as a search for stationary points of a Harris-Foulkes functional, extended to include spin, we have derived a much more efficient scheme based on a Newton-Raphson procedure. We demonstrate the capabilities of our method by looking at vacancies and self-interstitials in iron. Self-consistency can indeed be achieved in a more efficient and stable manner, but care needs to be taken to manage this. The algorithm is implemented in the code PLATO. Program summaryProgram title:PLATO Catalogue identifier: AEFC_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFC_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 228 747 No. of bytes in distributed program, including test data, etc.: 1 880 369 Distribution format: tar.gz Programming language: C and PERL Computer: Apple Macintosh, PC, Unix machines Operating system: Unix, Linux, Mac OS X, Windows XP Has the code been vectorised or parallelised?: Yes. Up to 256 processors tested RAM: Up to 2 Gbytes per processor Classification: 7.3 External routines: LAPACK, BLAS and optionally ScaLAPACK, BLACS, PBLAS, FFTW Catalogue identifier of previous version: AEFC_v1_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2616 Does the new version supersede the previous version?: Yes Nature of problem: Achieving charge and spin self-consistency in magnetic tight binding can be very difficult. Our existing schemes failed altogether, or were very slow. Solution method: A new scheme for achieving self-consistency in orthogonal tight binding has been introduced that explicitly evaluates the first and second derivatives of the energy with respect to input charge and spin, and then uses these to search for stationary values of the energy. Reasons for new version: Bug fixes and new functionality. Summary of revisions: New charge and spin mixing scheme for orthogonal tight binding. Numerous small bug fixes. Restrictions: The new mixing scheme scales poorly with system size. In particular the memory usage scales as number of atoms to the power 4. It is restricted to systems with about 200 atoms or less. Running time: Test cases will run in a few minutes, large calculations may run for several days.
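The Newton-Raphson search for stationary points of the spin-extended Harris-Foulkes functional described above can be illustrated with a short, generic sketch. The Python fragment below is only a schematic illustration of that scheme, assuming the first and second derivatives of the energy with respect to the input charges and spins are explicitly available; the function names and the toy quadratic model are invented for the example and are not taken from PLATO.

    import numpy as np

    def newton_self_consistency(grad, hess, x0, tol=1e-8, max_iter=50):
        """Generic Newton-Raphson search for a stationary point of an
        energy functional E(x), where x collects the input charges and
        spins. `grad` returns dE/dx and `hess` returns d2E/dx2.
        Illustrative only, not the PLATO implementation."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:        # stationary point reached
                return x
            H = hess(x)
            x = x - np.linalg.solve(H, g)      # Newton step
        raise RuntimeError("self-consistency not reached")

    # toy usage: quadratic model E(x) = 1/2 x.A.x - b.x
    A = np.array([[2.0, 0.3], [0.3, 1.0]])
    b = np.array([1.0, -0.5])
    x_star = newton_self_consistency(lambda x: A @ x - b, lambda x: A, np.zeros(2))
    print(x_star)

For a quadratic model the scheme converges in one step; for the real functional the cost is dominated by building the second-derivative matrix, which is why the memory usage quoted above grows so quickly with system size.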
NASA Astrophysics Data System (ADS)
Cipolla, Sam J.
2009-09-01
New version program summaryProgram title: ISICS2008 Catalogue identifier: ADDS_v4_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADDS_v4_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5420 No. of bytes in distributed program, including test data, etc.: 107 669 Distribution format: tar.gz Programming language: C Computer: 80 486 or higher level PCs Operating system: Windows XP and all earlier operating systems Classification: 16.7 Catalogue identifier of previous version: ADDS_v3_0 Journal reference of previous version: Comput. Phys. Comm. 179 (2008) 616 Does the new version supersede the previous version?: Yes Nature of problem: Ionization and X-ray production cross section calculations for ion-atom collisions. Solution method: Numerical integration of form factor using a logarithmic transform and Gaussian quadrature, plus exact integration limits. Reasons for new version: Addition of relativistic treatment of both projectile and K-shell electrons. Summary of revisions: A new addition to ISICS is the option (R) to calculate ECPSSR cross sections that account for the relativistic treatment of both projectile and K-shell electron, as proposed recently by Lapicki [1], accordingly as σKRECPSSR=Cṡ(1+0.07(()ṡσ(√{(mKRυ1R)}/Z,ςθ), where υ1R is the relativistic projectile velocity. The option can also be invoked in calculating ECPSShsR, where hsR stands for the Hartree-Slater description of the K-shell electron, which was already incorporated into ISICS2006 [2,3], and is now expressed in this option as, σKRECPSShsR=CṡhsR((2υ1R)/(Zςθ),Z/137)ṡ(1+0.07(()ṡσ(υ1R/Z,ςθ) using the function hsR that is already incorporated into ISICS2006. It should be noted that these expressions are corrected versions [4] from the ones published in Ref. [1]. In this new version, ISICS2008, the option line in the main menu that read "Use Relativistic Proj. velocity" has been replaced by "R option for K-shell … Uses Rel. Proj. vel.". As before, various combinations of options can be utilized and each is denoted in the output. Restrictions: The consumed CPU time increases with the atomic shell (K,L,M), but execution is still very fast. Additional comments: A revised User Manual is included in the distribution file. Running time: This depends on which shell and the number of different energies to be used in the calculation. The running time is not significantly changed from the previous version. As before, to calculate K-shell cross sections for protons striking carbon for 19 different proton energies it took less than 10 s; to calculate M-shell cross sections for protons on gold for 21 proton energies it took 4.2 min. References:G. Lapicki, J. Phys. B: At. Mol. Op. Phys. 41 (2008) 115201. S. Cipolla, Comput. Phys. Comm. 176 (2007) 157. S. Cipolla, Nucl. Instrum. Methods Phys. Res. B 261 (2007) 142. G. Lapicki, private communication.
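The solution method quoted above (numerical integration of the form factor using a logarithmic transform and Gaussian quadrature) can be sketched generically. The Python fragment below is a minimal illustration of that quadrature idea, assuming a smooth positive integrand f(q) on a finite interval; it does not reproduce the actual ISICS integrands or integration limits.

    import numpy as np

    def gauss_log_integral(f, a, b, n=48):
        """Integrate f(q) from a to b (0 < a < b) using the substitution
        q = exp(t), which maps the integral onto [ln a, ln b] where
        Gauss-Legendre quadrature copes well with strongly peaked
        integrands. Illustrative only; not the ISICS form factor."""
        t, w = np.polynomial.legendre.leggauss(n)      # nodes/weights on [-1, 1]
        lo, hi = np.log(a), np.log(b)
        t = 0.5 * (hi - lo) * t + 0.5 * (hi + lo)      # map to [ln a, ln b]
        q = np.exp(t)
        return 0.5 * (hi - lo) * np.sum(w * f(q) * q)  # extra q from dq = q dt

    # quick check against a known result: integral of 1/q from a to b is ln(b/a)
    print(gauss_log_integral(lambda q: 1.0 / q, 1e-3, 10.0), np.log(10.0 / 1e-3))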
FLY MPI-2: a parallel tree code for LSS
NASA Astrophysics Data System (ADS)
Becciani, U.; Comparato, M.; Antonuccio-Delogu, V.
2006-04-01
New version program summaryProgram title: FLY 3.1 Catalogue identifier: ADSC_v2_0 Licensing provisions: yes Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADSC_v2_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland No. of lines in distributed program, including test data, etc.: 158 172 No. of bytes in distributed program, including test data, etc.: 4 719 953 Distribution format: tar.gz Programming language: Fortran 90, C Computer: Beowulf cluster, PC, MPP systems Operating system: Linux, Aix RAM: 100M words Catalogue identifier of previous version: ADSC_v1_0 Journal reference of previous version: Comput. Phys. Comm. 155 (2003) 159 Does the new version supersede the previous version?: yes Nature of problem: FLY is a parallel collisionless N-body code for the calculation of the gravitational force. Solution method: FLY is based on the hierarchical oct-tree domain decomposition introduced by Barnes and Hut (1986). Reasons for the new version: The new version of FLY is implemented using the MPI-2 standard: the distributed version 3.1 was developed with the MPICH2 library on a PC Linux cluster. The performance of FLY now places it among the most powerful parallel codes for tree N-body simulations. Another important new feature is the availability of an interface with hydrodynamical Paramesh-based codes. Simulations must cover a box large enough to accurately represent the power spectrum of fluctuations on very large scales, so that we may hope to compare them meaningfully with real data. The number of particles then sets the mass resolution of the simulation, which we would like to make as fine as possible. Building an interface between two codes that have different and complementary cosmological tasks allows us to execute complex cosmological simulations with FLY, specialized for DM evolution, and a code specialized for the hydrodynamical components that uses a Paramesh block structure. Summary of revisions: The parallel communication scheme was totally changed. The new version adopts the MPICH2 library, so FLY can now be executed on all Unix systems having an MPI-2 standard library. The main data structure is declared in a module procedure of FLY (the fly_h.F90 routine). FLY creates an MPI window object for one-sided communication for each of the shared arrays, with a call like the following: CALL MPI_WIN_CREATE(POS, SIZE, REAL8, MPI_INFO_NULL, MPI_COMM_WORLD, WIN_POS, IERR). The following main window objects are created: win_pos, win_vel, win_acc (particle positions, velocities and accelerations); win_pos_cell, win_mass_cell, win_quad, win_subp, win_grouping (cell positions, masses, quadrupole moments, tree structure and grouping cells). Other windows are created for dynamic load balance and global counters. Restrictions: The program uses the leapfrog integration scheme, but this could be changed by the user. Unusual features: FLY uses the MPI-2 standard; the MPICH2 library on Linux systems was adopted. To run this version of FLY the working directory must be shared among all the processors that execute FLY. Additional comments: Full documentation for the program is included in the distribution in the form of a README file, a User Guide and a Reference manuscript. Running time: An IBM Linux Cluster 1350 at Cineca, with 512 nodes, 2 processors per node and 2 GB RAM per processor, was used for performance tests. Processor type: Intel Xeon Pentium IV 3.0 GHz with 512 KB cache (128 nodes have Nocona processors).
Internal network: Myricom LAN Card "C" Version and "D" Version. Operating system: Linux SuSE SLES 8. The code was compiled with the mpif90 compiler version 8.1 using basic optimization options, in order to obtain performance figures that can be meaningfully compared with those of other generic clusters.
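For readers unfamiliar with MPI-2 one-sided communication, the window scheme quoted above (MPI_WIN_CREATE on the shared particle arrays) can be mimicked in a few lines with mpi4py. This is a minimal sketch under assumed array names and sizes, not an excerpt from FLY; run it with, for example, mpiexec -n 2.

    # Each rank exposes its slice of the particle positions in an MPI window;
    # any rank can then read remote data with a one-sided Get, the same pattern
    # as win_pos in FLY. Names and sizes here are illustrative only.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    n_local = 1000
    pos = np.full(n_local, float(rank))                  # this rank's particle data
    win_pos = MPI.Win.Create(pos, disp_unit=pos.itemsize, comm=comm)

    buf = np.empty(10)                                   # read the first 10 entries of rank 0
    win_pos.Lock(0, MPI.LOCK_SHARED)
    win_pos.Get(buf, 0, target=(0, 10, MPI.DOUBLE))
    win_pos.Unlock(0)

    win_pos.Free()
    if rank != 0:
        print(rank, "read from rank 0:", buf[:3])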
Visual tool for estimating the fractal dimension of images
NASA Astrophysics Data System (ADS)
Grossu, I. V.; Besliu, C.; Rusu, M. V.; Jipa, Al.; Bordeianu, C. C.; Felea, D.
2009-10-01
This work presents a new Visual Basic 6.0 application for estimating the fractal dimension of images, based on an optimized version of the box-counting algorithm. In an attempt to separate the real information from "noise", we also considered the family of all band-pass filters with the same band-width (specified as a parameter). The fractal dimension can thus be represented as a function of the pixel color code. The program was used for the study of cracks in paintings, as an additional tool which can help the critic decide whether an artistic work is original or not. Program summaryProgram title: Fractal Analysis v01 Catalogue identifier: AEEG_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 29 690 No. of bytes in distributed program, including test data, etc.: 4 967 319 Distribution format: tar.gz Programming language: MS Visual Basic 6.0 Computer: PC Operating system: MS Windows 98 or later RAM: 30M Classification: 14 Nature of problem: Estimating the fractal dimension of images. Solution method: Optimized implementation of the box-counting algorithm. Use of a band-pass filter for separating the real information from "noise". User friendly graphical interface. Restrictions: Although various file types can be used, the application was mainly conceived for the 8-bit grayscale Windows bitmap file format. Running time: To a first approximation, the algorithm is linear.
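The box-counting algorithm at the core of the application is standard and easy to sketch. The Python fragment below estimates the dimension from the slope of log N(s) versus log(1/s); it is a plain, unoptimized illustration, not the Visual Basic implementation, and it assumes the band-pass filtering has already produced a binary mask of "real" pixels.

    import numpy as np

    def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32, 64)):
        """Estimate the fractal (box-counting) dimension of a binary image.
        `mask` is a 2-D boolean array marking the pixels of interest."""
        counts = []
        for s in sizes:
            n = 0
            for i in range(0, mask.shape[0], s):
                for j in range(0, mask.shape[1], s):
                    if mask[i:i + s, j:j + s].any():   # box contains information
                        n += 1
            counts.append(n)
        # slope of log N(s) versus log(1/s) gives the dimension
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    # example: a filled square should give a dimension close to 2
    img = np.zeros((256, 256), dtype=bool)
    img[64:192, 64:192] = True
    print(box_counting_dimension(img))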
MADANALYSIS 5, a user-friendly framework for collider phenomenology
NASA Astrophysics Data System (ADS)
Conte, Eric; Fuks, Benjamin; Serret, Guillaume
2013-01-01
We present MADANALYSIS 5, a new framework for phenomenological investigations at particle colliders. Based on a C++ kernel, this program allows us to efficiently perform, in a straightforward and user-friendly fashion, sophisticated physics analyses of event files such as those generated by a large class of Monte Carlo event generators. MADANALYSIS 5 comes with two modes of running. The first one, easier to handle, uses the strengths of a powerful PYTHON interface in order to implement physics analyses by means of a set of intuitive commands. The second one requires one to implement the analyses in the C++ programming language, directly within the core of the analysis framework. This opens unlimited possibilities concerning the level of complexity which can be reached, limited only by the programming skills and the originality of the user. Program summaryProgram title: MadAnalysis 5 Catalogue identifier: AENO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENO_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Permission to use, copy, modify and distribute this program is granted under the terms of the GNU General Public License. No. of lines in distributed program, including test data, etc.: 31087 No. of bytes in distributed program, including test data, etc.: 399105 Distribution format: tar.gz Programming language: PYTHON, C++. Computer: All platforms on which Python version 2.7, Root version 5.27 and the g++ compiler are available. Compatibility with newer versions of these programs is also ensured. However, the Python version must be below version 3.0. Operating system: Unix, Linux and Mac OS operating systems on which the above-mentioned versions of Python and Root, as well as g++, are available. Classification: 11.1. External routines: ROOT (http://root.cern.ch/drupal/) Nature of problem: Implementing sophisticated phenomenological analyses in high-energy physics in a flexible, efficient and straightforward fashion, starting from event files such as those produced by Monte Carlo event generators. The event files may or may not have been matched to parton showering, and may or may not have been processed by a (fast) detector simulation. Depending on the sophistication level of the event files (parton-level, hadron-level, reconstructed-level), several input formats are possible. Solution method: We implement an interface allowing the production of predefined as well as user-defined histograms for a large class of kinematical distributions after applying a set of event selection cuts specified by the user. This therefore allows us to devise robust and novel search strategies for collider experiments, such as those currently running at the Large Hadron Collider at CERN, in a very efficient way. Restrictions: Unsupported event file format. Unusual features: The code is fully based on object representations for events, particles, reconstructed objects and cuts, which facilitates the implementation of an analysis. Running time: It depends on the purposes of the user and on the number of events to process. It varies from a few seconds to the order of a minute for several million events.
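The analysis pattern that the framework automates, applying event selection cuts and filling histograms of kinematical distributions, is illustrated below in generic Python. The event structure, cut value and histogram binning are invented for the example; none of this is the actual MADANALYSIS 5 command set or C++ API.

    import math
    import numpy as np

    def transverse_momentum(px, py):
        return math.hypot(px, py)

    def analyze(events, pt_min=20.0):
        """Apply a simple pT cut on the leading particle of each event and
        histogram the surviving values. Purely illustrative."""
        selected = []
        for event in events:                         # event = list of (px, py) pairs
            pts = [transverse_momentum(px, py) for px, py in event]
            if max(pts) > pt_min:                    # event selection cut
                selected.append(max(pts))
        hist, edges = np.histogram(selected, bins=40, range=(0.0, 200.0))
        return hist, edges

    # toy usage with random "events"
    rng = np.random.default_rng(0)
    toy_events = [[(rng.normal(0, 30), rng.normal(0, 30)) for _ in range(4)]
                  for _ in range(1000)]
    print(analyze(toy_events)[0].sum(), "events pass the cut")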
Simulation of ultra-high energy photon propagation with PRESHOWER 2.0
NASA Astrophysics Data System (ADS)
Homola, P.; Engel, R.; Pysz, A.; Wilczyński, H.
2013-05-01
In this paper we describe a new release of the PRESHOWER program, a tool for Monte Carlo simulation of the propagation of ultra-high energy photons in the magnetic field of the Earth. The PRESHOWER program is designed to calculate magnetic pair production and bremsstrahlung and should be used together with other programs to simulate extensive air showers induced by photons. The main new features of the PRESHOWER code include a much faster algorithm in the procedures simulating gamma conversion and bremsstrahlung, an update of the geomagnetic field model, and a minor correction. The new simulation procedure increases the flexibility of the code so that it can also be applied to other magnetic field configurations, such as those encountered, for example, in the vicinity of the Sun or neutron stars. Program summaryProgram title: PRESHOWER 2.0 Catalog identifier: ADWG_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADWG_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3968 No. of bytes in distributed program, including test data, etc.: 37198 Distribution format: tar.gz Programming language: C, FORTRAN 77. Computer: Intel-Pentium based PC. Operating system: Linux or Unix. RAM: < 100 kB Classification: 1.1. Does the new version supersede the previous version?: Yes Catalog identifier of previous version: ADWG_v1_0 Journal reference of previous version: Comput. Phys. Comm. 173 (2005) 71 Nature of problem: Simulation of a cascade of particles initiated by a UHE photon in a magnetic field. Solution method: The primary photon is tracked until its conversion into an e+ e- pair. If conversion occurs, each individual particle in the resultant preshower is checked for either bremsstrahlung radiation (electrons) or secondary gamma conversion (photons). Reasons for new version: Slow and outdated algorithm in the old version (a significant speed up is possible); extension of the program to allow simulations also for extraterrestrial magnetic field configurations (e.g. neutron stars) and very long path lengths. Summary of revisions: A veto algorithm was introduced in the gamma conversion and bremsstrahlung tracking procedures. The length of the tracking step is now variable along the track and depends on the probability of the process expected to occur. The new algorithm significantly reduces the number of tracking steps and speeds up the execution of the program. The geomagnetic field model has been updated to IGRF-11, allowing for interpolations up to the year 2015. Numerical Recipes procedures to calculate modified Bessel functions have been replaced with an open source CERN routine DBSKA. One minor bug has been fixed. Restrictions: Gamma conversion into particles other than an electron pair is not considered. Spatial structure of the cascade is neglected. Additional comments: The following routines are supplied in the package: IGRF [1,2], DBSKA [3], ran2 [4]. Running time: 100 preshower events with primary energy 10^20 eV require about 200 sec of CPU time on a 2.66 GHz machine; at an energy of 10^21 eV, about 600 sec.
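The veto algorithm mentioned in the summary of revisions is the standard thinning technique for sampling an interaction point when the local interaction rate varies along the track. The sketch below illustrates only the technique; the rate function, bound and track length are placeholders, not the actual magnetic pair-production probabilities used in PRESHOWER 2.0.

    import math
    import random

    def veto_conversion_point(rate, rate_max, x_max):
        """Sample the point of first interaction along a track when the local
        rate `rate(x)` varies with position: propose steps using the constant
        bound `rate_max`, then accept each proposal with probability
        rate(x)/rate_max. Returns None if the particle leaves [0, x_max]
        without interacting. Illustrative placeholder for the gamma
        conversion / bremsstrahlung tracking step."""
        x = 0.0
        while True:
            x += -math.log(1.0 - random.random()) / rate_max   # tentative step
            if x > x_max:
                return None
            if random.random() < rate(x) / rate_max:           # veto test
                return x

    # toy usage: rate growing along the track, bounded by 0.02 per unit length
    sample = veto_conversion_point(lambda x: 0.02 * x / 1.0e3, 0.02, 1.0e3)
    print(sample)

Because the tentative step length adapts to the bound rather than to a fixed grid, the number of tracking steps drops sharply in regions where the interaction probability is small, which is the speed-up described above.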
Continuous-time quantum Monte Carlo impurity solvers
NASA Astrophysics Data System (ADS)
Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias
2011-04-01
Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of dynamical mean-field self-consistency equations for (single-orbital, single-site) dynamical mean-field problems with arbitrary densities of states. Program summaryProgram title: dmft Catalogue identifier: AEIL_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: ALPS LIBRARY LICENSE version 1.1 No. of lines in distributed program, including test data, etc.: 899 806 No. of bytes in distributed program, including test data, etc.: 32 153 916 Distribution format: tar.gz Programming language: C++ Operating system: The ALPS libraries have been tested on the following platforms and compilers: Linux with GNU Compiler Collection (g++ version 3.1 and higher) and Intel C++ Compiler (icc version 7.0 and higher); MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0); IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers; Compaq Tru64 UNIX with Compaq C++ Compiler (cxx); SGI IRIX with MIPSpro C++ Compiler (CC); HP-UX with HP C++ Compiler (aCC); Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher) RAM: 10 MB-1 GB Classification: 7.3 External routines: ALPS [1], BLAS/LAPACK, HDF5 Nature of problem: (See [2].) Quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons. They are basic to nanoscience as representations of quantum dots and molecular conductors and play an increasingly important role in the theory of "correlated electron" materials as auxiliary problems whose solution gives the "dynamical mean field" approximation to the self-energy and local correlation functions. Solution method: Quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2]. Additional comments: Use of dmft requires citation of this paper. Use of any ALPS program requires citation of the ALPS [1] paper. Running time: 60 s-8 h per iteration.
Generating and using truly random quantum states in Mathematica
NASA Astrophysics Data System (ADS)
Miszczak, Jarosław Adam
2012-01-01
The problem of generating random quantum states is of great interest from the quantum information theory point of view. In this paper we present a package for the Mathematica computing system harnessing a specific piece of hardware, namely the Quantis quantum random number generator (QRNG), for investigating statistical properties of quantum states. The described package implements a number of functions for generating random states, which use the Quantis QRNG as a source of randomness. It also provides procedures which can be used in simulations not related directly to quantum information processing. Program summaryProgram title: TRQS Catalogue identifier: AEKA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKA_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 7924 No. of bytes in distributed program, including test data, etc.: 88 651 Distribution format: tar.gz Programming language: Mathematica, C Computer: Requires a Quantis quantum random number generator (QRNG, http://www.idquantique.com/true-random-number-generator/products-overview.html) and a computer supporting a recent version of Mathematica Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit) RAM: Case dependent Classification: 4.15 Nature of problem: Generation of random density matrices. Solution method: Use of a physical quantum random number generator. Running time: Generating 100 random numbers takes about 1 second; generating 1000 random density matrices takes more than a minute.
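One common way to turn a stream of uniform random numbers, such as those delivered by the Quantis QRNG, into a random density matrix is the Ginibre construction rho = G G† / Tr(G G†). The sketch below illustrates this construction with an ordinary pseudo-random source standing in for the hardware generator; it is not the TRQS code and the function names are invented.

    import numpy as np

    def random_density_matrix(dim, uniforms):
        """Build a random density matrix from a flat list of uniform numbers
        in [0, 1), e.g. numbers delivered by a hardware QRNG, via the Ginibre
        construction rho = G G^dagger / Tr(G G^dagger)."""
        u = np.asarray(uniforms, dtype=float).reshape(4, dim, dim)
        # Box-Muller: turn uniforms into standard normal real/imaginary parts
        re = np.sqrt(-2 * np.log(1.0 - u[0])) * np.cos(2 * np.pi * u[1])
        im = np.sqrt(-2 * np.log(1.0 - u[2])) * np.sin(2 * np.pi * u[3])
        g = re + 1j * im
        rho = g @ g.conj().T
        return rho / np.trace(rho)

    # stand-in randomness source (replace with QRNG output in real use)
    rng = np.random.default_rng(1)
    rho = random_density_matrix(4, rng.random(4 * 4 * 4))
    print(np.trace(rho).real, np.all(np.linalg.eigvalsh(rho) >= -1e-12))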
A brief introduction to PYTHIA 8.1
NASA Astrophysics Data System (ADS)
Sjöstrand, Torbjörn; Mrenna, Stephen; Skands, Peter
2008-06-01
The PYTHIA program is a standard tool for the generation of high-energy collisions, comprising a coherent set of physics models for the evolution from a few-body hard process to a complex multihadronic final state. It contains a library of hard processes and models for initial- and final-state parton showers, multiple parton-parton interactions, beam remnants, string fragmentation and particle decays. It also has a set of utilities and interfaces to external programs. While previous versions were written in Fortran, PYTHIA 8 represents a complete rewrite in C++. The current release is the first main one after this transition, and does not yet in every respect replace the old code. It does contain some new physics aspects, on the other hand, that should make it an attractive option especially for LHC physics studies. Program summaryProgram title:PYTHIA 8.1 Catalogue identifier: ACTU_v3_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ACTU_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL version 2 No. of lines in distributed program, including test data, etc.: 176 981 No. of bytes in distributed program, including test data, etc.: 2 411 876 Distribution format: tar.gz Programming language: C++ Computer: Commodity PCs Operating system: Linux; should also work on other systems RAM: 8 megabytes Classification: 11.2 Does the new version supersede the previous version?: yes, partly Nature of problem: High-energy collisions between elementary particles normally give rise to complex final states, with large multiplicities of hadrons, leptons, photons and neutrinos. The relation between these final states and the underlying physics description is not a simple one, for two main reasons. Firstly, we do not even in principle have a complete understanding of the physics. Secondly, any analytical approach is made intractable by the large multiplicities. Solution method: Complete events are generated by Monte Carlo methods. The complexity is mastered by a subdivision of the full problem into a set of simpler separate tasks. All main aspects of the events are simulated, such as hard-process selection, initial- and final-state radiation, beam remnants, fragmentation, decays, and so on. Therefore events should be directly comparable with experimentally observable ones. The programs can be used to extract physics from comparisons with existing data, or to study physics at future experiments. Reasons for new version: Improved and expanded physics models, transition from Fortran to C++. Summary of revisions: New user interface, transverse-momentum-ordered showers, interleaving with multiple interactions, and much more. Restrictions: Depends on the problem studied. Running time: 10-1000 events per second, depending on process studied. References: [1] T. Sjöstrand, P. Edén, C. Friberg, L. Lönnblad, G. Miu, S. Mrenna, E. Norrbin, Comput. Phys. Comm. 135 (2001) 238.
A parallel solver for huge dense linear systems
NASA Astrophysics Data System (ADS)
Badia, J. M.; Movilla, J. L.; Climente, J. I.; Castillo, M.; Marqués, M.; Mayo, R.; Quintana-Ortí, E. S.; Planelles, J.
2011-11-01
HDSS (Huge Dense Linear System Solver) is a Fortran Application Programming Interface (API) that facilitates the parallel solution of very large dense systems for scientists and engineers. The API makes use of parallelism to yield an efficient solution of the systems on a wide range of parallel platforms, from clusters of processors to massively parallel multiprocessors. It exploits out-of-core strategies that leverage secondary memory in order to solve huge linear systems of order O(100,000). The API is based on the parallel linear algebra library PLAPACK, and on its Out-Of-Core (OOC) extension POOCLAPACK. Both PLAPACK and POOCLAPACK use the Message Passing Interface (MPI) as the communication layer and BLAS to perform the local matrix operations. The API provides a friendly interface to the users, hiding almost all the technical aspects related to the parallel execution of the code and the use of the secondary memory to solve the systems. In particular, the API can automatically select the best way to store and solve the systems, depending on the dimension of the system, the number of processes and the main memory of the platform. Experimental results on several parallel platforms report high performance, reaching more than 1 TFLOP with 64 cores to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors. New version program summaryProgram title: Huge Dense System Solver (HDSS) Catalogue identifier: AEHU_v1_1 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHU_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 87 062 No. of bytes in distributed program, including test data, etc.: 1 069 110 Distribution format: tar.gz Programming language: Fortran90, C Computer: Parallel architectures: multiprocessors, computer clusters Operating system: Linux/Unix Has the code been vectorized or parallelized?: Yes, includes MPI primitives. RAM: Tested for up to 190 GB Classification: 6.5 External routines: MPI ( http://www.mpi-forum.org/), BLAS ( http://www.netlib.org/blas/), PLAPACK ( http://www.cs.utexas.edu/~plapack/), POOCLAPACK ( ftp://ftp.cs.utexas.edu/pub/rvdg/PLAPACK/pooclapack.ps) (code for PLAPACK and POOCLAPACK is included in the distribution). Catalogue identifier of previous version: AEHU_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 533 Does the new version supersede the previous version?: Yes Nature of problem: Huge scale dense systems of linear equations, Ax=B, beyond standard LAPACK capabilities. Solution method: The linear systems are solved by means of parallelized routines based on the LU factorization, using efficient secondary storage algorithms when the available main memory is insufficient. Reasons for new version: In many applications we need to guarantee a high accuracy in the solution of very large linear systems, and we can do so by using double-precision arithmetic. Summary of revisions: Version 1.1 can be used to solve linear systems using double-precision arithmetic. A new version of the initialization routine is provided. The user can choose the kind of arithmetic and the values of several parameters of the environment.
Running time: About 5 hours to solve a system with more than 200 000 equations and more than 10 000 right-hand side vectors using double-precision arithmetic on an eight-node commodity cluster with a total of 64 Intel cores.
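The basic strategy of factorizing the coefficient matrix once and then sweeping over the many right-hand sides in panels can be illustrated in-core with a few lines of Python. The sketch below is only an analogue of that idea, with made-up sizes; HDSS itself distributes the matrix with PLAPACK and streams the panels from secondary storage.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def solve_blocked(A, B, block=1024):
        """Solve A X = B for many right-hand sides by factorizing A once (LU)
        and sweeping over B in column blocks, the in-core analogue of
        streaming panels from disk."""
        lu_piv = lu_factor(A)                    # one LU factorization
        X = np.empty_like(B)
        for j in range(0, B.shape[1], block):    # stream the right-hand sides
            X[:, j:j + block] = lu_solve(lu_piv, B[:, j:j + block])
        return X

    # small demonstration
    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 500)) + 500 * np.eye(500)
    B = rng.standard_normal((500, 3000))
    X = solve_blocked(A, B)
    print(np.max(np.abs(A @ X - B)))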
HELAC-PHEGAS: A generator for all parton level processes
NASA Astrophysics Data System (ADS)
Cafarella, Alessandro; Papadopoulos, Costas G.; Worek, Malgorzata
2009-10-01
The updated version of the HELAC-PHEGAS event generator is presented. The matrix elements are calculated through Dyson-Schwinger recursive equations using the color connection representation. Phase-space generation is based on a multichannel approach, including optimization. HELAC-PHEGAS generates parton level events with all necessary information, in the most recent Les Houches Accord format, for the study of any process within the Standard Model in hadron and lepton colliders. New version program summaryProgram title: HELAC-PHEGAS Catalogue identifier: ADMS_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADMS_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 35 986 No. of bytes in distributed program, including test data, etc.: 380 214 Distribution format: tar.gz Programming language: Fortran Computer: All Operating system: Linux Classification: 11.1, 11.2 External routines: Optionally Les Houches Accord (LHA) PDF Interface library ( http://projects.hepforge.org/lhapdf/) Catalogue identifier of previous version: ADMS_v1_0 Journal reference of previous version: Comput. Phys. Comm. 132 (2000) 306 Does the new version supersede the previous version?: Yes, partly Nature of problem: One of the most striking features of final states in current and future colliders is the large number of events with several jets. Being able to predict their features is essential. To achieve this, the calculations need to describe as accurately as possible the full matrix elements for the underlying hard processes. Even at leading order, perturbation theory based on Feynman graphs runs into computational problems, since the number of graphs contributing to the amplitude grows as n!. Solution method: Recursive algorithms based on Dyson-Schwinger equations have been developed recently in order to overcome the computational obstacles. The calculation of the amplitude, using Dyson-Schwinger recursive equations, results in a computational cost growing asymptotically as 3^n, where n is the number of particles involved in the process. Off-shell subamplitudes are introduced, for which a recursion relation has been obtained, allowing one to express an n-particle amplitude in terms of subamplitudes with 1, 2, … up to (n-1) particles. The color connection representation is used in order to treat amplitudes involving colored particles. In the present version HELAC-PHEGAS can be used to efficiently obtain helicity amplitudes, total cross sections, and parton-level event samples in LHA format, for arbitrary multiparticle processes in the Standard Model in leptonic, pp¯ and pp collisions. Reasons for new version: Substantial improvements, major functionality upgrade. Summary of revisions: Color connection representation, efficient integration over PDF via the PARNI algorithm, interface to LHAPDF, parton level events generated in the most recent LHA format, kT reweighting for parton shower matching, numerical predictions for amplitudes for arbitrary processes for phase-space points provided by the user, new user interface and the possibility to run over computer clusters. Running time: Depending on the process studied. Usually from seconds to hours. References:A. Kanaki, C.G. Papadopoulos, Comput. Phys. Comm. 132 (2000) 306. C.G. Papadopoulos, Comput. Phys. Comm. 137 (2001) 247. URL: http://www.cern.ch/helac-phegas.
A program for the Bayesian Neural Network in the ROOT framework
NASA Astrophysics Data System (ADS)
Zhong, Jiahang; Huang, Run-Sheng; Lee, Shih-Chang
2011-12-01
We present a Bayesian Neural Network algorithm implemented in the TMVA package (Hoecker et al., 2007 [1]), within the ROOT framework (Brun and Rademakers, 1997 [2]). Compared to the conventional use of a Neural Network as a discriminator, this new implementation has advantages as a non-parametric regression tool, particularly for fitting probabilities. It provides functionalities including cost function selection, complexity control and uncertainty estimation. An example of such an application in High Energy Physics is shown. The algorithm is available in ROOT releases later than 5.29. Program summaryProgram title: TMVA-BNN Catalogue identifier: AEJX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEJX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: BSD license No. of lines in distributed program, including test data, etc.: 5094 No. of bytes in distributed program, including test data, etc.: 1,320,987 Distribution format: tar.gz Programming language: C++ Computer: Any computer system or cluster with a C++ compiler and UNIX-like operating system Operating system: Most UNIX/Linux systems. The application programs were thoroughly tested under Fedora and Scientific Linux CERN. Classification: 11.9 External routines: ROOT package version 5.29 or higher ( http://root.cern.ch) Nature of problem: Non-parametric fitting of multivariate distributions Solution method: An implementation of a Neural Network following the Bayesian statistical interpretation. Uses the Laplace approximation for the Bayesian marginalizations. Provides the functionalities of automatic complexity control and uncertainty estimation. Running time: Time consumption for the training depends substantially on the size of the input sample, the NN topology, the number of training iterations, etc. For the example in this manuscript, about 7 min was used on a PC/Linux with 2.0 GHz processors.
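The Laplace approximation used for the Bayesian marginalization can be illustrated on a model that is linear in its parameters, where the posterior really is Gaussian: the fitted weights sit at the minimum of the regularized cost, the posterior covariance is the inverse Hessian, and this covariance propagates into an uncertainty band on the prediction. The sketch below shows only this principle with invented hyperparameters; it is not the TMVA-BNN implementation, where the Hessian is evaluated at the trained network weights.

    import numpy as np

    def laplace_fit(phi, y, alpha=1.0, beta=25.0):
        """Minimize E(w) = beta/2 ||y - Phi w||^2 + alpha/2 ||w||^2 and
        approximate the posterior by a Gaussian centred at the minimum with
        covariance equal to the inverse Hessian (Laplace approximation)."""
        A = alpha * np.eye(phi.shape[1]) + beta * phi.T @ phi   # Hessian of E(w)
        w_map = beta * np.linalg.solve(A, phi.T @ y)            # cost minimum
        cov = np.linalg.inv(A)                                  # Laplace covariance
        return w_map, cov

    def predict(phi_new, w_map, cov, beta=25.0):
        mean = phi_new @ w_map
        var = 1.0 / beta + np.einsum('ij,jk,ik->i', phi_new, cov, phi_new)
        return mean, np.sqrt(var)                               # mean and 1-sigma band

    # toy data: noisy sine fitted with a small polynomial basis
    x = np.linspace(0, 1, 40)
    y = np.sin(2 * np.pi * x) + 0.2 * np.random.default_rng(0).standard_normal(40)
    phi = np.vander(x, 6, increasing=True)
    w_map, cov = laplace_fit(phi, y)
    print(predict(np.vander([0.5], 6, increasing=True), w_map, cov))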
Fast computation of close-coupling exchange integrals using polynomials in a tree representation
NASA Astrophysics Data System (ADS)
Wallerberger, Markus; Igenbergs, Katharina; Schweinzer, Josef; Aumayr, Friedrich
2011-03-01
The semi-classical atomic-orbital close-coupling method is a well-known approach for the calculation of cross sections in ion-atom collisions. It strongly relies on the fast and stable computation of exchange integrals. We present an upgrade to earlier implementations of the Fourier-transform method. For this purpose, we implement an extensive library for symbolic storage of polynomials, relying on sophisticated tree structures to allow fast manipulation and numerically stable evaluation. Using this library, we considerably speed up the creation and computation of exchange integrals. This enables us to compute cross sections for more complex collision systems. Program summaryProgram title: TXINT Catalogue identifier: AEHS_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHS_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 12 332 No. of bytes in distributed program, including test data, etc.: 157 086 Distribution format: tar.gz Programming language: Fortran 95 Computer: All with a Fortran 95 compiler Operating system: All with a Fortran 95 compiler RAM: Depends heavily on input, usually less than 100 MiB Classification: 16.10 Nature of problem: Analytical calculation of one- and two-center exchange matrix elements for the close-coupling method in the impact parameter model. Solution method: Similar to the code of Hansen and Dubois [1], we use the Fourier-transform method suggested by Shakeshaft [2] to compute the integrals. However, we heavily speed up the calculation using a library for symbolic manipulation of polynomials. Restrictions: We restrict ourselves to a defined collision system in the impact parameter model. Unusual features: A library for symbolic manipulation of polynomials, where polynomials are stored in a space-saving left-child right-sibling binary tree. This provides stable numerical evaluation and fast mutation while maintaining full compatibility with the original code. Additional comments: This program makes heavy use of the new features provided by the Fortran 90 standard, most prominently pointers, derived types and allocatable structures, as well as a small portion of Fortran 95. Only newer compilers support these features. The following compilers support all features needed by the program: GNU Fortran Compiler "gfortran" from version 4.3.0; GNU Fortran 95 Compiler "g95" from version 4.2.0; Intel Fortran Compiler "ifort" from version 11.0.
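A left-child right-sibling tree for multivariate polynomials, the storage scheme named above, can be sketched compactly: each node carries the exponent of one variable, a sibling pointer to the next term in that variable, and a child pointer to the sub-polynomial in the remaining variables. The toy Python version below is only an illustration of the data structure, not the Fortran 95 library.

    class Node:
        """One node of a left-child right-sibling tree holding a multivariate
        polynomial: exponent of the current variable, a numeric coefficient if
        it is a leaf, a pointer to its first child (terms in the next variable)
        and to its next sibling (next term in the same variable)."""
        def __init__(self, exponent, coeff=None):
            self.exponent = exponent
            self.coeff = coeff          # set for leaves only
            self.child = None           # first child  (next variable)
            self.sibling = None         # next sibling (same variable)

    def evaluate(node, values, depth=0):
        """Evaluate the polynomial rooted at `node` at the point `values`."""
        total = 0.0
        while node is not None:
            factor = values[depth] ** node.exponent
            if node.coeff is not None:                    # leaf term
                total += node.coeff * factor
            else:                                         # descend to next variable
                total += factor * evaluate(node.child, values, depth + 1)
            node = node.sibling                           # walk the sibling chain
        return total

    # build 3*x**2*y + 2*x and evaluate at x=2, y=5 (expected: 3*4*5 + 2*2 = 64)
    root = Node(2)                    # x**2 * ( ... )
    root.child = Node(1, coeff=3.0)   #   3*y
    root.sibling = Node(1, coeff=2.0) # 2*x
    print(evaluate(root, (2.0, 5.0)))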
NASA Astrophysics Data System (ADS)
Oh, Kwang Jin; Kang, Ji Hoon; Myung, Hun Joo
2012-02-01
We have revised the general purpose parallel molecular dynamics simulation program mm_par using object-oriented programming. We have parallelized the revised version using a hierarchical scheme in order to utilize more processors for a given system size. Benchmark results are presented here. New version program summaryProgram title: mm_par2.0 Catalogue identifier: ADXP_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADXP_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2 390 858 No. of bytes in distributed program, including test data, etc.: 25 068 310 Distribution format: tar.gz Programming language: C++ Computer: Any system operated by Linux or Unix Operating system: Linux Classification: 7.7 External routines: We provide wrappers for the FFTW [1] and Intel MKL library [2] FFT routines, the Numerical Recipes [3] FFT, random number generator and eigenvalue solver routines, the SPRNG [4] random number generator, the Mersenne Twister [5] random number generator, and a space filling curve routine. Catalogue identifier of previous version: ADXP_v1_0 Journal reference of previous version: Comput. Phys. Comm. 174 (2006) 560 Does the new version supersede the previous version?: Yes Nature of problem: Structural, thermodynamic, and dynamical properties of fluids and solids from microscopic scales to mesoscopic scales. Solution method: Molecular dynamics simulation in NVE, NVT, and NPT ensembles, Langevin dynamics simulation, dissipative particle dynamics simulation. Reasons for new version: First, object-oriented programming has been used, which is known to be open for extension and closed for modification. It is also known to be better for maintenance. Second, version 1.0 was based on the atom decomposition and domain decomposition schemes [6] for parallelization. However, atom decomposition is not popular due to its poor scalability. On the other hand, the domain decomposition scheme is better for scalability. It still has a limitation in utilizing a large number of cores on recent petascale computers due to the requirement that the domain size be larger than the potential cutoff distance. To go beyond such a limitation, a hierarchical parallelization scheme has been adopted in this new version and implemented using MPI [7] and OPENMP [8]. Summary of revisions: (1) Object-oriented programming has been used. (2) A hierarchical parallelization scheme has been adopted. (3) The SPME routine has been fully parallelized with a parallel 3D FFT using a volumetric decomposition scheme [9]. K.J.O. thanks Mr. Seung Min Lee for useful discussions on programming and debugging. Running time: Running time depends on system size and the methods used. For a test system containing a protein (PDB id: 5DHFR) with the CHARMM22 force field [10] and 7023 TIP3P [11] waters in a simulation box of dimension 62.23 Å×62.23 Å×62.23 Å, the benchmark results are given in Fig. 1. Here the potential cutoff distance was set to 12 Å and the switching function was applied from 10 Å for the force calculation in real space. For the SPME [12] calculation, the mesh dimensions K1, K2, and K3 were set to 64 and the interpolation order was set to 4. To do the fast Fourier transform, we used the Intel MKL library. All bonds involving hydrogen atoms were constrained using the SHAKE/RATTLE algorithms [13,14]. The code was compiled using Intel compiler version 11.1 and mvapich2 version 1.5. Fig. 2 shows the performance gain from using the CUDA-enabled version [15] of mm_par for the 5DHFR simulation in water on an Intel Core2Quad 2.83 GHz and a GeForce GTX 580. Even though mm_par2.0 is not yet ported to GPU, these performance data are useful for estimating the expected performance of mm_par2.0 on GPU. [Fig. 1 caption: Timing results for 1000 MD steps; 1, 2, 4, and 8 denote the number of OPENMP threads. Fig. 2 caption: Timing results for 1000 MD steps from double-precision simulation on CPU, single-precision simulation on GPU, and double-precision simulation on GPU.]
A Wideband Fast Multipole Method for the two-dimensional complex Helmholtz equation
NASA Astrophysics Data System (ADS)
Cho, Min Hyung; Cai, Wei
2010-12-01
A Wideband Fast Multipole Method (FMM) for the 2D Helmholtz equation is presented. It can evaluate the interactions between N particles governed by the fundamental solution of 2D complex Helmholtz equation in a fast manner for a wide range of complex wave number k, which was not easy with the original FMM due to the instability of the diagonalized conversion operator. This paper includes the description of theoretical backgrounds, the FMM algorithm, software structures, and some test runs. Program summaryProgram title: 2D-WFMM Catalogue identifier: AEHI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 4636 No. of bytes in distributed program, including test data, etc.: 82 582 Distribution format: tar.gz Programming language: C Computer: Any Operating system: Any operating system with gcc version 4.2 or newer Has the code been vectorized or parallelized?: Multi-core processors with shared memory RAM: Depending on the number of particles N and the wave number k Classification: 4.8, 4.12 External routines: OpenMP ( http://openmp.org/wp/) Nature of problem: Evaluate interaction between N particles governed by the fundamental solution of 2D Helmholtz equation with complex k. Solution method: Multilevel Fast Multipole Algorithm in a hierarchical quad-tree structure with cutoff level which combines low frequency method and high frequency method. Running time: Depending on the number of particles N, wave number k, and number of cores in CPU. CPU time increases as N log N.
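The hierarchical quad-tree that underlies the algorithm can be sketched independently of the expansions themselves: boxes are subdivided until each leaf holds at most a few particles. The Python fragment below builds such a tree over random points; the leaf size and depth limits are arbitrary, and the multipole/local translations and the wideband low/high-frequency switch are deliberately omitted.

    import numpy as np

    class QuadNode:
        """One box of the hierarchical quad-tree: centre, half-width, the
        indices of the particles it owns and, if subdivided, its children."""
        def __init__(self, centre, half, idx):
            self.centre, self.half, self.idx = centre, half, idx
            self.children = []

    def build_quadtree(points, node, max_per_leaf=10, max_level=8, level=0):
        if len(node.idx) <= max_per_leaf or level >= max_level:
            return node                                   # leaf box
        cx, cy = node.centre
        h = 0.5 * node.half
        buckets = {q: [] for q in range(4)}
        for i in node.idx:                                # each point goes to one child
            q = int((points[i, 0] >= cx) + 2 * (points[i, 1] >= cy))
            buckets[q].append(i)
        offsets = {0: (-h, -h), 1: (h, -h), 2: (-h, h), 3: (h, h)}
        for q, idx in buckets.items():
            if idx:
                child = QuadNode((cx + offsets[q][0], cy + offsets[q][1]), h, idx)
                node.children.append(build_quadtree(points, child,
                                                    max_per_leaf, max_level, level + 1))
        node.idx = []                                     # particles now live in children
        return node

    pts = np.random.default_rng(0).random((500, 2))
    root = build_quadtree(pts, QuadNode((0.5, 0.5), 0.5, list(range(len(pts)))))
    print(len(root.children))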
NASA Astrophysics Data System (ADS)
Angeli, C.; Cimiraglia, R.
2013-02-01
A symbolic program performing the Formal Reduction of Density Operators (FRODO), formerly developed in the MuPAD computer algebra system with the purpose of evaluating the matrix elements of the electronic Hamiltonian between internally contracted functions in a complete active space (CAS) scheme, has been rewritten in Mathematica. New version program summaryProgram title: FRODO Catalogue identifier: ADVY_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADVY_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3878 No. of bytes in distributed program, including test data, etc.: 170729 Distribution format: tar.gz Programming language: Mathematica Computer: Any computer on which the Mathematica computer algebra system can be installed Operating system: Linux Classification: 5 Catalogue identifier of previous version: ADVY_v1_0 Journal reference of previous version: Comput. Phys. Comm. 171 (2005) 63 Does the new version supersede the previous version?: No Nature of problem: In order to improve on the CAS-SCF wavefunction one can resort to multireference perturbation theory or configuration interaction based on internally contracted functions (ICFs), which are obtained by application of the excitation operators to the reference CAS-SCF wavefunction. The previous formulation of such matrix elements in the MuPAD computer algebra system has been rewritten using Mathematica. Solution method: The method adopted consists in successively eliminating all occurrences of inactive orbital indices (core and virtual) from the products of excitation operators which appear in the definition of the ICFs and in the electronic Hamiltonian expressed in the second quantization formalism. Reasons for new version: Some years ago we published in this journal a couple of papers [1,2], hereafter referred to as papers I and II, respectively, dedicated to the automated evaluation of the matrix elements of the molecular electronic Hamiltonian between internally contracted functions [3] (ICFs). In paper II the program FRODO (after Formal Reduction Of Density Operators) was presented with the purpose of providing working formulas for each occurrence of the ICFs. The original FRODO program was written in the MuPAD computer algebra system [4] and was actively used in our group for the generation of the matrix elements to be employed in the third-order n-electron valence state perturbation theory (NEVPT) [5-8] as well as in the internally contracted configuration interaction (IC-CI) [9]. We present a new version of the program FRODO written in the Mathematica system [10]. The reason for the rewriting of the program lies in the fact that, on the one hand, MuPAD no longer seems to be available as a stand-alone system and, on the other hand, Mathematica, due to its ubiquity, appears increasingly to be the most widely used computer algebra system nowadays. Restrictions: The program is limited to no more than doubly excited ICFs. Running time: The examples described in the Readme file take a few seconds to run. References: [1] C. Angeli, R. Cimiraglia, Comp. Phys. Comm. 166 (2005) 53. [2] C. Angeli, R. Cimiraglia, Comp. Phys. Comm. 171 (2005) 63. [3] H.-J. Werner, P. J. Knowles, Adv. Chem. Phys. 89 (1988) 5803. [4] B. Fuchssteiner, W.
Oevel: http://www.mupad.de Mupad research group, university of Paderborn. Mupad version 2.5.3 for Linux. [5] C. Angeli, R. Cimiraglia, S. Evangelisti, T. Leininger, J.-P. Malrieu, J. Chem. Phys. 114 (2001) 10252. [6] C. Angeli, R. Cimiraglia, J.-P. Malrieu, J. Chem. Phys. 117 (2002) 9138. [7] C. Angeli, B. Bories, A. Cavallini, R. Cimiraglia, J. Chem. Phys. 124 (2006) 054108. [8] C. Angeli, M. Pastore, R. Cimiraglia, Theor. Chem. Acc. 117 (2007) 743. [9] C. Angeli, R. Cimiraglia, Mol. Phys. in press, DOI:10.1080/00268976.2012.689872 [10] http://www.wolfram.com/Mathematica. Mathematica version 8 for Linux.
NASA Astrophysics Data System (ADS)
Miszczak, Jarosław Adam
2013-01-01
The presented package for the Mathematica computing system allows the harnessing of quantum random number generators (QRNG) for investigating the statistical properties of quantum states. The described package implements a number of functions for generating random states. The new version of the package adds the ability to use the on-line quantum random number generator service and implements new functions for retrieving lists of random numbers. Thanks to the introduced improvements, the new version provides faster access to high-quality sources of random numbers and can be used in simulations requiring large amounts of random data. New version program summaryProgram title: TRQS Catalogue identifier: AEKA_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKA_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 18 134 No. of bytes in distributed program, including test data, etc.: 2 520 49 Distribution format: tar.gz Programming language: Mathematica, C. Computer: Any computer supporting Mathematica in version 7 or higher. Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit). RAM: Case-dependent Supplementary material: Fig. 1 mentioned below can be downloaded. Classification: 4.15. External routines: Quantis software library (http://www.idquantique.com/support/quantis-trng.html) Catalogue identifier of previous version: AEKA_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 118 Does the new version supersede the previous version?: Yes Nature of problem: Generation of random density matrices and utilization of high-quality random numbers for the purpose of computer simulation. Solution method: Use of a physical quantum random number generator and an on-line service providing access to a source of true random numbers generated by a quantum random number generator. Reasons for new version: Added support for the high-speed on-line quantum random number generator and improved methods for retrieving lists of random numbers. Summary of revisions: The presented version provides two significant improvements. The first one is the ability to use the on-line Quantum Random Number Generation service developed by PicoQuant GmbH and the Nano-Optics groups at the Department of Physics of Humboldt University. The on-line service supported in version 2.0 of the TRQS package provides faster access to true randomness sources constructed using the laws of quantum physics. The service is freely available at https://qrng.physik.hu-berlin.de/. The use of this service allows the presented package to be used without the need for a physical quantum random number generator. The second improvement introduced in this version is the ability to retrieve arrays of random data directly from the source in use. This increases the speed of the random number generation, especially in the case of an on-line service, where it reduces the time necessary to establish the connection. Thanks to the speed improvement of the presented version, the package can now be used in simulations requiring larger amounts of random data. Moreover, the functions for generating random numbers provided by the current version of the package more closely follow the pattern of the functions for generating pseudo-random numbers provided in Mathematica.
Additional comments: Speed comparison: The implementation of the support for the QRNG on-line service provides a noticeable improvement in the speed of random number generation. For samples of real numbers of size 10^1, 10^2, …, 10^7, the times required to generate these samples using the Quantis USB device and the QRNG service are compared in Fig. 1. The presented results show that the use of the on-line service provides faster access to random numbers. One should note, however, that the speed gain can increase or decrease depending on the connection speed between the computer and the server providing the random numbers. Running time: Depends on the source of randomness used and the amount of random data used in the experiment. References: [1] M. Wahl, M. Leifgen, M. Berlin, T. Röhlicke, H.-J. Rahn, O. Benson, An ultrafast quantum random number generator with provably bounded output bias based on photon arrival time measurements, Applied Physics Letters 98, 171105 (2011). http://dx.doi.org/10.1063/1.3578456.
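Retrieving lists of random numbers from a hardware or on-line QRNG usually means converting raw random bytes into uniform doubles. A generic way to do this, packing 53 random bits into the mantissa, is sketched below; it is not part of the TRQS package or of the Quantis/QRNG-service APIs, and os.urandom merely stands in for the true random source.

    import os
    import numpy as np

    def bytes_to_uniform(raw):
        """Turn a string of raw bytes from a randomness source into uniform
        doubles in [0, 1) by keeping 53 random bits per 8-byte word."""
        words = np.frombuffer(raw, dtype=np.uint64)
        return (words >> np.uint64(11)) / float(1 << 53)

    # stand-in for the QRNG output: os.urandom supplies the raw bytes here
    u = bytes_to_uniform(os.urandom(8 * 1000))
    print(u.min(), u.max(), u.mean())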
CADNA_C: A version of CADNA for use with C or C++ programs
NASA Astrophysics Data System (ADS)
Lamotte, Jean-Luc; Chesneaux, Jean-Marie; Jézéquel, Fabienne
2010-11-01
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. The CADNA_C version enables this estimation in C or C++ programs, while the previous version had been developed for Fortran programs. The CADNA_C version has the same features as the previous one: with CADNA the numerical quality of any simulation program can be controlled. Furthermore by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. New version program summaryProgram title: CADNA_C Catalogue identifier: AEGQ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 60 075 No. of bytes in distributed program, including test data, etc.: 710 781 Distribution format: tar.gz Programming language: C++ Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM Operating system: LINUX, UNIX Classification: 6.5 Catalogue identifier of previous version: AEAT_v1_0 Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 933 Does the new version supersede the previous version?: No Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Reasons for new version: The previous version (AEAT_v1_0) enables the estimation of round-off error propagation in Fortran programs [2]. The new version has been developed to enable this estimation in C or C++ programs. Summary of revisions: The CADNA_C source code consists of one assembly language file (cadna_rounding.s) and twenty-three C++ language files (including three header files). cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the C++ compiler used. This assembly file contains routines which are frequently called in the CADNA_C C++ files to change the rounding mode. The C++ language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA_C specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. 
Note that on 64-bit processors the mathematical library associated with the GNU C++ compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, on which the random rounding mode is based. Therefore, if CADNA_C is used on a 64-bit processor with the GNU C++ compiler, mathematical functions are computed with rounding to the nearest; otherwise they are computed with the random rounding mode. It must be pointed out that the knowledge of the accuracy of the argument of a mathematical function is never lost. Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf and a reference guide named ref_cadna.pdf. The user guide shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. The reference guide briefly describes each function of the library. The source code (which consists of C++ and assembly files) is located in the src directory. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
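The core idea of Discrete Stochastic Arithmetic is easy to illustrate outside the library itself. The sketch below is a hedged Python analogue, not CADNA_C code: it emulates the random rounding mode by perturbing the last bit of every intermediate result, runs the same computation three times, and estimates how many significant digits the samples have in common.

```python
# Hedged illustration of Discrete Stochastic Arithmetic (not CADNA_C itself):
# emulate random rounding by perturbing the last bit of each intermediate
# result, run the computation N = 3 times, and estimate the number of
# significant digits shared by the three samples.
import math
import random

def rnd(x, ulp=2.0**-52):
    """Randomly perturb x by +/- one unit in the last place (a stand-in for
    rounding towards -inf or +inf chosen at random)."""
    return x * (1.0 + random.choice((-1.0, 1.0)) * ulp)

def alternating_sum(n):
    """Toy computation: partial sum of the alternating harmonic series,
    with a random rounding applied after every operation."""
    s = 0.0
    for k in range(1, n + 1):
        s = rnd(s + rnd((-1.0) ** (k + 1) / k))
    return s

samples = [alternating_sum(10_000) for _ in range(3)]
mean = sum(samples) / 3.0
sigma = math.sqrt(sum((s - mean) ** 2 for s in samples) / 2.0)
digits = math.log10(abs(mean) / sigma) if sigma > 0.0 else 15.0
print(f"result = {mean:.15g}, estimated exact significant digits ~ {digits:.1f}")
```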
ROOT — A C++ framework for petabyte data storage, statistical analysis and visualization
NASA Astrophysics Data System (ADS)
Antcheva, I.; Ballintijn, M.; Bellenot, B.; Biskup, M.; Brun, R.; Buncic, N.; Canal, Ph.; Casadei, D.; Couet, O.; Fine, V.; Franco, L.; Ganis, G.; Gheata, A.; Maline, D. Gonzalez; Goto, M.; Iwaszkiewicz, J.; Kreshuk, A.; Segura, D. Marcos; Maunder, R.; Moneta, L.; Naumann, A.; Offermann, E.; Onuchin, V.; Panacek, S.; Rademakers, F.; Russo, P.; Tadel, M.
2011-06-01
A new stable version ("production version") v5.28.00 of ROOT [1] has been published [2]. It features several major improvements in many areas, most notably in data storage performance as well as in statistics and graphics features. Some of these improvements had already been anticipated in the original publication, Antcheva et al. (2009) [3]. This version will be maintained for at least 6 months; new minor revisions ("patch releases") will be published [4] to solve problems reported with this version. New version program summaryProgram title: ROOT Catalogue identifier: AEFA_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFA_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Lesser Public License v.2.1 No. of lines in distributed program, including test data, etc.: 2 934 693 No. of bytes in distributed program, including test data, etc.: 1009 Distribution format: tar.gz Programming language: C++ Computer: Intel i386, Intel x86-64, Motorola PPC, Sun Sparc, HP PA-RISC Operating system: GNU/Linux, Windows XP/Vista/7, Mac OS X, FreeBSD, OpenBSD, Solaris, HP-UX, AIX Has the code been vectorized or parallelized?: Yes RAM: > 55 Mbytes Classification: 4, 9, 11.9, 14 Catalogue identifier of previous version: AEFA_v1_0 Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 2499 Does the new version supersede the previous version?: Yes Nature of problem: Storage, analysis and visualization of scientific data Solution method: Object store, wide range of analysis algorithms and visualization methods Reasons for new version: Added features and corrections of deficiencies Summary of revisions: The release notes at http://root.cern.ch/root/v528/Version528.news.html give a module-oriented overview of the changes in v5.28.00. Highlights include: File format: Reading of TTrees has been improved dramatically with respect to CPU time (30%) and notably with respect to disk space. Histograms: A new TEfficiency class has been provided to handle the calculation of efficiencies and their uncertainties, TH2Poly for polygon-shaped bins (e.g. maps), TKDE for kernel density estimation, and TSVDUnfold for singular value decomposition. Graphics: Kerning is now supported in TLatex, PostScript and PDF; a table of contents can be added to PDF files. A new font provides italic symbols. A TPad containing GL can be stored in a binary (i.e. non-vector) image file; support for full-scene anti-aliasing has been added. Usability enhancements to EVE. Math: New interfaces for generating random numbers according to a given distribution, goodness-of-fit tests of unbinned data, binning of multidimensional data, and several advanced statistical functions were added. RooFit: Introduction of HistFactory; major additions to RooStats. TMVA: Updated to version 4.1.0, adding e.g. support for simultaneous classification of multiple output classes for several multivariate methods. PROOF: Many new features, adding to PROOF's usability, plus improvements and fixes. PyROOT: Support of Python 3 has been added. Tutorials: Several new tutorials were provided for the above new features (notably RooStats). A detailed list of all the changes is available at http://root.cern.ch/root/htmldoc/examples/V5. Additional comments: For an up-to-date author list see: http://root.cern.ch/drupal/content/root-development-team and http://root.cern.ch/drupal/content/former-root-developers.
The distribution file for this program is over 30 Mbytes and therefore is not delivered directly when a download or e-mail request is made. Instead, an HTML file giving details of how the program can be obtained is sent. Running time: Depends on the data size and the complexity of the analysis algorithms. References: [1] http://root.cern.ch. [2] http://root.cern.ch/drupal/content/production-version-528. [3] I. Antcheva, M. Ballintijn, B. Bellenot, M. Biskup, R. Brun, N. Buncic, Ph. Canal, D. Casadei, O. Couet, V. Fine, L. Franco, G. Ganis, A. Gheata, D. Gonzalez Maline, M. Goto, J. Iwaszkiewicz, A. Kreshuk, D. Marcos Segura, R. Maunder, L. Moneta, A. Naumann, E. Offermann, V. Onuchin, S. Panacek, F. Rademakers, P. Russo, M. Tadel, ROOT — A C++ framework for petabyte data storage, statistical analysis and visualization, Comput. Phys. Commun. 180 (2009) 2499. [4] http://root.cern.ch/drupal/content/root-version-v5-28-00-patch-release-notes.
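As a flavour of the kind of workflow these I/O and histogramming improvements target, here is a minimal PyROOT sketch. It assumes a working ROOT installation with PyROOT enabled and uses only long-standing ROOT classes (none of the new v5.28 features); the file and histogram names are made up.

```python
# Minimal PyROOT sketch: fill a histogram and persist it in a ROOT file,
# the basic object-store workflow whose performance v5.28.00 improves.
import ROOT

f = ROOT.TFile("demo.root", "RECREATE")                 # file-backed object store
h = ROOT.TH1F("h_gaus", "Gaussian sample;x;entries", 100, -5.0, 5.0)
h.FillRandom("gaus", 100000)                            # sample ROOT's built-in Gaussian
h.Write()                                               # write the histogram to the file
f.Close()
print("wrote demo.root")
```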
NASA Astrophysics Data System (ADS)
Lee, J. S.; Carena, M.; Ellis, J.; Pilaftsis, A.; Wagner, C. E. M.
2009-02-01
We describe the Fortran code CPsuperH2.0, which contains several improvements and extensions of its predecessor CPsuperH. It implements improved calculations of the Higgs-boson pole masses, notably a full treatment of the 4×4 neutral Higgs propagator matrix including the Goldstone boson and a more complete treatment of threshold effects in self-energies and Yukawa couplings, improved treatments of two-body Higgs decays, some important three-body decays, and two-loop Higgs-mediated contributions to electric dipole moments. CPsuperH2.0 also implements an integrated treatment of several B-meson observables, including the branching ratios of B→μμ, B→ττ, B→τν, B→Xγ and the latter's CP-violating asymmetry A, and the supersymmetric contributions to the B⁰s,d-B̄⁰s,d mass differences. These additions make CPsuperH2.0 an attractive integrated tool for analyzing supersymmetric CP and flavour physics as well as searches for new physics at high-energy colliders such as the Tevatron, LHC and linear colliders. Program summaryProgram title: CPsuperH2.0 Catalogue identifier: ADSR_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADSR_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 13 290 No. of bytes in distributed program, including test data, etc.: 89 540 Distribution format: tar.gz Programming language: Fortran 77 Computer: PC running under Linux and computers in Unix environment Operating system: Linux RAM: 32 Mbytes Classification: 11.1 Catalogue identifier of the previous version: ADSR_v1_0 Journal reference of the previous version: CPC 156 (2004) 283 Does the new version supersede the previous version?: Yes Nature of problem: The calculations of mass spectrum, decay widths and branching ratios of the neutral and charged Higgs bosons in the Minimal Supersymmetric Standard Model with explicit CP violation have been improved. The program is based on recent renormalization-group-improved diagrammatic calculations that include dominant higher-order logarithmic and threshold corrections, b-quark Yukawa-coupling resummation effects and improved treatment of Higgs-boson pole-mass shifts. The couplings of the Higgs bosons to the Standard Model gauge bosons and fermions, to their supersymmetric partners and all the trilinear and quartic Higgs-boson self-couplings are also calculated. The new implementations include a full treatment of the 4×4(2×2) neutral (charged) Higgs propagator matrix together with the center-of-mass dependent Higgs-boson couplings to gluons and photons, two-loop Higgs-mediated contributions to electric dipole moments, and an integrated treatment of several B-meson observables. Solution method: One-dimensional numerical integration for several Higgs-decay modes, iterative treatment of the threshold corrections and Higgs-boson pole masses, and the numerical diagonalization of the neutralino mass matrix. Reasons for new version: Mainly to provide a coherent numerical framework which consistently calculates observables for both low- and high-energy experiments. Summary of revisions: Improved treatment of Higgs-boson masses and propagators. Improved treatment of Higgs-boson couplings and decays. Higgs-mediated two-loop electric dipole moments. B-meson observables. Running time: Less than 0.1 seconds. The program may be obtained from http://www.hep.man.ac.uk/u/jslee/CPsuperH.html.
Automated symbolic calculations in nonequilibrium thermodynamics
NASA Astrophysics Data System (ADS)
Kröger, Martin; Hütter, Markus
2010-12-01
We cast the Jacobi identity for continuous fields into a local form which eliminates the need to perform any partial integration at the expense of performing variational derivatives. This allows us to test the Jacobi identity definitively and efficiently and to provide equations between different components defining a potential Poisson bracket. We provide a simple Mathematica TM notebook which allows one to perform this task conveniently, and which offers some additional functionalities of use within the framework of nonequilibrium thermodynamics: reversible equations of change for fields, and the conservation of entropy during the reversible dynamics. Program summaryProgram title: Poissonbracket.nb Catalogue identifier: AEGW_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 227 952 No. of bytes in distributed program, including test data, etc.: 268 918 Distribution format: tar.gz Programming language: Mathematica TM 7.0 Computer: Any computer running Mathematica TM 6.0 and later versions Operating system: Linux, MacOS, Windows RAM: 100 MB Classification: 4.2, 5, 23 Nature of problem: Testing the Jacobi identity can be a very complex task depending on the structure of the Poisson bracket. The Mathematica TM notebook provided here solves this problem using a novel symbolic approach based on inherent properties of the variational derivative, highly suitable for the present tasks. As a by-product, calculations performed with the Poisson bracket assume a compact form. Solution method: The problem is first cast into a form which eliminates the need to perform partial integration for arbitrary functionals at the expense of performing variational derivatives. The corresponding equations are conveniently obtained using the symbolic programming environment Mathematica TM. Running time: For the test cases and most typical cases in the literature, the running time is of the order of seconds or minutes, respectively.
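The identity being tested is easiest to see in a finite-dimensional analogue. The sketch below is plain SymPy, not the notebook's variational-derivative machinery for fields: it verifies the Jacobi identity for the canonical bracket in one degree of freedom, with arbitrarily chosen test functions.

```python
# Finite-dimensional analogue of the test the notebook automates for fields:
# verify {f,{g,h}} + {g,{h,f}} + {h,{f,g}} = 0 for the canonical bracket.
import sympy as sp

q, p = sp.symbols("q p")

def pb(f, g):
    """Canonical Poisson bracket {f, g} = df/dq dg/dp - df/dp dg/dq."""
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

f, g, h = q**2 * p, sp.sin(q) + p**2, q * p**3
jacobi = pb(f, pb(g, h)) + pb(g, pb(h, f)) + pb(h, pb(f, g))
print(sp.simplify(jacobi))   # prints 0: the canonical bracket satisfies the identity
```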
NASA Astrophysics Data System (ADS)
Stoitsov, M. V.; Schunck, N.; Kortelainen, M.; Michel, N.; Nam, H.; Olsen, E.; Sarich, J.; Wild, S.
2013-06-01
We describe the new version 2.00d of the code HFBTHO that solves the nuclear Skyrme-Hartree-Fock (HF) or Skyrme-Hartree-Fock-Bogoliubov (HFB) problem by using the cylindrical transformed deformed harmonic oscillator basis. In the new version, we have implemented the following features: (i) the modified Broyden method for non-linear problems, (ii) optional breaking of reflection symmetry, (iii) calculation of axial multipole moments, (iv) finite temperature formalism for the HFB method, (v) linear constraint method based on the approximation of the Random Phase Approximation (RPA) matrix for multi-constraint calculations, (vi) blocking of quasi-particles in the Equal Filling Approximation (EFA), (vii) framework for generalized energy density with arbitrary density-dependences, and (viii) shared memory parallelism via OpenMP pragmas. Program summaryProgram title: HFBTHO v2.00d Catalog identifier: ADUI_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADUI_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 167228 No. of bytes in distributed program, including test data, etc.: 2672156 Distribution format: tar.gz Programming language: FORTRAN-95. Computer: Intel Pentium-III, Intel Xeon, AMD-Athlon, AMD-Opteron, Cray XT5, Cray XE6. Operating system: UNIX, LINUX, WindowsXP. RAM: 200 Mwords Word size: 8 bits Classification: 17.22. Does the new version supercede the previous version?: Yes Catalog identifier of previous version: ADUI_v1_0 Journal reference of previous version: Comput. Phys. Comm. 167 (2005) 43 Nature of problem: The solution of self-consistent mean-field equations for weakly-bound paired nuclei requires a correct description of the asymptotic properties of nuclear quasi-particle wave functions. In the present implementation, this is achieved by using the single-particle wave functions of the transformed harmonic oscillator, which allows for an accurate description of deformation effects and pairing correlations in nuclei arbitrarily close to the particle drip lines. Solution method: The program uses the axial Transformed Harmonic Oscillator (THO) single- particle basis to expand quasi-particle wave functions. It iteratively diagonalizes the Hartree-Fock-Bogoliubov Hamiltonian based on generalized Skyrme-like energy densities and zero-range pairing interactions until a self-consistent solution is found. A previous version of the program was presented in: M.V. Stoitsov, J. Dobaczewski, W. Nazarewicz, P. Ring, Comput. Phys. Commun. 167 (2005) 43-63. Reasons for new version: Version 2.00d of HFBTHO provides a number of new options such as the optional breaking of reflection symmetry, the calculation of axial multipole moments, the finite temperature formalism for the HFB method, optimized multi-constraint calculations, the treatment of odd-even and odd-odd nuclei in the blocking approximation, and the framework for generalized energy density with arbitrary density-dependences. It is also the first version of HFBTHO to contain threading capabilities. 
Summary of revisions: The following have been implemented: the modified Broyden method; optional breaking of reflection symmetry; the calculation of all axial multipole moments up to λ=8; the finite temperature formalism for the HFB method; the linear constraint method based on the approximation of the Random Phase Approximation (RPA) matrix for multi-constraint calculations; the blocking of quasi-particles in the Equal Filling Approximation (EFA); the framework for generalized energy density functionals with arbitrary density-dependence; and shared memory parallelism via OpenMP pragmas. Restrictions: Axial and time-reversal symmetries are assumed. Unusual features: The user must have access to the LAPACK subroutines DSYEVD, DSYTRF and DSYTRI, and their dependences, which compute eigenvalues and eigenfunctions of real symmetric matrices, the LAPACK subroutines DGETRI and DGETRF, which invert arbitrary real matrices, and the BLAS routines DCOPY, DSCAL, DGEMM and DGEMV for double-precision linear algebra (or provide another set of subroutines that can perform such tasks). The BLAS and LAPACK subroutines can be obtained from the Netlib Repository at the University of Tennessee, Knoxville: http://netlib2.cs.utk.edu/. Running time: Highly variable, as it depends on the nucleus, the size of the basis, the requested accuracy, the requested configuration, the compiler and libraries, and the hardware architecture. As an order of magnitude, expect a few seconds for ground-state configurations in small bases (N≈8-12), up to a few minutes for a very deformed configuration of a heavy nucleus with a large basis (N>20).
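The role of a Broyden-type update in a self-consistent solver can be sketched in a few lines. The toy problem below is not HFBTHO's modified Broyden density mixing; it is a plain "good Broyden" quasi-Newton iteration on a made-up two-variable self-consistency condition x = F(x), the kind of accelerator such codes use instead of simple linear mixing.

```python
# Hedged quasi-Newton sketch (good Broyden update on a toy map, not HFBTHO's
# modified Broyden mixing): solve the self-consistency condition x = F(x).
import numpy as np

def F(x):
    # toy self-consistent map standing in for one mean-field iteration
    return np.array([np.cos(x[1]), 0.5 * np.sin(x[0] + x[1])])

x = np.zeros(2)
B = -np.eye(2)                      # initial Jacobian guess for G(x) = F(x) - x
for it in range(50):
    g = F(x) - x
    if np.linalg.norm(g) < 1e-12:
        break
    dx = -np.linalg.solve(B, g)     # quasi-Newton step
    g_new = F(x + dx) - (x + dx)
    B += np.outer(g_new - g - B @ dx, dx) / (dx @ dx)   # Broyden rank-1 update
    x = x + dx
print(f"converged in {it} iterations, residual = {np.linalg.norm(F(x) - x):.2e}")
```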
RIS3: A program for relativistic isotope shift calculations
NASA Astrophysics Data System (ADS)
Nazé, C.; Gaidamauskas, E.; Gaigalas, G.; Godefroid, M.; Jönsson, P.
2013-09-01
An atomic spectral line is characteristic of the element producing the spectrum. The line also depends on the isotope. The program RIS3 (Relativistic Isotope Shift) calculates the electron density at the origin and the normal and specific mass shift parameters. Combining these electronic quantities with available nuclear data, isotope-dependent energy level shifts are determined. Program summaryProgram title: RIS3 Catalogue identifier: ADEK_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADEK_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5147 No. of bytes in distributed program, including test data, etc.: 32869 Distribution format: tar.gz Programming language: Fortran 77. Computer: HP ProLiant BL465c G7 CTO. Operating system: Centos 5.5, which is a Linux distribution compatible with Red Hat Enterprise Advanced Server. Classification: 2.1. Catalogue identifier of previous version: ADEK_v1_0 Journal reference of previous version: Comput. Phys. Comm. 100 (1997) 81 Subprograms used: ADZL_v1_1, GRASP2K version 1_1 (to be published). Does the new version supersede the previous version?: Yes Nature of problem: Prediction of level and transition isotope shifts in atoms using four-component relativistic wave functions. Solution method: The nuclear motion and volume effects are treated in first-order perturbation theory. Writing the zero-order wave function as a configuration state expansion |Ψ⟩ = ∑_μ c_μ |Φ(γ_μ P J M_J)⟩, where P, J and M_J are, respectively, the parity and angular momentum quantum numbers, the electron density at the nucleus and the normal and specific mass shift parameters may generally be expressed as ∑_{μν} c_μ c_ν ⟨γ_μ P J M_J|V|γ_ν P J M_J⟩, where V is the relevant operator. The matrix elements, in turn, can be expressed as sums over radial integrals multiplied by angular coefficients. All the angular coefficients are calculated using routines from the GRASP2K version 1_1 package [1]. Reasons for new version: This new version takes the nuclear recoil corrections into account within the (m²/M) approximation [2] and also allows storage of the angular coefficients for a series of calculations within a given isoelectronic sequence. Furthermore, the program JJ2LSJ, a module of the GRASP2K version 1_1 toolkit that allows a transformation of ASFs from a jj-coupled CSF basis into an LSJ-coupled CSF basis, has been especially adapted to present RIS3 results using LSJ labels of the states. This additional tool is called RIS3_LSJ. Summary of revisions: This version is compatible with the new angular approach of the GRASP2K version 1_1 package [1] and can store the necessary angular coefficients. According to the formalism of the relativistic nuclear recoil, the "uncorrected" expression of the normal mass shift has been fundamentally modified compared with its expression in [3]. Restrictions: The complexity of the cases that can be handled is entirely determined by the GRASP2K package [1] used for the generation of the electronic wave functions. Unusual features: Angular data is stored on disk and can be reused. LSJ labels are used for the states. Running time: As an example, we evaluated the isotope shift parameters and the electron density at the origin using the wave functions of a Be-like system.
We used the MCDHF wave function built on a complete active space (CAS) with n=8 (296 626 CSFs, 62 orbitals) that contains 3 non-interacting blocks of given parity and J values, involving 6 different eigenvalues in total. Calculations take around 10 h on one AMD Opteron 6100 @ 2.3 GHz CPU with 8 cores (64 GB DDR3 RAM at 1.333 GHz). If angular files are already available, the time is reduced to 20 min. The storage of the angular data takes 139 MB and 7.2 GB for the one-body and the two-body elements, respectively. References: [1] P. Jönsson, G. Gaigalas, J. Bieroń, C. Froese Fischer, I.P. Grant, New version: GRASP2K relativistic atomic structure package, Comput. Phys. Commun. 184 (9) (2013) 2197-2203. [2] E. Gaidamauskas, C. Nazé, P. Rynkun, G. Gaigalas, P. Jönsson, M. Godefroid, J. Phys. B: At. Mol. Opt. Phys. 44 (17) (2011) 175003. [3] P. Jönsson, C. Froese Fischer, Comput. Phys. Commun. 100 (1997) 81-92.
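The expectation-value expression in the solution method above is just a quadratic form in the mixing coefficients. A numerical toy (arbitrary made-up numbers, not RIS3 data or output):

```python
# Toy analogue of the RIS3 expression: an electronic isotope-shift parameter
# as the CSF-expansion expectation value
#   sum_{mu,nu} c_mu c_nu <gamma_mu P J M_J | V | gamma_nu P J M_J> = c^T V c.
import numpy as np

c = np.array([0.95, 0.28, 0.14])        # mixing coefficients of the atomic state
c /= np.linalg.norm(c)                  # normalize the expansion
V = np.array([[1.20, 0.05, 0.00],       # symmetric matrix of the operator V
              [0.05, 0.80, 0.02],       # in the CSF basis (toy values)
              [0.00, 0.02, 0.40]])
print("expectation value:", c @ V @ c)
```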
XMDS2: Fast, scalable simulation of coupled stochastic partial differential equations
NASA Astrophysics Data System (ADS)
Dennis, Graham R.; Hope, Joseph J.; Johnsson, Mattias T.
2013-01-01
XMDS2 is a cross-platform, GPL-licensed, open source package for numerically integrating initial value problems that range from a single ordinary differential equation up to systems of coupled stochastic partial differential equations. The equations are described in a high-level XML-based script, and the package generates low-level optionally parallelised C++ code for the efficient solution of those equations. It combines the advantages of high-level simulations, namely fast and low-error development, with the speed, portability and scalability of hand-written code. XMDS2 is a complete redesign of the XMDS package, and features support for a much wider problem space while also producing faster code. Program summaryProgram title: XMDS2 Catalogue identifier: AENK_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENK_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 2 No. of lines in distributed program, including test data, etc.: 872490 No. of bytes in distributed program, including test data, etc.: 45522370 Distribution format: tar.gz Programming language: Python and C++. Computer: Any computer with a Unix-like system, a C++ compiler and Python. Operating system: Any Unix-like system; developed under Mac OS X and GNU/Linux. RAM: Problem dependent (roughly 50 bytes per grid point) Classification: 4.3, 6.5. External routines: The external libraries required are problem-dependent. Uses FFTW3 Fourier transforms (used only for FFT-based spectral methods), dSFMT random number generation (used only for stochastic problems), MPI message-passing interface (used only for distributed problems), HDF5, GNU Scientific Library (used only for Bessel-based spectral methods) and a BLAS implementation (used only for non-FFT-based spectral methods). Nature of problem: General coupled initial-value stochastic partial differential equations. Solution method: Spectral method with method-of-lines integration Running time: Determined by the size of the problem
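To make the problem class concrete without reproducing XMDS2's XML syntax, here is a hand-rolled Euler-Maruyama integration of a single stochastic ODE; an XMDS2 script would describe the same equation at a high level and generate optimised, optionally parallelised C++ for it.

```python
# Sketch of the problem class only (not XMDS2 script syntax): integrate the
# stochastic ODE dX = -X dt + sigma dW with the Euler-Maruyama scheme.
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps, sigma = 1.0e-3, 10_000, 0.3
x = 1.0
for _ in range(n_steps):
    x += -x * dt + sigma * np.sqrt(dt) * rng.standard_normal()
print("X(t = 10) for one noise realisation:", x)
```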
New version: GRASP2K relativistic atomic structure package
NASA Astrophysics Data System (ADS)
Jönsson, P.; Gaigalas, G.; Bieroń, J.; Fischer, C. Froese; Grant, I. P.
2013-09-01
A revised version of GRASP2K [P. Jönsson, X. He, C. Froese Fischer, I.P. Grant, Comput. Phys. Commun. 177 (2007) 597] is presented. It supports earlier non-block and block versions of codes as well as a new block version in which the njgraf library module [A. Bar-Shalom, M. Klapisch, Comput. Phys. Commun. 50 (1988) 375] has been replaced by the librang angular package developed by Gaigalas based on the theory of [G. Gaigalas, Z.B. Rudzikas, C. Froese Fischer, J. Phys. B: At. Mol. Phys. 30 (1997) 3747, G. Gaigalas, S. Fritzsche, I.P. Grant, Comput. Phys. Commun. 139 (2001) 263]. Tests have shown that errors encountered by njgraf do not occur with the new angular package. The three versions are denoted v1, v2, and v3, respectively. In addition, in v3, the coefficients of fractional parentage have been extended to j=9/2, making calculations feasible for the lanthanides and actinides. Changes in v2 include minor improvements. For example, the new version of rci2 may be used to compute quantum electrodynamic (QED) corrections only from selected orbitals. In v3, a new program, jj2lsj, reports the percentage composition of the wave function in LSJ and the program rlevels has been modified to report the configuration state function (CSF) with the largest coefficient of an LSJ expansion. The bioscl2 and bioscl3 application programs have been modified to produce a file of transition data with one record for each transition in the same format as in ATSP2K [C. Froese Fischer, G. Tachiev, G. Gaigalas, M.R. Godefroid, Comput. Phys. Commun. 176 (2007) 559], which identifies each atomic state by the total energy and a label for the CSF with the largest expansion coefficient in LSJ intermediate coupling. All versions of the codes have been adapted for 64-bit computer architecture. Program SummaryProgram title: GRASP2K, version 1_1 Catalogue identifier: ADZL_v1_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZL_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 730252 No. of bytes in distributed program, including test data, etc.: 14808872 Distribution format: tar.gz Programming language: Fortran. Computer: Intel Xeon, 2.66 GHz. Operating system: Suse, Ubuntu, and Debian Linux 64-bit. RAM: 500 MB or more Classification: 2.1. Catalogue identifier of previous version: ADZL_v1_0 Journal reference of previous version: Comput. Phys. Comm. 177 (2007) 597 Does the new version supersede the previous version?: Yes Nature of problem: Prediction of atomic properties — atomic energy levels, oscillator strengths, radiative decay rates, hyperfine structure parameters, Landé gJ-factors, and specific mass shift parameters — using a multiconfiguration Dirac-Hartree-Fock approach. Solution method: The computational method is the same as in the previous GRASP2K [1] version except that for v3 codes the njgraf library module [2] for recoupling has been replaced by librang [3,4]. Reasons for new version: New angular libraries with improved performance are available. Also methodology for transforming from jj- to LSJ-coupling has been developed. Summary of revisions: New angular libraries where the coefficients of fractional parentage have been extended to j=9/2, making calculations feasible for the lanthanides and actinides. 
Inclusion of a new program jj2lsj, which reports the percentage composition of the wave function in LSJ. Transition programs have been modified to produce a file of transition data with one record for each transition in the same format as Atsp2K [C. Froese Fischer, G. Tachiev, G. Gaigalas and M.R. Godefroid, Comput. Phys. Commun. 176 (2007) 559], which identifies each atomic state by the total energy and a label for the CSF with the largest expansion coefficient in LSJ intermediate coupling. Updated to 64-bit architecture. A comprehensive user manual in pdf format for the program package has been added. Restrictions: The packing algorithm restricts the maximum number of orbitals to be ≤214. The tables of reduced coefficients of fractional parentage used in this version are limited to subshells with j≤9/2 [5]; occupied subshells with j>9/2 are, therefore, restricted to a maximum of two electrons. Some other parameters, such as the maximum number of subshells of a CSF outside a common set of closed shells are determined by a parameter.def file that can be modified prior to compile time. Unusual features: The bioscl3 program reports transition data in the same format as in Atsp2K [6], and the data processing program tables of the latter package can be used. The tables program takes a name.lsj file, usually a concatenated file of all the .lsj transition files for a given atom or ion, and finds the energy structure of the levels and the multiplet transition arrays. The tables posted at the website http://atoms.vuse.vanderbilt.edu are examples of tables produced by the tables program. With the extension of coefficients of fractional parentage to j=9/2, calculations for the lanthanides and actinides become possible. Running time: CPU time required to execute test cases: 70.5 s.
NASA Astrophysics Data System (ADS)
Cipolla, Sam J.
2011-11-01
In this new version of ISICS, called ISICS2011, a few omissions and incorrect entries in the built-in file of electron binding energies have been corrected; operational situations leading to un-physical behavior have been identified and flagged. New version program summaryProgram title: ISICS2011 Catalogue identifier: ADDS_v5_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADDS_v5_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 6011 No. of bytes in distributed program, including test data, etc.: 130 587 Distribution format: tar.gz Programming language: C Computer: 80486 or higher-level PCs Operating system: WINDOWS XP and all earlier operating systems Classification: 16.7 Catalogue identifier of previous version: ADDS_v4_0 Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 1716. Does the new version supersede the previous version?: Yes Nature of problem: Ionization and X-ray production cross section calculations for ion-atom collisions. Solution method: Numerical integration of the form factor using a logarithmic transform and Gaussian quadrature, plus exact integration limits. Reasons for new version: A general need for higher precision in the output format for projectile energies; some built-in binding energies needed correcting; some anomalous results occurred due to faulty read-in data or calculated parameters becoming un-physical; erroneous calculations could result for the L and M shells when restricted K-shell options were inadvertently chosen; and a need for general compatibility with ISICSoo, a companion C++ version portable to Linux and MacOS platforms, which has been submitted for publication in the CPC Program Library at approximately the same time as this present new standalone version of ISICS [1]. Summary of revisions: The format field for projectile energies in the output has been expanded from two to four decimal places in order to distinguish between closely spaced energy values. There were a few entries in the executable binding energy file that needed correcting: the K shell of Eu, the M shells of Zn and the M1 shell of Kr. The corrected values were also entered in the ENERGY.DAT file. In addition, an alternate data file of binding energies is included, called ENERGY_GW.DAT, which is more up-to-date [2]. Likewise, an alternate atomic parameters data file is now included, called FLOURE_JC.DAT, which contains more up-to-date [3] fluorescence yields for the K and L shells and Coster-Kronig parameters for the L shell. Both data files can be read in using the -f usage option. To do this, the original energy file should be renamed and saved (e.g., ENERGY_BB.DAT) and the new file (ENERGY_GW.DAT) should be duplicated as ENERGY.DAT to be read in using the -f option. Similarly for reading in an alternate FLOURE.DAT file. As with previous versions, the user can also simply input different values of any input quantity by invoking the "specify your own parameters" option from the main menu. This option can also be used simply to check the built-in values of the parameters. If it still happens that a zero binding energy for a particular sub-shell is read in, the program will not completely abort, but will calculate results for the other sub-shells while setting the affected sub-shell output to zero.
In calculating the Coulomb deflection factor, if the quantity inside the radical sign defining the parameter z becomes zero or negative, to prevent the program from aborting, the PWBA cross sections are still calculated while the ECPSSR cross sections are set to zero. This situation can happen for very low energy collisions, such as were noticed for helium ions on copper at energies of E⩽11.2 keV. It was observed during the engineering of ISICSoo [1] that erroneous calculations could result for the L- and M-shell cases when restricted K-shell R or HSR scaling options were inappropriately chosen. The program has now been fixed so that these inappropriate options are ignored for the L and M shells. In the previous versions, the usage for inputting a batch data file was incorrectly stated in the Users Manual as -Bxxx; the correct designation is -Fxxx, or alternatively, -Ixxx, as indicated on the usage screen in running the program. A revised Users Manual is also available. Restrictions: The consumed CPU time increases with the atomic shell (K, L, M), but execution is still very fast. Running time: This depends on which shell and the number of different energies to be used in the calculation. The running time is not significantly changed from the previous version.
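The data-file swap described in the summary of revisions amounts to two file operations. A small convenience sketch (the file names come from the abstract; the -f usage option belongs to ISICS2011 itself and is not reproduced here):

```python
# Put the newer binding-energy table in place so that ISICS2011 reads it in
# via its -f usage option, while keeping the original table under a new name.
import shutil

shutil.move("ENERGY.DAT", "ENERGY_BB.DAT")    # save the original built-in table
shutil.copy("ENERGY_GW.DAT", "ENERGY.DAT")    # the more up-to-date table becomes ENERGY.DAT
```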
Runwien: a text-based interface for the WIEN package
NASA Astrophysics Data System (ADS)
Otero de la Roza, A.; Luaña, Víctor
2009-05-01
A new text-based interface for WIEN2k, the full-potential linearized augmented plane-waves (FPLAPW) program, is presented. This code provides an easy to use, yet powerful way of generating arbitrarily large sets of calculations. Thus, properties over a potential energy surface and WIEN2k parameter exploration can be calculated using a simple input text file. This interface also provides new capabilities to the WIEN2k package, such as the calculation of elastic constants on hexagonal systems or the automatic gathering of relevant information. Additionally, runwien is modular, flexible and intuitive. Program summaryProgram title: runwien Catalogue identifier: AECM_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AECM_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL version 3 No. of lines in distributed program, including test data, etc.: 62 567 No. of bytes in distributed program, including test data, etc.: 610 973 Distribution format: tar.gz Programming language: gawk (with locale POSIX or similar) Computer: All running Unix, Linux Operating system: Unix, GNU/Linux Classification: 7.3 External routines: WIEN2k ( http://www.wien2k.at/), GAWK ( http://www.gnu.org/software/gawk/), rename by L. Wall, a Perl script which renames files, modified by R. Barker to check for the existence of target files, gnuplot ( http://www.gnuplot.info/) Subprograms used:Cat Id: ADSY_v1_0/AECB_v1_0, Title: GIBBS/CRITIC, Reference: CPC 158 (2004) 57/CPC 999 (2009) 999 Nature of problem: Creation of a text-based, batch-oriented interface for the WIEN2k package. Solution method: WIEN2k solves the Kohn-Sham equations of a solid using the FPLAPW formalism. Runwien interprets an input file containing the description of the geometry and structure of the solid and drives the execution of the WIEN2k programs. The input is simplified thanks to the default values of the WIEN2k parameters known to runwien. Additional comments: Designed for WIEN2k versions 06.4, 07.2, 08.2, and 08.3. Running time: For the test case (TiC), a single geometry takes 5 to 10 minutes on a typical desktop PC (Intel Pentium 4, 3.4 GHz, 1 GB RAM). The full example including the calculation of the elastic constants and the equation of state, takes 9 hours and 32 minutes.
COOL: A code for Dynamic Monte Carlo Simulation of molecular dynamics
NASA Astrophysics Data System (ADS)
Barletta, Paolo
2012-02-01
COOL is a program to simulate evaporative and sympathetic cooling for a mixture of two gases co-trapped in a harmonic potential. The collisions involved are assumed to be exclusively elastic, and losses are due to evaporation from the trap. Each particle is followed individually in its trajectory; consequently, properties such as spatial densities or energy distributions can be readily evaluated. The code can be used sequentially, by employing one output as input for another run. The code can be easily generalised to describe more complicated processes, such as the inclusion of inelastic collisions, or the possible presence of more than two species in the trap. New version program summaryProgram title: COOL Catalogue identifier: AEHJ_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHJ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1 097 733 No. of bytes in distributed program, including test data, etc.: 18 425 722 Distribution format: tar.gz Programming language: C++ Computer: Desktop Operating system: Linux RAM: 500 Mbytes Classification: 16.7, 23 Catalogue identifier of previous version: AEHJ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 388 Does the new version supersede the previous version?: Yes Nature of problem: Simulation of the sympathetic cooling process occurring for two molecular gases co-trapped in a deep optical trap. Solution method: The Direct Simulation Monte Carlo method exploits the decoupling, over a short time period, of the inter-particle interaction from the trapping potential. The particle dynamics is thus exclusively driven by the external optical field. The rare inter-particle collisions are considered with an acceptance/rejection mechanism, that is, by comparing a random number to the collisional probability defined in terms of the inter-particle cross section and centre-of-mass energy. All particles in the trap are individually simulated so that at each time step a number of useful quantities, such as the spatial densities or the energy distributions, can be readily evaluated. Reasons for new version: A number of issues made the old version very difficult to port to different architectures, and impossible to compile on Windows. Furthermore, the test run results could be replicated only poorly, as a consequence of the simulations being very sensitive to the machine background noise. In practice, as the particles are simulated for billions of steps, a small difference in the initial conditions due to the finite precision of double-precision reals can have macroscopic effects on the output. This is not a problem in its own right, but a feature of such simulations. However, for the sake of completeness we have introduced a quadruple precision version of the code which yields the same results independently of the software used to compile it, or the hardware architecture where the code is run. Summary of revisions: A number of bugs in the dynamic memory allocation have been detected and removed, mostly in the cool.cpp file. All files have been renamed with a .cpp ending, rather than .c++, to make them compatible with Windows. The Random Number Generator routine, which is the computational core of the algorithm, has been re-written in C++, and there is no longer any need for cross FORTRAN-C++ compilation.
A quadruple precision version of the code is provided alongside the original double precision one. The makefile allows the user to choose which one to compile by setting the switch PRECISION to either double or quad. The source code and header files have been organised into directories to give the code a neater file layout. Restrictions: The in-trap motion of the particles is treated classically. Running time: The running time is relatively short, 1-2 hours. However, it is convenient to replicate each simulation several times with different initialisations of the random sequence.
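The acceptance/rejection step described in the solution method above can be sketched independently of the code. The snippet below uses a toy cross section and arbitrary velocities, not COOL's molecular data; it follows the standard Direct Simulation Monte Carlo recipe of accepting a candidate pair when a uniform random number falls below σ(v_rel)·v_rel normalised by an assumed majorant.

```python
# DSMC-style pair acceptance test (toy cross section, not COOL's data).
import numpy as np

rng = np.random.default_rng(1)

def cross_section(v_rel, sigma0=1.0):
    """Toy energy-dependent elastic cross section."""
    return sigma0 / (1.0 + v_rel**2)

def pair_collides(v1, v2, sigma_vrel_max):
    """Accept the candidate pair with probability sigma * v_rel / (sigma*v_rel)_max."""
    v_rel = np.linalg.norm(v1 - v2)
    return rng.random() < cross_section(v_rel) * v_rel / sigma_vrel_max

v1, v2 = rng.normal(size=3), rng.normal(size=3)
print(pair_collides(v1, v2, sigma_vrel_max=0.5))   # 0.5 is the maximum of v/(1+v^2)
```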
NASA Astrophysics Data System (ADS)
Lu, Benzhuo; Cheng, Xiaolin; Huang, Jingfang; McCammon, J. Andrew
2010-06-01
A Fortran program package is introduced for rapid evaluation of the electrostatic potentials and forces in biomolecular systems modeled by the linearized Poisson-Boltzmann equation. The numerical solver utilizes a well-conditioned boundary integral equation (BIE) formulation, a node-patch discretization scheme, a Krylov subspace iterative solver package with reverse communication protocols, and an adaptive new version of the fast multipole method in which exponential expansions are used to diagonalize the multipole-to-local translations. The program and its full description, as well as several closely related libraries and utility tools, are available at http://lsec.cc.ac.cn/~lubz/afmpb.html and a mirror site at http://mccammon.ucsd.edu/. This paper is a brief summary of the program: the algorithms, the implementation and the usage. Program summaryProgram title: AFMPB: Adaptive fast multipole Poisson-Boltzmann solver Catalogue identifier: AEGB_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL 2.0 No. of lines in distributed program, including test data, etc.: 453 649 No. of bytes in distributed program, including test data, etc.: 8 764 754 Distribution format: tar.gz Programming language: Fortran Computer: Any Operating system: Any RAM: Depends on the size of the discretized biomolecular system Classification: 3 External routines: Pre- and post-processing tools are required for generating the boundary elements and for visualization. Users can use MSMS (http://www.scripps.edu/~sanner/html/msms_home.html) for pre-processing, and VMD (http://www.ks.uiuc.edu/Research/vmd/) for visualization. Sub-programs included: An iterative Krylov subspace solver package from SPARSKIT by Yousef Saad (http://www-users.cs.umn.edu/~saad/software/SPARSKIT/sparskit.html), and the fast multipole method subroutines from FMMSuite (http://www.fastmultipole.org/). Nature of problem: Numerical solution of the linearized Poisson-Boltzmann equation that describes electrostatic interactions of molecular systems in ionic solutions. Solution method: A novel node-patch scheme is used to discretize the well-conditioned boundary integral equation formulation of the linearized Poisson-Boltzmann equation. Various Krylov subspace solvers can be subsequently applied to solve the resulting linear system, with a bounded number of iterations independent of the number of discretized unknowns. The matrix-vector multiplication at each iteration is accelerated by the adaptive new versions of the fast multipole method. The AFMPB solver requires other stand-alone pre-processing tools for boundary mesh generation, post-processing tools for data analysis and visualization, and can be conveniently coupled with different time stepping methods for dynamics simulation. Restrictions: Only three- or six-significant-digit accuracy options are provided in this version. Unusual features: Most of the codes are in Fortran77 style. Memory allocation functions from Fortran90 and above are used in a few subroutines. Additional comments: The current version of the codes is designed and written for single core/processor desktop machines. Check http://lsec.cc.ac.cn/~lubz/afmpb.html and http://mccammon.ucsd.edu/ for updates and changes. Running time: The running time varies with the number of discretized elements (N) in the system and their distributions. In most cases, it scales linearly as a function of N.
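For context, in the single-point-charge (Debye-Hückel) limit the linearized Poisson-Boltzmann equation has a closed-form solution; the boundary-integral solver generalises this to molecular surfaces. A back-of-the-envelope check, with illustrative parameter values that are not taken from the AFMPB distribution:

```python
# Screened Coulomb potential of one point charge in salt water,
# phi(r) = q * exp(-kappa r) / (4 pi eps0 eps_r r), SI units.
import numpy as np

def screened_potential(q, r, kappa, eps_r=78.5):
    eps0 = 8.8541878128e-12          # vacuum permittivity, F/m
    return q * np.exp(-kappa * r) / (4.0 * np.pi * eps0 * eps_r * r)

# an elementary charge seen 1 nm away, with a 1 nm Debye screening length
print(screened_potential(q=1.602176634e-19, r=1.0e-9, kappa=1.0e9), "V")
```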
VACTIV: A graphical dialog based program for an automatic processing of line and band spectra
NASA Astrophysics Data System (ADS)
Zlokazov, V. B.
2013-05-01
The program VACTIV (Visual ACTIV) has been developed for the automatic analysis of spectrum-like distributions, in particular gamma-ray or alpha-particle spectra, and is a standard graphical dialog based Windows XX application, driven by menu, mouse and keyboard. On the one hand, it was a conversion of an existing Fortran program ACTIV [1] to the DELPHI language; on the other hand, it is a transformation of the sequential syntax of Fortran programming to a new object-oriented style, based on the organization of event interactions. New features implemented in the algorithms of both versions consisted in the following: as the peak model, both an analytical function and a graphical curve could be used; the peak search algorithm was able to recognize not only Gaussian peaks but also peaks with an irregular form, both narrow peaks (2-4 channels) and broad ones (50-100 channels); and the regularization technique in the fitting guaranteed a stable solution in the most complicated cases of strongly overlapping or weak peaks. The graphical dialog interface of VACTIV is much more convenient than the batch mode of ACTIV. [1] V.B. Zlokazov, Computer Physics Communications, 28 (1982) 27-37. NEW VERSION PROGRAM SUMMARYProgram Title: VACTIV Catalogue identifier: ABAC_v2_0 Licensing provisions: no Programming language: DELPHI 5-7 Pascal. Computer: IBM PC series. Operating system: Windows XX. RAM: 1 MB Keywords: Nuclear physics, spectrum decomposition, least squares analysis, graphical dialog, object-oriented programming. Classification: 17.6. Catalogue identifier of previous version: ABAC_v1_0 Journal reference of previous version: Comput. Phys. Commun. 28 (1982) 27 Does the new version supersede the previous version?: Yes. Nature of problem: Program VACTIV is intended for the precise analysis of arbitrary spectrum-like distributions, e.g. gamma-ray and X-ray spectra, and allows the user to carry out the full cycle of automatic processing of such spectra, i.e. calibration, automatic peak search and estimation of the parameters of interest. VACTIV can run on any standard modern laptop. Reasons for the new version: At the time of its creation (1999) VACTIV was seemingly the first attempt to apply the newest programming languages and styles to systems of spectrum analysis. Its goal was both to obtain a convenient and efficient technique for data processing and to elaborate the formalism of spectrum analysis in terms of the classes, properties, methods and events of an object-oriented programming language. Summary of revisions: Compared with ACTIV, VACTIV preserves all the mathematical algorithms but provides the user with all the benefits of an interface based on a graphical dialog. It allows the user to intervene quickly in the work of the program and, in particular, to control the fitting process on-line: depending on the intermediate results and using the visual representation of the data, the fitting conditions can be changed so as to achieve optimum performance by selecting the optimum strategy. To find the best conditions for the fitting, one can compress the spectrum, delete blunders from it, smooth it using a high-frequency spline filter and build the background using a low-frequency spline filter; one can use not only automatic methods for blunder deletion, peak search, peak model forming and calibration, but also manual mouse clicks on the spectrum graph.
Restrictions: To enhance the reliability and portability of the program the majority of the most important arrays have a static allocation; all the arrays are allocated with a surplus, and the total pool of the program is restricted only by the size of the computer virtual memory. A spectrum has the static size of 32 K real words. The maximum size of the least-square matrix is 314 (the maximum number of fitted parameters per one analyzed spectrum interval, not for the whole spectrum), from which it follows that the maximum number of peaks in one spectrum interval is 154. The maximum total number of peaks in the spectrum is not restricted. Running time: The calculation time is negligibly small compared with the time for the dialog; using ini-files the program can be partly used in a semi-dialog mode.
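The elementary operation such a spectrum-decomposition code repeats for every peak region is a peak-plus-background least-squares fit. The sketch below is ordinary SciPy least squares on synthetic data, not VACTIV's regularized DELPHI implementation:

```python
# Fit one Gaussian peak on a linear background to a synthetic spectrum region.
import numpy as np
from scipy.optimize import curve_fit

def model(x, height, centre, width, b0, b1):
    return height * np.exp(-0.5 * ((x - centre) / width) ** 2) + b0 + b1 * x

x = np.arange(200.0)
rng = np.random.default_rng(2)
y = model(x, 500.0, 100.0, 3.0, 10.0, 0.05) + rng.normal(0.0, 3.0, x.size)

popt, pcov = curve_fit(model, x, y, p0=[400.0, 95.0, 2.0, 0.0, 0.0])
print(f"fitted centre = {popt[1]:.2f} +/- {np.sqrt(pcov[1, 1]):.2f} channels")
```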
NASA Astrophysics Data System (ADS)
Davidson, N.; Golonka, P.; Przedziński, T.; Waş, Z.
2011-03-01
Theoretical predictions in high energy physics are routinely provided in the form of Monte Carlo generators. Comparisons of predictions from different programs and/or different initialization set-ups are often necessary. MC-TESTER can be used for such tests of decays of intermediate states (particles or resonances) in a semi-automated way. Since 2002, new functionalities have been introduced into the package. In particular, it now works with the HepMC event record, the standard for C++ programs. The complete set-up for benchmarking interfaces, such as the interface between τ-lepton production and decay, including QED bremsstrahlung effects, is shown. The example is chosen to illustrate the new options introduced into the program. From the technical perspective, our paper documents software updates and supplements previous documentation. As in the past, our test consists of two steps. Distinct Monte Carlo programs are run separately; events with decays of a chosen particle are searched for, and information is stored by MC-TESTER. Then, at the analysis step, information from a pair of runs may be compared and represented in the form of tables and plots. Updates introduced in the program up to version 1.24.4 are also documented. In particular, the new configuration scripts and a script to combine results from a multitude of runs into a single information file for the analysis step are explained. Program summaryProgram title: MC-TESTER, version 1.23 and version 1.24.4 Catalog identifier: ADSM_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADSM_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 250 548 No. of bytes in distributed program, including test data, etc.: 4 290 610 Distribution format: tar.gz Programming language: C++, FORTRAN77 Tested and compiled with: gcc 3.4.6, 4.2.4 and 4.3.2 with g77/gfortran Computer: Tested on various platforms Operating system: Tested on operating systems: Linux SLC 4.6 and SLC 5, Fedora 8, Ubuntu 8.2 etc. Classification: 11.9 External routines: HepMC (https://savannah.cern.ch/projects/hepmc/), PYTHIA8 (http://home.thep.lu.se/~torbjorn/Pythia.html), LaTeX (http://www.latex-project.org/) Catalog identifier of previous version: ADSM_v1_0 Journal reference of previous version: Comput. Phys. Comm. 157 (2004) 39 Does the new version supersede the previous version?: Yes Nature of problem: The decays of individual particles are well defined modules of a typical Monte Carlo program chain in high energy physics. A fast, semi-automatic way of comparing results from different programs is often desirable for the development of new programs, for checking the correctness of installations, or for the discussion of uncertainties. Solution method: A typical HEP Monte Carlo program stores the generated events in event records such as HepMC, HEPEVT or PYJETS. MC-TESTER scans, event by event, the contents of the record and searches for the decays of the particle under study. The list of found decay modes is successively incremented, and histograms of all invariant masses that can be calculated from the momenta of the particle's decay products are defined and filled. The outputs from two runs of distinct programs can later be compared.
A booklet of comparisons is created: for every decay channel, all histograms present in the two outputs are plotted and a parameter quantifying the shape difference is calculated. Its maximum over each decay channel is printed in the summary table. Reasons for new version: An interface for the HepMC event record is introduced. A set-up for benchmarking interfaces, such as τ-lepton production and decay including QED bremsstrahlung effects, is introduced as well. This required significant changes in the algorithm; as a consequence, a new version of the code was introduced. Restrictions: Only the first 200 decay channels found will initialize histograms, and if the multiplicity of decay products in a given channel is larger than 7, histograms will not be created for that channel. Additional comments: New features: HepMC interface, use of lists in the definition of histograms and decay channels, filters for decay products or secondary decays to be omitted, bug fixes, extended flexibility in the representation of program output, installation configuration scripts, and merging of multiple output files from separate generations. Running time: Varies substantially with the analyzed decay particle, but generally the speed estimate for the old version remains valid. On a PC/Linux with a 2.0 GHz processor, MC-TESTER increases the run time of the τ-lepton Monte Carlo program TAUOLA by 4.0 seconds for every 100 000 analyzed events (the generation itself takes 26 seconds). The analysis step takes 13 seconds; LaTeX processing takes an additional 10 seconds. Generation-step runs may be executed simultaneously on multiprocessor machines.
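The comparison step can be illustrated with a simple stand-in metric. The sketch below uses an integrated absolute difference of unit-normalised histograms, which is not MC-TESTER's exact shape-difference parameter but conveys the idea of comparing the same invariant-mass distribution from two generator runs:

```python
# Compare one invariant-mass histogram from two (synthetic) generator runs.
import numpy as np

rng = np.random.default_rng(3)
bins = np.linspace(0.0, 2.0, 51)
h1, _ = np.histogram(rng.normal(1.00, 0.20, 100_000), bins=bins)  # "run 1"
h2, _ = np.histogram(rng.normal(1.00, 0.21, 100_000), bins=bins)  # "run 2"

sdp = 0.5 * np.abs(h1 / h1.sum() - h2 / h2.sum()).sum()  # 0 = identical shapes, 1 = disjoint
print("shape difference:", round(sdp, 4))
```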
A new version of Scilab software package for the study of dynamical systems
NASA Astrophysics Data System (ADS)
Bordeianu, C. C.; Felea, D.; Beşliu, C.; Jipa, Al.; Grossu, I. V.
2009-11-01
This work presents a new version of a software package for the study of chaotic flows, maps and fractals [1]. The codes were written using Scilab, a software package for numerical computations providing a powerful open computing environment for engineering and scientific applications. It was found that Scilab provides various functions for ordinary differential equation solving, the Fast Fourier Transform and autocorrelation, as well as excellent 2D and 3D graphical capabilities. The chaotic behavior of the nonlinear dynamical systems was analyzed using phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents and Kolmogorov-Sinai entropy. Various well-known examples are implemented, with the capability for users to insert their own ODEs or iterative equations. New version program summaryProgram title: Chaos v2.0 Catalogue identifier: AEAP_v2_0 Program summary URL:
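One of the quantities listed above, the Lyapunov exponent, is easy to estimate for a one-dimensional map. The sketch below uses the logistic map in plain Python rather than one of the package's Scilab examples:

```python
# Estimate the Lyapunov exponent of f(x) = r x (1 - x) as the orbit average
# of log|f'(x_n)|; for the fully chaotic r = 4 map the exact value is ln 2.
import math

r, x = 4.0, 0.2
n, acc = 100_000, 0.0
for _ in range(n):
    x = r * x * (1.0 - x)
    acc += math.log(abs(r * (1.0 - 2.0 * x)))
print("Lyapunov exponent ~", acc / n)   # ~0.693
```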
A new version of the CADNA library for estimating round-off error propagation in Fortran programs
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie; Lamotte, Jean-Luc
2010-11-01
The CADNA library enables one to estimate, using a probabilistic approach, round-off error propagation in any simulation program. CADNA provides new numerical types, the so-called stochastic types, on which round-off errors can be estimated. Furthermore CADNA contains the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. On 64-bit processors, depending on the rounding mode chosen, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs. Therefore the CADNA library has been improved to enable the numerical validation of programs on 64-bit processors. New version program summaryProgram title: CADNA Catalogue identifier: AEAT_v1_1 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 28 488 No. of bytes in distributed program, including test data, etc.: 463 778 Distribution format: tar.gz Programming language: Fortran NOTE: A C++ version of this program is available in the Library as AEGQ_v1_0 Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM Operating system: LINUX, UNIX Classification: 6.5 Catalogue identifier of previous version: AEAT_v1_0 Journal reference of previous version: Comput. Phys. Commun. 178 (2008) 933 Does the new version supersede the previous version?: Yes Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Reasons for new version: On 64-bit processors, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, which the random rounding mode is based on. Therefore a particular definition of mathematical functions for stochastic arguments has been included in the CADNA library to enable its use with the GNU Fortran compiler on 64-bit processors. Summary of revisions: If CADNA is used on a 64-bit processor with the GNU Fortran compiler, mathematical functions are computed with rounding to the nearest, otherwise they are computed with the random rounding mode. It must be pointed out that the knowledge of the accuracy of the stochastic argument of a mathematical function is never lost. Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. 
Furthermore array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays. Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf which shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. The source code, which is located in the src directory, consists of one assembly language file (cadna_rounding.s) and eighteen Fortran language files. cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the Fortran compiler used. This assembly file contains routines which are frequently called in the CADNA Fortran files to change the rounding mode. The Fortran language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
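The Discrete Stochastic Arithmetic on which CADNA is built can be illustrated outside Fortran: the same computation is run a few times with randomly perturbed rounding, and the number of exact significant digits is estimated from the scatter of the results. The Python sketch below shows only the CESTAC-type estimator applied to hand-made sample values; it is an illustration of the idea, not CADNA code, and the Student t value assumes the usual choice of three samples at 95% confidence.

    # Illustrative sketch (not part of CADNA): estimate the number of exact
    # significant decimal digits from N results of one computation performed
    # with randomly perturbed (CESTAC-style) rounding.
    import math
    import statistics

    def exact_significant_digits(samples, student_t=4.303):
        """CESTAC-type estimate: C = log10(sqrt(N)*|mean| / (t*sigma)).

        student_t = 4.303 is the two-sided 95% value for N - 1 = 2 degrees
        of freedom, matching the usual choice of N = 3 samples."""
        n = len(samples)
        mean = statistics.fmean(samples)
        sigma = statistics.stdev(samples)
        if sigma == 0.0:                 # all samples agree to machine precision
            return 15.0
        return math.log10(math.sqrt(n) * abs(mean) / (student_t * sigma))

    # Three runs of an (imaginary) unstable summation, differing in the last digits:
    runs = [0.123456212, 0.123456347, 0.123456128]
    print(f"estimated exact digits: {exact_significant_digits(runs):.1f}")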
Precision studies of the NNLO DGLAP evolution at the LHC with Candia
NASA Astrophysics Data System (ADS)
Cafarella, Alessandro; Corianò, Claudio; Guzzi, Marco
2008-11-01
We summarize the theoretical approach to the solution of the NNLO DGLAP equations using methods based on the logarithmic expansions in x-space and their implementation into the C program CANDIA 1.0. We present the various options implemented in the program and discuss the different solutions. The user can choose the order of the evolution, the type of the solution, which can be either exact or truncated, and the evolution either with a fixed or a varying flavor number, implemented in the varying-flavor-number scheme (VFNS). The renormalization and factorization scale dependencies are treated separately. In the non-singlet sector the program implements an exact NNLO solution. Program summaryProgram title: CANDIA Catalogue identifier: AEBK_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEBK_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 101 376 No. of bytes in distributed program, including test data, etc.: 5 865 234 Distribution format: tar.gz Programming language: C and Fortran Computer: All Operating system: Linux RAM: In the given examples, it ranges from 4 to 490 MB Classification: 11.1, 11.5 Nature of problem: The program provided here solves the DGLAP evolution equations for the parton distribution functions up to NNLO. Solution method: The algorithm implemented is based on the theory of the logarithmic expansions in Bjorken x-space. Additional comments: To be sure of getting the latest version of the program, the authors suggest downloading the code from their official CANDIA website ( http://www.le.infn.it/candia). Running time: In the given examples, it ranges from 1 to 40 minutes. The jobs have been executed on an Intel Core 2 Duo T7250 CPU at 2 GHz with a 64 bit Linux kernel. The test run script included in the package contains 5 sample runs and may take a number of hours to process, depending on the speed of the processor used and the size of the available RAM. http://www.le.infn.it/candia.
Markovian Monte Carlo program EvolFMC v.2 for solving QCD evolution equations
NASA Astrophysics Data System (ADS)
Jadach, S.; Płaczek, W.; Skrzypek, M.; Stokłosa, P.
2010-02-01
We present the program EvolFMC v.2 that solves the evolution equations in QCD for the parton momentum distributions by means of the Monte Carlo technique based on the Markovian process. The program solves the DGLAP-type evolution as well as modified-DGLAP ones. In both cases the evolution can be performed in the LO or NLO approximation. The quarks are treated as massless. The overall technical precision of the code has been established at 5×10. This way, for the first time ever, we demonstrate that with the Monte Carlo method one can solve the evolution equations with precision comparable to the other numerical methods. New version program summaryProgram title: EvolFMC v.2 Catalogue identifier: AEFN_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFN_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including binary test data, etc.: 66 456 (7407 lines of C++ code) No. of bytes in distributed program, including test data, etc.: 412 752 Distribution format: tar.gz Programming language: C++ Computer: PC, Mac Operating system: Linux, Mac OS X RAM: Less than 256 MB Classification: 11.5 External routines: ROOT ( http://root.cern.ch/drupal/) Nature of problem: Solution of the QCD evolution equations for the parton momentum distributions of the DGLAP- and modified-DGLAP-type in the LO and NLO approximations. Solution method: Monte Carlo simulation of the Markovian process of a multiple emission of partons. Restrictions: Limited to the case of massless partons. Implemented in the LO and NLO approximations only. Weighted events only. Unusual features: Modified-DGLAP evolutions included up to the NLO level. Additional comments: Technical precision established at 5×10. Running time: For 10⁶ events at 100 GeV: DGLAP NLO: 27 s; C-type modified DGLAP NLO: 150 s (MacBook Pro with Mac OS X v.10.5.5, 2.4 GHz Intel Core 2 Duo, gcc 4.2.4, single thread).
Integrating products of Bessel functions with an additional exponential or rational factor
NASA Astrophysics Data System (ADS)
Van Deun, Joris; Cools, Ronald
2008-04-01
We provide two MATLAB programs to compute integrals of the form ∫_0^∞ e^(-cx) x^m ∏_{i=1}^{k} J_{ν_i}(a_i x) dx and ∫_0^∞ x^m/(r²+x²) ∏_{i=1}^{k} J_{ν_i}(a_i x) dx with J_{ν_i}(x) the Bessel function of the first kind and (real) order ν_i. The parameter m is a real number such that ∑ν_i+m > -1 (to assure integrability near zero), r is real and the numbers c and a_i are all strictly positive. The program can deliver accurate error estimates. Program summaryProgram title: BESSELINTR, BESSELINTC Catalogue identifier: AEAH_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEAH_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1601 No. of bytes in distributed program, including test data, etc.: 13 161 Distribution format: tar.gz Programming language: Matlab (version ⩾6.5), Octave (version ⩾2.1.69) Computer: All supporting Matlab or Octave Operating system: All supporting Matlab or Octave RAM: For k Bessel functions our program needs approximately (500+140k) double precision variables Classification: 4.11 Nature of problem: The problem consists in integrating an arbitrary product of Bessel functions with an additional rational or exponential factor over a semi-infinite interval. Difficulties arise from the irregular oscillatory behaviour and the possible slow decay of the integrand, which prevents truncation at a finite point. Solution method: The interval of integration is split into a finite and infinite part. The integral over the finite part is computed using Gauss-Legendre quadrature. The integrand on the infinite part is approximated using asymptotic expansions and this approximation is integrated exactly with the aid of the upper incomplete gamma function. In the case where a rational factor is present, this factor is first expanded in a Taylor series around infinity. Restrictions: Some (and eventually all) numerical accuracy is lost when one or more of the parameters r, c, a_i or ν_i grow very large, or when r becomes small. Running time: Less than 0.02 s for a simple problem (two Bessel functions, small parameters), a few seconds for a more complex problem (more than six Bessel functions, large parameters), in Matlab 7.4 (R2007a) on a 2.4 GHz AMD Opteron Processor 250. References: J. Van Deun, R. Cools, Algorithm 858: Computing infinite range integrals of an arbitrary product of Bessel functions, ACM Trans. Math. Software 32 (4) (2006) 580-596.
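For a single Bessel factor and the exponential weight, the integral has the closed form ∫_0^∞ e^(-cx) J_0(ax) dx = 1/√(a²+c²), which gives a convenient sanity check for any quadrature. The SciPy sketch below performs only that brute-force check; it is not the BESSELINTR/BESSELINTC algorithm, which splits the range and integrates the asymptotic tail analytically, and the parameter values are arbitrary.

    # Cross-check of the special case int_0^inf exp(-c*x) * J0(a*x) dx against
    # its closed form 1/sqrt(a^2 + c^2); the exponential factor makes the tail
    # harmless for ordinary adaptive quadrature (unlike the purely oscillatory case).
    import numpy as np
    from scipy.integrate import quad
    from scipy.special import jv

    a, c = 2.0, 0.5                       # arbitrary strictly positive parameters
    integrand = lambda x: np.exp(-c * x) * jv(0, a * x)

    numeric, abserr = quad(integrand, 0.0, np.inf, limit=400)
    closed_form = 1.0 / np.sqrt(a**2 + c**2)

    print(numeric, closed_form, abserr)   # the two values should agree closely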
MCdevelop - a universal framework for Stochastic Simulations
NASA Astrophysics Data System (ADS)
Slawinska, M.; Jadach, S.
2011-03-01
We present MCdevelop, a universal computer framework for developing and exploiting the wide class of Stochastic Simulations (SS) software. This powerful universal SS software development tool has been derived from a series of scientific projects for precision calculations in high energy physics (HEP), which feature a wide range of functionality in the SS software needed for advanced precision Quantum Field Theory calculations for the past LEP experiments and for the ongoing LHC experiments at CERN, Geneva. MCdevelop is a "spin-off" product of HEP to be exploited in other areas, while it will still serve to develop new SS software for HEP experiments. Typically SS involve independent generation of large sets of random "events", often requiring considerable CPU power. Since SS jobs usually do not share memory it makes them easy to parallelize. The efficient development, testing and running in parallel SS software requires a convenient framework to develop software source code, deploy and monitor batch jobs, merge and analyse results from multiple parallel jobs, even before the production runs are terminated. Throughout the years of development of stochastic simulations for HEP, a sophisticated framework featuring all the above mentioned functionality has been implemented. MCdevelop represents its latest version, written mostly in C++ (GNU compiler gcc). It uses Autotools to build binaries (optionally managed within the KDevelop 3.5.3 Integrated Development Environment (IDE)). It uses the open-source ROOT package for histogramming, graphics and the mechanism of persistency for the C++ objects. MCdevelop helps to run multiple parallel jobs on any computer cluster with NQS-type batch system. Program summaryProgram title:MCdevelop Catalogue identifier: AEHW_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 48 136 No. of bytes in distributed program, including test data, etc.: 355 698 Distribution format: tar.gz Programming language: ANSI C++ Computer: Any computer system or cluster with C++ compiler and UNIX-like operating system. Operating system: Most UNIX systems, Linux. The application programs were thoroughly tested under Ubuntu 7.04, 8.04 and CERN Scientific Linux 5. Has the code been vectorised or parallelised?: Tools (scripts) for optional parallelisation on a PC farm are included. RAM: 500 bytes Classification: 11.3 External routines: ROOT package version 5.0 or higher ( http://root.cern.ch/drupal/). Nature of problem: Developing any type of stochastic simulation program for high energy physics and other areas. Solution method: Object Oriented programming in C++ with added persistency mechanism, batch scripts for running on PC farms and Autotools.
NASA Astrophysics Data System (ADS)
Hung, Linda; Huang, Chen; Shin, Ilgyou; Ho, Gregory S.; Lignères, Vincent L.; Carter, Emily A.
2010-12-01
Orbital-free density functional theory (OFDFT) is a first principles quantum mechanics method to find the ground-state energy of a system by variationally minimizing with respect to the electron density. No orbitals are used in the evaluation of the kinetic energy (unlike Kohn-Sham DFT), and the method scales nearly linearly with the size of the system. The PRinceton Orbital-Free Electronic Structure Software (PROFESS) uses OFDFT to model materials from the atomic scale to the mesoscale. This new version of PROFESS allows the study of larger systems with two significant changes: PROFESS is now parallelized, and the ion-electron and ion-ion terms scale quasilinearly, instead of quadratically as in PROFESS v1 (L. Hung and E.A. Carter, Chem. Phys. Lett. 475 (2009) 163). At the start of a run, PROFESS reads the various input files that describe the geometry of the system (ion positions and cell dimensions), the type of elements (defined by electron-ion pseudopotentials), the actions you want it to perform (minimize with respect to electron density and/or ion positions and/or cell lattice vectors), and the various options for the computation (such as which functionals you want it to use). Based on these inputs, PROFESS sets up a computation and performs the appropriate optimizations. Energies, forces, stresses, material geometries, and electron density configurations are some of the values that can be output throughout the optimization. New version program summaryProgram Title: PROFESS Catalogue identifier: AEBN_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEBN_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 68 721 No. of bytes in distributed program, including test data, etc.: 1 708 547 Distribution format: tar.gz Programming language: Fortran 90 Computer: Intel with ifort; AMD Opteron with pathf90 Operating system: Linux Has the code been vectorized or parallelized?: Yes. Parallelization is implemented through domain composition using MPI. RAM: Problem dependent, but 2 GB is sufficient for up to 10,000 ions. Classification: 7.3 External routines: FFTW 2.1.5 ( http://www.fftw.org) Catalogue identifier of previous version: AEBN_v1_0 Journal reference of previous version: Comput. Phys. Comm. 179 (2008) 839 Does the new version supersede the previous version?: Yes Nature of problem: Given a set of coordinates describing the initial ion positions under periodic boundary conditions, recovers the ground state energy, electron density, ion positions, and cell lattice vectors predicted by orbital-free density functional theory. The computation of all terms is effectively linear scaling. Parallelization is implemented through domain decomposition, and up to ˜10,000 ions may be included in the calculation on just a single processor, limited by RAM. For example, when optimizing the geometry of ˜50,000 aluminum ions (plus vacuum) on 48 cores, a single iteration of conjugate gradient ion geometry optimization takes ˜40 minutes wall time. However, each CG geometry step requires two or more electron density optimizations, so step times will vary. Solution method: Computes energies as described in text; minimizes this energy with respect to the electron density, ion positions, and cell lattice vectors. Reasons for new version: To allow much larger systems to be simulated using PROFESS. 
Restrictions: PROFESS cannot use nonlocal (such as ultrasoft) pseudopotentials. A variety of local pseudopotential files are available at the Carter group website ( http://www.princeton.edu/mae/people/faculty/carter/homepage/research/localpseudopotentials/). Also, due to the current state of the kinetic energy functionals, PROFESS is only reliable for main group metals and some properties of semiconductors. Running time: Problem dependent: the test example provided with the code takes less than a second to run. Timing results for large scale problems are given in the PROFESS paper and Ref. [1].
NASA Astrophysics Data System (ADS)
Kondayya, Gundra; Shukla, Alok
2012-03-01
Pariser-Parr-Pople (P-P-P) model Hamiltonian is employed frequently to study the electronic structure and optical properties of π-conjugated systems. In this paper we describe a Fortran 90 computer program which uses the P-P-P model Hamiltonian to solve the Hartree-Fock (HF) equation for infinitely long, one-dimensional, periodic, π-electron systems. The code is capable of computing the band structure, as also the linear optical absorption spectrum, by using the tight-binding and the HF methods. Furthermore, using our program the user can solve the HF equation in the presence of a finite external electric field, thereby, allowing the simulation of gated systems. We apply our code to compute various properties of polymers such as trans-polyacetylene, poly- para-phenylene, and armchair and zigzag graphene nanoribbons, in the infinite length limit. Program summaryProgram title: ppp_bulk.x Catalogue identifier: AEKW_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 87 464 No. of bytes in distributed program, including test data, etc.: 2 046 933 Distribution format: tar.gz Programming language: Fortran 90 Computer: PCs and workstations Operating system: Linux, Code was developed and tested on various recent versions of 64-bit Fedora including Fedora 14 (kernel version 2.6.35.12-90). Classification: 7.3 External routines: This program needs to link with LAPACK/BLAS libraries compiled with the same compiler as the program. For the Intel Fortran Compiler we used the ACML library version 4.4.0, while for the gfortran compiler we used the libraries supplied with the Fedora distribution. Nature of problem: The electronic structure of one-dimensional periodic π-conjugated systems is an intense area of research at present because of the tremendous interest in the physics of conjugated polymers and graphene nanoribbons. The computer program described in this paper provides an efficient way of solving the Hartree-Fock equations for such systems within the P-P-P model. In addition to the Bloch orbitals, band structure, and the density of states, the program can also compute quantities such as the linear absorption spectrum, and the electro-absorption spectrum of these systems. Solution method: For a one-dimensional periodic π-conjugated system lying in the xy-plane, the single-particle Bloch orbitals are expressed as linear combinations of p-orbitals of individual atoms. Then using various parameters defining the P-P-P Hamiltonian, the Hartree-Fock equations are set up as a matrix eigenvalue problem in the k-space. Thereby, its solutions are obtained in a self-consistent manner, using the iterative diagonalizing technique at several k points. The band structure and the corresponding Bloch orbitals thus obtained are used to perform a variety of calculations such as the density of states, linear optical absorption spectrum, electro-absorption spectrum, etc. Running time: Most of the examples provided take only a few seconds to run. For a large system, however, depending on the system size, the run time may be a few minutes to a few hours.
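The k-space diagonalization step described above can be illustrated with a much simpler system than the P-P-P Hartree-Fock problem. The sketch below is a textbook tight-binding model of a dimerized one-dimensional chain, not ppp_bulk.x: a 2×2 Bloch Hamiltonian is built and diagonalized on a grid of k points to produce two bands; the hopping parameters t1 and t2 are arbitrary.

    # Minimal band-structure calculation by diagonalizing a Bloch Hamiltonian
    # at a set of k points (dimerized 1D chain, two sites per unit cell).
    # This is a toy tight-binding model, not the P-P-P Hamiltonian.
    import numpy as np

    t1, t2 = 1.0, 0.6                        # intra- and inter-cell hoppings (arbitrary)
    kpts = np.linspace(-np.pi, np.pi, 201)   # k in units of the inverse lattice constant

    bands = []
    for k in kpts:
        h = t1 + t2 * np.exp(-1j * k)        # off-diagonal Bloch matrix element
        Hk = np.array([[0.0, h],
                       [np.conj(h), 0.0]])
        bands.append(np.linalg.eigvalsh(Hk)) # two band energies at this k
    bands = np.array(bands)                  # shape (n_k, 2)

    print("band gap:", bands[:, 1].min() - bands[:, 0].max())   # equals 2*|t1 - t2|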
AUTO_DERIV: Tool for automatic differentiation of a Fortran code
NASA Astrophysics Data System (ADS)
Stamatiadis, S.; Farantos, S. C.
2010-10-01
AUTO_DERIV is a module comprised of a set of FORTRAN 95 procedures which can be used to calculate the first and second partial derivatives (mixed or not) of any continuous function with many independent variables. The mathematical function should be expressed as one or more FORTRAN 77/90/95 procedures. A new type of variables is defined and the overloading mechanism of functions and operators provided by the FORTRAN 95 language is extensively used to define the differentiation rules. Proper (standard complying) handling of floating-point exceptions is provided by using the IEEE_EXCEPTIONS intrinsic module (Technical Report 15580, incorporated in FORTRAN 2003). New version program summaryProgram title: AUTO_DERIV Catalogue identifier: ADLS_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADLS_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2963 No. of bytes in distributed program, including test data, etc.: 10 314 Distribution format: tar.gz Programming language: Fortran 95 + (optionally) TR-15580 (Floating-point exception handling) Computer: all platforms with a Fortran 95 compiler Operating system: Linux, Windows, MacOS Classification: 4.12, 6.2 Catalogue identifier of previous version: ADLS_v1_0 Journal reference of previous version: Comput. Phys. Comm. 127 (2000) 343 Does the new version supersede the previous version?: Yes Nature of problem: The need to calculate accurate derivatives of a multivariate function frequently arises in computational physics and chemistry. The most versatile approach to evaluate them by a computer, automatically and to machine precision, is via user-defined types and operator overloading. AUTO_DERIV is a Fortran 95 implementation of this approach, designed to evaluate the first and second derivatives of a function of many variables. Solution method: The mathematical rules for differentiation of sums, products, quotients and elementary functions, in conjunction with the chain rule for compound functions, are applied. The function should be expressed as one or more Fortran 77/90/95 procedures. A new type of variables is defined and the overloading mechanism of functions and operators provided by the Fortran 95 language is extensively used to implement the differentiation rules. Reasons for new version: The new version supports Fortran 95, properly handles floating-point exceptions, and is faster due to internal reorganization. All discovered bugs are fixed. Summary of revisions: The code was rewritten extensively to benefit from features introduced in Fortran 95. Additionally, there was a major internal reorganization of the code, resulting in faster execution. The user interface described in the original paper was not changed. The values that the user must or should specify before compilation (essentially, the number of independent variables) were moved into the ad_types module. There were many minor bug fixes. One important bug was found and fixed; the code did not correctly handle the overloading of the power operator (**) in a**λ when a=0. The case of division by zero and the discontinuity of the function at the requested point are indicated by standard IEEE exceptions (IEEE_DIVIDE_BY_ZERO and IEEE_INVALID, respectively).
If the compiler does not support IEEE exceptions, a module with the appropriate name is provided, imitating the behavior of the 'standard' module in the sense that it raises the corresponding exceptions. It is up to the compiler (through certain flags probably) to detect them. Restrictions: None imposed by the program. There are certain limitations that may appear mostly due to the specific implementation chosen in the user code. They can always be overcome by recoding parts of the routines developed by the user or by modifying AUTO_DERIV according to specific instructions given in [1]. The common restrictions of available memory and the capabilities of the compiler are the same as the original version. Additional comments: The program has been tested using the following compilers: Intel ifort, GNU gfortran, NAGWare f95, g95. Running time: The typical running time for the program depends on the compiler and the complexity of the differentiated function. A rough estimate is that AUTO_DERIV is ten times slower than the evaluation of the analytical ('by hand') function value and derivatives (if they are available). References:S. Stamatiadis, R. Prosmiti, S.C. Farantos, AUTO_DERIV: tool for automatic differentiation of a Fortran code, Comput. Phys. Comm. 127 (2000) 343.
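The operator-overloading technique that AUTO_DERIV implements in Fortran 95 is easy to sketch in any language with operator overloading. The Python fragment below is an illustration of the idea only, not a translation of the module, and is restricted to first derivatives of a single variable; AUTO_DERIV itself handles first and second (mixed) derivatives of many variables.

    # First-order automatic differentiation by operator overloading: a Dual
    # carries (value, derivative) and each operator applies the corresponding
    # differentiation rule, so derivatives propagate through ordinary code.
    import math

    class Dual:
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.der + o.der)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
        __rmul__ = __mul__

    def sin(x):                        # chain rule for one elementary function
        return Dual(math.sin(x.val), math.cos(x.val) * x.der)

    x = Dual(1.3, 1.0)                 # seed dx/dx = 1 at the point x = 1.3
    f = 3 * x * x + sin(x)             # f(x) = 3x^2 + sin(x)
    print(f.val, f.der)                # f(1.3) and f'(1.3) = 6x + cos(x)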
A Monte-Carlo maplet for the study of the optical properties of biological tissues
NASA Astrophysics Data System (ADS)
Yip, Man Ho; Carvalho, M. J.
2007-12-01
Monte-Carlo simulations are commonly used to study complex physical processes in various fields of physics. In this paper we present a Maple program intended for Monte-Carlo simulations of photon transport in biological tissues. The program has been designed so that the input data and output display can be handled by a maplet (an easy and user-friendly graphical interface), named the MonteCarloMaplet. A thorough explanation of the programming steps and how to use the maplet is given. Results obtained with the Maple program are compared with corresponding results available in the literature. Program summaryProgram title:MonteCarloMaplet Catalogue identifier:ADZU_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.:3251 No. of bytes in distributed program, including test data, etc.:296 465 Distribution format: tar.gz Programming language:Maple 10 Computer: Acer Aspire 5610 (any running Maple 10) Operating system: Windows XP professional (any running Maple 10) Classification: 3.1, 5 Nature of problem: Simulate the transport of radiation in biological tissues. Solution method: The Maple program follows the steps of the C program of L. Wang et al. [L. Wang, S.L. Jacques, L. Zheng, Computer Methods and Programs in Biomedicine 47 (1995) 131-146]; The Maple library routine for random number generation is used [Maple 10 User Manual c Maplesoft, a division of Waterloo Maple Inc., 2005]. Restrictions: Running time increases rapidly with the number of photons used in the simulation. Unusual features: A maplet (graphical user interface) has been programmed for data input and output. Note that the Monte-Carlo simulation was programmed with Maple 10. If attempting to run the simulation with an earlier version of Maple, appropriate modifications (regarding typesetting fonts) are required and once effected the worksheet runs without problem. However some of the windows of the maplet may still appear distorted. Running time: Depends essentially on the number of photons used in the simulation. Elapsed times for particular runs are reported in the main text.
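The core loop of such a photon-transport Monte Carlo is short. The Python sketch below is a heavily simplified version of the weighted photon-packet scheme of Wang et al. (exponentially distributed step lengths, partial absorption of the packet weight at each interaction), with isotropic rather than Henyey-Greenstein scattering; it is not the MonteCarloMaplet code, and the optical coefficients are arbitrary.

    # Toy photon-packet Monte Carlo in a semi-infinite homogeneous "tissue":
    # packets enter at the surface heading inward, take exponential free paths,
    # deposit part of their weight at each interaction and scatter isotropically.
    import math
    import random

    mu_a, mu_s = 0.1, 10.0            # absorption / scattering coefficients, 1/mm (arbitrary)
    mu_t = mu_a + mu_s
    n_packets = 20000
    depth_sum, weight_sum = 0.0, 0.0

    for _ in range(n_packets):
        z, uz, w = 0.0, 1.0, 1.0      # depth, direction cosine, packet weight
        while w > 1e-4:
            step = -math.log(1.0 - random.random()) / mu_t   # exponential free path
            z += uz * step
            if z < 0.0:               # packet escaped back through the surface
                break
            dw = w * mu_a / mu_t      # weight absorbed at this interaction site
            depth_sum += dw * z
            weight_sum += dw
            w -= dw
            uz = 2.0 * random.random() - 1.0                 # isotropic scattering

    print("mean absorption depth [mm]:", depth_sum / weight_sum)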
Revision of FMM-Yukawa: An adaptive fast multipole method for screened Coulomb interactions
NASA Astrophysics Data System (ADS)
Zhang, Bo; Huang, Jingfang; Pitsianis, Nikos P.; Sun, Xiaobai
2010-12-01
FMM-YUKAWA is a mathematical software package primarily for rapid evaluation of the screened Coulomb interactions of N particles in three dimensional space. Since its release, we have revised and re-organized the data structure, software architecture, and user interface, for the purpose of enabling more flexible, broader and easier use of the package. The package and its documentation are available at http://www.fastmultipole.org/, along with a few other closely related mathematical software packages. New version program summaryProgram title: FMM-Yukawa Catalogue identifier: AEEQ_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEQ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU GPL 2.0 No. of lines in distributed program, including test data, etc.: 78 704 No. of bytes in distributed program, including test data, etc.: 854 265 Distribution format: tar.gz Programming language: FORTRAN 77, FORTRAN 90, and C. Requires gcc and gfortran version 4.4.3 or later Computer: All Operating system: Any Classification: 4.8, 4.12 Catalogue identifier of previous version: AEEQ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2331 Does the new version supersede the previous version?: Yes Nature of problem: To evaluate the screened Coulomb potential and force field of N charged particles, and to evaluate a convolution type integral where the Green's function is the fundamental solution of the modified Helmholtz equation. Solution method: The new version of fast multipole method (FMM) that diagonalizes the multipole-to-local translation operator is applied with the tree structure adaptive to sample particle locations. Reasons for new version: To handle much larger particle ensembles, to enable the iterative use of the subroutines in a solver, and to remove potential contention in assignments for parallelization. Summary of revisions: The software package FMM-Yukawa has been revised and re-organized in data structure, software architecture, programming methods, and user interface. The revision enables more flexible use of the package and economic use of memory resources. It consists of five stages. The initial stage (stage 1) determines, based on the accuracy requirement and FMM theory, the length of multipole expansions and the number of quadrature points for diagonalization, and loads the quadrature nodes and weights that are computed off line. Stage 2 constructs the oct-tree and interaction lists, with adaptation to the sparsity or density of particles and employing a dynamic memory allocation scheme at every tree level. Stage 3 executes the core FMM subroutine for numerical calculation of the particle interactions. The subroutine can now be used iteratively as in a solver, while the particle locations remain the same. Stage 4 releases the memory allocated in Stage 2 for the adaptive tree and interaction lists. The user can modify the iterative routine easily. When the particle locations are changed such as in a molecular dynamics simulation, stage 2 to 4 can also be used together repeatedly. The final stage releases the memory space used for the quadrature and other remaining FMM parameters. Programs at the stage level and at the user interface are re-written in the C programming language, while most of the translation and interaction operations remain in FORTRAN. 
As a result of the change in data structures and memory allocation, the revised package can accommodate much larger particle ensembles while maintaining the same accuracy-efficiency performance. The new version is also developed as an important precursor to its parallel counterpart on multi-core or many core processors in a shared memory programming environment. Particularly, in order to ensure mutual exclusion in concurrent updates without incurring extra latency, we have replaced all the assignment statements at a source box that put its data to multiple target boxes with assignments at every target box that gather data from source boxes. This amounts to replacing the column version of matrix-vector multiplication with the row version. The matrix here, however, is in compressive representation. Sufficient care is taken in the revision not to alter the algorithmic complexity or numerical behavior, as concurrent writing potentially takes place in the upward calculation of the multipole expansion coefficients, interactions at every level of the FMM tree, and downward calculation of the local expansion coefficients. The software modules and their compositions are also organized according to the stages they are used. Demonstration files and makefiles for merging the user routines and the library routines are provided. Restrictions: Accuracy requirement is described in terms of three or six digits. Higher multiples of three digits will be allowed in a later version. Finer decimation in digits for accuracy specification may or may not be necessary. Unusual features: Ready and friendly for customized use and instrumental in expression of concurrency and dependency for efficient parallelization. Running time: The running time depends linearly on the number N of particles, and varies with the distribution characteristics of the particle distribution. It also depends on the accuracy requirement, a higher accuracy requirement takes relatively longer time. The code outperforms the direct summation method when N⩾750.
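The quantity being accelerated is the screened Coulomb (Yukawa) potential φ_i = Σ_{j≠i} q_j e^(-κ r_ij)/r_ij. The NumPy sketch below is only the O(N²) direct summation that a fast multipole evaluation is checked and timed against (the summary above quotes a break-even point near N ≈ 750); it is not the FMM itself, and the positions, charges and screening parameter κ are random test values.

    # O(N^2) direct evaluation of the screened Coulomb (Yukawa) potential,
    # phi_i = sum_{j != i} q_j * exp(-kappa * r_ij) / r_ij, used as a reference
    # against which a fast multipole evaluation would be verified and timed.
    import numpy as np

    rng = np.random.default_rng(0)
    n, kappa = 1000, 0.5
    pos = rng.random((n, 3))            # particle positions in the unit box
    q = rng.standard_normal(n)          # particle charges

    diff = pos[:, None, :] - pos[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(r, np.inf)         # exclude the self-interaction
    phi = (q[None, :] * np.exp(-kappa * r) / r).sum(axis=1)

    print(phi[:5])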
Accelerating numerical solution of stochastic differential equations with CUDA
NASA Astrophysics Data System (ADS)
Januszewski, M.; Kostur, M.
2010-01-01
Numerical integration of stochastic differential equations is commonly used in many branches of science. In this paper we present how to accelerate this kind of numerical calculations with popular NVIDIA Graphics Processing Units using the CUDA programming environment. We address general aspects of numerical programming on stream processors and illustrate them by two examples: the noisy phase dynamics in a Josephson junction and the noisy Kuramoto model. In presented cases the measured speedup can be as high as 675× compared to a typical CPU, which corresponds to several billion integration steps per second. This means that calculations which took weeks can now be completed in less than one hour. This brings stochastic simulation to a completely new level, opening for research a whole new range of problems which can now be solved interactively. Program summaryProgram title: SDE Catalogue identifier: AEFG_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Gnu GPL v3 No. of lines in distributed program, including test data, etc.: 978 No. of bytes in distributed program, including test data, etc.: 5905 Distribution format: tar.gz Programming language: CUDA C Computer: any system with a CUDA-compatible GPU Operating system: Linux RAM: 64 MB of GPU memory Classification: 4.3 External routines: The program requires the NVIDIA CUDA Toolkit Version 2.0 or newer and the GNU Scientific Library v1.0 or newer. Optionally gnuplot is recommended for quick visualization of the results. Nature of problem: Direct numerical integration of stochastic differential equations is a computationally intensive problem, due to the necessity of calculating multiple independent realizations of the system. We exploit the inherent parallelism of this problem and perform the calculations on GPUs using the CUDA programming environment. The GPU's ability to execute hundreds of threads simultaneously makes it possible to speed up the computation by over two orders of magnitude, compared to a typical modern CPU. Solution method: The stochastic Runge-Kutta method of the second order is applied to integrate the equation of motion. Ensemble-averaged quantities of interest are obtained through averaging over multiple independent realizations of the system. Unusual features: The numerical solution of the stochastic differential equations in question is performed on a GPU using the CUDA environment. Running time: < 1 minute
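The integration scheme itself is compact; the quoted speedup comes from running very many independent realizations in parallel GPU threads. The NumPy sketch below applies a stochastic Heun (predictor-corrector) step, a common second-order Runge-Kutta variant for additive noise, to Josephson-junction-like phase dynamics dφ = (i0 − sin φ) dt + √(2D) dW, vectorized over realizations; it illustrates the ensemble strategy only and is not the CUDA kernel shipped with the program. All parameter values are arbitrary.

    # Ensemble integration of dphi = (i0 - sin(phi)) dt + sqrt(2*D) dW with a
    # stochastic Heun (predictor-corrector) step, vectorized over independent
    # realizations; on a GPU each realization would be handled by one thread.
    import numpy as np

    i0, D = 0.9, 0.05                        # bias "current" and noise strength (arbitrary)
    dt, n_steps, n_real = 1e-3, 5000, 10000

    drift = lambda phi: i0 - np.sin(phi)
    g = np.sqrt(2.0 * D)                     # additive noise amplitude

    rng = np.random.default_rng(1)
    phi = np.zeros(n_real)
    for _ in range(n_steps):
        dW = rng.standard_normal(n_real) * np.sqrt(dt)
        pred = phi + drift(phi) * dt + g * dW                    # Euler predictor
        phi += 0.5 * (drift(phi) + drift(pred)) * dt + g * dW    # Heun corrector

    print("ensemble-averaged phase velocity:", np.mean(phi) / (n_steps * dt))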
QCDNUM: Fast QCD evolution and convolution
NASA Astrophysics Data System (ADS)
Botje, M.
2011-02-01
The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Un-polarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in un-polarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme. Program summaryProgram title: QCDNUM version: 17.00 Catalogue identifier: AEHV_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Public Licence No. of lines in distributed program, including test data, etc.: 45 736 No. of bytes in distributed program, including test data, etc.: 911 569 Distribution format: tar.gz Programming language: Fortran-77 Computer: All Operating system: All RAM: Typically 3 Mbytes Classification: 11.5 Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD. Computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections. Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline coefficients by solving (coupled) triangular matrix equations with a forward substitution algorithm. Fast computation of convolution integrals as weighted sums of spline coefficients, with weights derived from user-given convolution kernels. Restrictions: Accuracy and speed are determined by the density of the evolution grid. Running time: Less than 10 ms on a 2 GHz Intel Core 2 Duo processor to evolve the gluon density and 12 quark densities at next-to-next-to-leading order over a large kinematic range.
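The "triangular matrix equations with a forward substitution algorithm" mentioned in the solution method are the standard lower-triangular solve, in which each unknown (here a spline coefficient at a grid point) depends only on the ones already computed. The NumPy fragment below is a generic forward substitution shown only to make that step concrete; it is not taken from the QCDNUM source.

    # Generic forward substitution for a lower-triangular system L x = b:
    # unknowns are obtained one by one from the previously computed ones.
    import numpy as np

    def forward_substitution(L, b):
        n = len(b)
        x = np.zeros(n)
        for i in range(n):
            x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
        return x

    L = np.tril(np.random.default_rng(2).random((5, 5)) + np.eye(5))
    b = np.arange(1.0, 6.0)
    x = forward_substitution(L, b)
    print(np.allclose(L @ x, b))    # True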
NASA Astrophysics Data System (ADS)
Daniluk, Andrzej
2010-03-01
Scientific computing is the field of study concerned with constructing mathematical models and numerical solution techniques, and with using computers to analyse and solve scientific and engineering problems. Model-Driven Development (MDD) has been proposed as a means to support the software development process through the use of a model-centric approach. This paper surveys the core MDD technology that was used to develop an application that allows computation of the RHEED intensities dynamically for a disordered surface. New version program summaryProgram title: RHEED1DProcess Catalogue identifier: ADUY_v4_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADUY_v4_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 31 971 No. of bytes in distributed program, including test data, etc.: 3 039 820 Distribution format: tar.gz Programming language: Embarcadero C++ Builder Computer: Intel Core Duo-based PC Operating system: Windows XP, Vista, 7 RAM: more than 1 GB Classification: 4.3, 7.2, 6.2, 8, 14 Catalogue identifier of previous version: ADUY_v3_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2394 Does the new version supersede the previous version?: No Nature of problem: An application that implements numerical simulations should be constructed according to the CSFAR rules: clear and well-documented, simple, fast, accurate, and robust. A clearly written, externally and internally documented program is much easier to understand and modify. A simple program is much less prone to error and is more easily modified than one that is complicated. Simplicity and clarity also help make the program flexible. Making the program fast has economic benefits. It also allows flexibility because some of the features that make a program efficient can be traded off for greater accuracy. Making the program fast also has the benefit of allowing longer calculations with better resolution. The compromise between speed and accuracy has always posed one of the most troublesome challenges for the programmer. Almost all advances in numerical analysis have come about in trying to reach these twin goals. Changes in the basic algorithms give greater improvements in accuracy and speed than special numerical tricks or a change of programming language. A robust program works correctly over a broad spectrum of input data. Solution method: The computational model of the program is based on the use of a dynamical diffraction theory in which the electrons are taken to be diffracted by a potential, which is periodic in the dimension perpendicular to the surface. In the case of a disordered surface we can use the proportional model of the scattering potential, in which the potential of a partially filled layer is taken to be the product of the coverage of this layer and the potential of a fully filled layer: U(θ,z) = ∑_n θ_n(t/τ) U_n(1,z), where U_n(1,z) stands for the potential of the full nth layer and U(θ,z) for the potential of the growing layer. Reasons for new version: Responding to user feedback, the RHEEDGr_09 program has been upgraded to a standard that allows carrying out computations of the RHEED intensities for a disordered surface. Also, the functionality and documentation of the program have been improved.
Summary of revisions: The logical structure of the Platform-Specific Model of the RHEEDGr_09 program has been modified according to the scheme shown in Fig. 1*. The class diagram in Fig. 1* is a static view of the main platform-specific elements of the RHEED1DProcess architecture. Fig. 2* provides a dynamic view by showing a simplified sequence diagram of the creation and destruction of the process. Fig. 3* shows the RHEED1DProcess use case model. As can be seen in Figs. 2-3*, the RHEED1DProcess has been designed as a slave process that runs as a separate thread inside each transaction generated by the master Growth09 program (see A. Daniluk, Model-Driven Development for scientific computing. Computations of RHEED intensities for a disordered surface. Part II, pii: S0010-4655(09)00386-5). The RHEED1DProcess requires the user to provide the appropriate parameters for the crystal structure under investigation. These parameters are loaded from the parameters.ini file at run-time. Instructions on the preparation of the .ini files can be found in the new distribution. The RHEED1DProcess also requires the user to provide the appropriate values of the layer coverage profiles. These values are loaded at run-time from the CoverageProfiles.dat file (generated by the Growth09 master application). The RHEED1DProcess enables carrying out one-dimensional dynamical calculations for the fcc lattice with a two-atom basis and for the fcc lattice with a one-atom basis; the zeroth Fourier component of the scattering potential in the TRHEED1D::crystPotUg() function can, however, be modified according to users' specific application requirements. * The figures mentioned can be downloaded, see "Supplementary material" below. Unusual features: The program is distributed in the form of the main projects RHEED1DProcess.cbproj and Graph2D0x.cbproj with associated files, and should be compiled using Embarcadero RAD Studio 2010 along with the Together visual-modelling platform. The program should be compiled with English/USA regional and language options. Additional comments: This version of the RHEED program is designed to run in conjunction with the GROWTH09 (ADVL_v3_0) program. It does not replace the previous, stand-alone, RHEEDGR-09 (ADUY_v3_0) version. Running time: The typical running time is machine and user-parameters dependent. References: [1] OMG, Model Driven Architecture Guide Version 1.0.1, 2003.
A molecular dynamics implementation of the 3D Mercedes-Benz water model
NASA Astrophysics Data System (ADS)
Hynninen, T.; Dias, C. L.; Mkrtchyan, A.; Heinonen, V.; Karttunen, M.; Foster, A. S.; Ala-Nissila, T.
2012-02-01
The three-dimensional Mercedes-Benz model was recently introduced to account for the structural and thermodynamic properties of water. It treats water molecules as point-like particles with four dangling bonds in tetrahedral coordination, representing H-bonds of water. Its conceptual simplicity renders the model attractive in studies where complex behaviors emerge from H-bond interactions in water, e.g., the hydrophobic effect. A molecular dynamics (MD) implementation of the model is non-trivial and we outline here the mathematical framework of its force-field. Useful routines written in modern Fortran are also provided. This open source code is free and can easily be modified to account for different physical context. The provided code allows both serial and MPI-parallelized execution. Program summaryProgram title: CASHEW (Coarse Approach Simulator for Hydrogen-bonding Effects in Water) Catalogue identifier: AEKM_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKM_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 20 501 No. of bytes in distributed program, including test data, etc.: 551 044 Distribution format: tar.gz Programming language: Fortran 90 Computer: Program has been tested on desktop workstations and a Cray XT4/XT5 supercomputer. Operating system: Linux, Unix, OS X Has the code been vectorized or parallelized?: The code has been parallelized using MPI. RAM: Depends on size of system, about 5 MB for 1500 molecules. Classification: 7.7 External routines: A random number generator, Mersenne Twister ( http://www.math.sci.hiroshima-u.ac.jp/m-mat/MT/VERSIONS/FORTRAN/mt95.f90), is used. A copy of the code is included in the distribution. Nature of problem: Molecular dynamics simulation of a new geometric water model. Solution method: New force-field for water molecules, velocity-Verlet integration, representation of molecules as rigid particles with rotations described using quaternion algebra. Restrictions: Memory and cpu time limit the size of simulations. Additional comments: Software web site: https://gitorious.org/cashew/. Running time: Depends on the size of system. The sample tests provided only take a few seconds.
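Velocity-Verlet, the time stepper named in the solution method, is easy to show in isolation. The sketch below is a generic NumPy velocity-Verlet loop for two point particles joined by a harmonic bond, which stands in for the actual Mercedes-Benz force field; the rigid-body rotations and quaternion algebra that CASHEW adds on top are omitted, and all parameter values are arbitrary.

    # Generic velocity-Verlet integration: positions advance with the current
    # forces, velocities with the average of old and new forces. A harmonic
    # bond replaces the real (Mercedes-Benz) force field in this toy example.
    import numpy as np

    def forces(x, k=1.0, r0=1.0):
        d = x[1] - x[0]
        r = np.linalg.norm(d)
        f1 = -k * (r - r0) * d / r       # force on particle 1 from the bond
        return np.array([-f1, f1])       # Newton's third law

    dt, m = 0.01, 1.0
    x = np.array([[0.0, 0.0, 0.0], [1.3, 0.0, 0.0]])   # slightly stretched bond
    v = np.zeros_like(x)
    f = forces(x)

    for _ in range(1000):
        x += v * dt + 0.5 * (f / m) * dt**2
        f_new = forces(x)
        v += 0.5 * (f + f_new) / m * dt
        f = f_new

    print("bond length after 1000 steps:", np.linalg.norm(x[1] - x[0]))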
The FTS atomic spectrum tool (FAST) for rapid analysis of line spectra
NASA Astrophysics Data System (ADS)
Ruffoni, M. P.
2013-07-01
The FTS Atomic Spectrum Tool (FAST) is an interactive graphical program designed to simplify the analysis of atomic emission line spectra obtained from Fourier transform spectrometers. Calculated, predicted and/or known experimental line parameters are loaded alongside experimentally observed spectral line profiles for easy comparison between new experimental data and existing results. Many such line profiles, which could span numerous spectra, may be viewed simultaneously to help the user detect problems from line blending or self-absorption. Once the user has determined that their experimental line profile fits are good, a key feature of FAST is the ability to calculate atomic branching fractions, transition probabilities, and oscillator strengths-and their uncertainties-which is not provided by existing analysis packages. Program SummaryProgram title: FAST: The FTS Atomic Spectrum Tool Catalogue identifier: AEOW_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 293058 No. of bytes in distributed program, including test data, etc.: 13809509 Distribution format: tar.gz Programming language: C++. Computer: Intel x86-based systems. Operating system: Linux/Unix/Windows. RAM: 8 MB minimum. About 50-200 MB for a typical analysis. Classification: 2.2, 2.3, 21.2. Nature of problem: Visualisation of atomic line spectra including the comparison of theoretical line parameters with experimental atomic line profiles. Accurate intensity calibration of experimental spectra, and the determination of observed relative line intensities that are needed for calculating atomic branching fractions and oscillator strengths. Solution method: FAST is centred around a graphical interface, where a user may view sets of experimental line profiles and compare them to calculated data (such as from the Kurucz database [1]), predicted line parameters, and/or previously known experimental results. With additional information on the spectral response of the spectrometer, obtained from a calibrated standard light source, FT spectra may be intensity calibrated. In turn, this permits the user to calculate atomic branching fractions and oscillator strengths, and their respective uncertainties. Running time: Open ended. Defined by the user. References: [1] R.L. Kurucz (2007). URL http://kurucz.harvard.edu/atoms/.
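The final step that FAST automates can be stated compactly: once the relative intensities I_k of all decay branches from a common upper level have been intensity calibrated, the branching fractions are BF_k = I_k/ΣI_j, the transition probabilities follow from the upper-level lifetime as A_k = BF_k/τ, and oscillator strengths follow from the usual conversion. The Python sketch below shows these relations only; every number in it (intensities, wavelengths, lifetime, degeneracies) is invented for illustration, and the conversion constant assumes wavelengths in Å.

    # From calibrated relative line intensities of one upper level to branching
    # fractions, transition probabilities and log(gf). All inputs are invented.
    import math

    intensities = [100.0, 42.0, 7.5]        # calibrated relative intensities I_k
    wavelengths = [4383.5, 5270.4, 6136.6]  # wavelengths in Angstrom
    tau_upper = 6.5e-9                      # upper-level lifetime in seconds
    g_upper, g_lower = [9, 9, 9], [11, 9, 7]

    total = sum(intensities)
    for I, lam, gu, gl in zip(intensities, wavelengths, g_upper, g_lower):
        bf = I / total                      # branching fraction
        A = bf / tau_upper                  # transition probability (s^-1)
        f_abs = 1.499e-16 * lam**2 * (gu / gl) * A   # absorption oscillator strength
        print(f"{lam:8.1f} A  BF={bf:5.3f}  A={A:9.3e} s^-1  log(gf)={math.log10(gl * f_abs):6.3f}")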
NASA Astrophysics Data System (ADS)
Petersen, T. C.; Ringer, S. P.
2010-03-01
Upon discerning the mere shape of an imaged object, as portrayed by projected perimeters, the full three-dimensional scattering density may not be of particular interest. In this situation considerable simplifications to the reconstruction problem are possible, allowing calculations based upon geometric principles. Here we describe and provide an algorithm which reconstructs the three-dimensional morphology of specimens from tilt series of images for application to electron tomography. Our algorithm uses a differential approach to infer the intersection of projected tangent lines with surfaces which define boundaries between regions of different scattering densities within and around the perimeters of specimens. Details of the algorithm implementation are given and explained using reconstruction calculations from simulations, which are built into the code. An experimental application of the algorithm to a nano-sized Aluminium tip is also presented to demonstrate practical analysis for a real specimen. Program summaryProgram title: STOMO version 1.0 Catalogue identifier: AEFS_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFS_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2988 No. of bytes in distributed program, including test data, etc.: 191 605 Distribution format: tar.gz Programming language: C/C++ Computer: PC Operating system: Windows XP RAM: Depends upon the size of experimental data as input, ranging from 200 Mb to 1.5 Gb Supplementary material: Sample output files, for the test run provided, are available. Classification: 7.4, 14 External routines: Dev-C++ ( http://www.bloodshed.net/devcpp.html) Nature of problem: Electron tomography of specimens for which conventional back projection may fail and/or data for which there is a limited angular range. The algorithm does not solve the tomographic back-projection problem but rather reconstructs the local 3D morphology of surfaces defined by varied scattering densities. Solution method: Reconstruction using differential geometry applied to image analysis computations. Restrictions: The code has only been tested with square images and has been developed for only single-axis tilting. Running time: For high quality reconstruction, 5-15 min
Automatic computation of the travelling wave solutions to nonlinear PDEs
NASA Astrophysics Data System (ADS)
Liang, Songxin; Jeffrey, David J.
2008-05-01
Various extensions of the tanh-function method and their implementations for finding explicit travelling wave solutions to nonlinear partial differential equations (PDEs) have been reported in the literature. However, some solutions are often missed by these packages. In this paper, a new algorithm and its implementation called TWS for solving single nonlinear PDEs are presented. TWS is implemented in MAPLE 10. It turns out that, for PDEs whose balancing numbers are not positive integers, TWS works much better than existing packages. Furthermore, TWS obtains more solutions than existing packages for most cases. Program summaryProgram title:TWS Catalogue identifier:AEAM_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEAM_v1_0.html Program obtainable from:CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions:Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.:1250 No. of bytes in distributed program, including test data, etc.:78 101 Distribution format:tar.gz Programming language:Maple 10 Computer:A laptop with 1.6 GHz Pentium CPU Operating system:Windows XP Professional RAM:760 Mbytes Classification:5 Nature of problem:Finding the travelling wave solutions to single nonlinear PDEs. Solution method:Based on tanh-function method. Restrictions:The current version of this package can only deal with single autonomous PDEs or ODEs, not systems of PDEs or ODEs. However, the PDEs can have any finite number of independent space variables in addition to time t. Unusual features:For PDEs whose balancing numbers are not positive integers, TWS works much better than existing packages. Furthermore, TWS obtains more solutions than existing packages for most cases. Additional comments:It is easy to use. Running time:Less than 20 seconds for most cases, between 20 to 100 seconds for some cases, over 100 seconds for few cases. References: [1] E.S. Cheb-Terrab, K. von Bulow, Comput. Phys. Comm. 90 (1995) 102. [2] S.A. Elwakil, S.K. El-Labany, M.A. Zahran, R. Sabry, Phys. Lett. A 299 (2002) 179. [3] E. Fan, Phys. Lett. 277 (2000) 212. [4] W. Malfliet, Amer. J. Phys. 60 (1992) 650. [5] W. Malfliet, W. Hereman, Phys. Scripta 54 (1996) 563. [6] E.J. Parkes, B.R. Duffy, Comput. Phys. Comm. 98 (1996) 288.
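The tanh-function method that TWS extends can be demonstrated in a few lines of another computer algebra system. The SymPy sketch below applies the basic method, with the ansatz u = a0 + a1 tanh(k(x − ct)) (balancing number 1), to Burgers' equation and recovers the travelling wave u = c − 2νk tanh(k(x − ct)); it illustrates the idea only and has none of TWS's handling of non-integer balancing numbers or more general ansätze.

    # Basic tanh-function method for Burgers' equation u_t + u*u_x = nu*u_xx:
    # substitute u = a0 + a1*tanh(k*(x - c*t)), collect powers of tanh and
    # solve the resulting algebraic system for the ansatz coefficients.
    import sympy as sp

    x, t = sp.symbols('x t')
    a0, a1, k, c, nu = sp.symbols('a0 a1 k c nu', nonzero=True)

    T = sp.tanh(k * (x - c * t))
    u = a0 + a1 * T

    residual = sp.expand(sp.diff(u, t) + u * sp.diff(u, x) - nu * sp.diff(u, x, 2))
    coeff_eqs = sp.Poly(residual, T).coeffs()     # coefficients of each power of tanh

    solutions = sp.solve(coeff_eqs, [a0, a1], dict=True)
    print([s for s in solutions if s.get(a1, 0) != 0])   # expect a0 = c, a1 = -2*k*nu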
Symbolic computation of the Hartree-Fock energy from a chiral EFT three-nucleon interaction at N 2LO
NASA Astrophysics Data System (ADS)
Gebremariam, B.; Bogner, S. K.; Duguet, T.
2010-06-01
We present the first of a two-part Mathematica notebook collection that implements a symbolic approach for the application of the density matrix expansion (DME) to the Hartree-Fock (HF) energy from a chiral effective field theory (EFT) three-nucleon interaction at N 2LO. The final output from the notebooks is a Skyrme-like energy density functional that provides a quasi-local approximation to the non-local HF energy. In this paper, we discuss the derivation of the HF energy and its simplification in terms of the scalar/vector-isoscalar/isovector parts of the one-body density matrix. Furthermore, a set of steps is described and illustrated on how to extend the approach to other three-nucleon interactions. Program summaryProgram title: SymbHFNNN Catalogue identifier: AEGC_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 96 666 No. of bytes in distributed program, including test data, etc.: 378 083 Distribution format: tar.gz Programming language: Mathematica 7.1 Computer: Any computer running Mathematica 6.0 and later versions Operating system: Windows Xp, Linux/Unix RAM: 256 Mb Classification: 5, 17.16, 17.22 Nature of problem: The calculation of the HF energy from the chiral EFT three-nucleon interaction at N 2LO involves tremendous spin-isospin algebra. The problem is compounded by the need to eventually obtain a quasi-local approximation to the HF energy, which requires the HF energy to be expressed in terms of scalar/vector-isoscalar/isovector parts of the one-body density matrix. The Mathematica notebooks discussed in this paper solve the latter issue. Solution method: The HF energy from the chiral EFT three-nucleon interaction at N 2LO is cast into a form suitable for an automatic simplification of the spin-isospin traces. Several Mathematica functions and symbolic manipulation techniques are used to obtain the result in terms of the scalar/vector-isoscalar/isovector parts of the one-body density matrix. Running time: Several hours
FEWZ 2.0: A code for hadronic Z production at next-to-next-to-leading order
NASA Astrophysics Data System (ADS)
Gavin, Ryan; Li, Ye; Petriello, Frank; Quackenbush, Seth
2011-11-01
We introduce an improved version of the simulation code FEWZ ( Fully Exclusive W and Z Production) for hadron collider production of lepton pairs through the Drell-Yan process at next-to-next-to-leading order (NNLO) in the strong coupling constant. The program is fully differential in the phase space of leptons and additional hadronic radiation. The new version offers users significantly more options for customization. FEWZ now bins multiple, user-selectable histograms during a single run, and produces parton distribution function (PDF) errors automatically. It also features a significantly improved integration routine, and can take advantage of multiple processor cores locally or on the Condor distributed computing system. We illustrate the new features of FEWZ by presenting numerous phenomenological results for LHC physics. We compare NNLO QCD with initial ATLAS and CMS results, and discuss in detail the effects of detector acceptance on the measurement of angular quantities associated with Z-boson production. We address the issue of technical precision in the presence of severe phase-space cuts. Program summaryProgram title: FEWZ Catalogue identifier: AEJP_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEJP_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 6 280 771 No. of bytes in distributed program, including test data, etc.: 173 027 645 Distribution format: tar.gz Programming language: Fortran 77, C++, Python Computer: Mac, PC Operating system: Mac OSX, Unix/Linux Has the code been vectorized or parallelized?: Yes. User-selectable, 1 to 219 RAM: 200 Mbytes for common parton distribution functions Classification: 11.1 External routines: CUBA numerical integration library, numerous parton distribution sets (see text); these are provided with the code. Nature of problem: Determination of the Drell-Yan Z/photon production cross section and decay into leptons, with kinematic distributions of leptons and jets including full spin correlations, at next-to-next-to-leading order in the strong coupling constant. Solution method: Virtual loop integrals are decomposed into master integrals using automated techniques. Singularities are extracted from real radiation terms via sector decomposition, which separates singularities and maps onto suitable phase space variables. Result is convoluted with parton distribution functions. Each piece is numerically integrated over phase space, which allows arbitrary cuts on the observed particles. Each sample point may be binned during numerical integration, providing histograms, and reweighted by parton distribution function error eigenvectors, which provides PDF errors. Restrictions: Output does not correspond to unweighted events, and cannot be interfaced with a shower Monte Carlo. Additional comments: !!!!! The distribution file for this program is over 170 Mbytes and therefore is not delivered directly when download or E-mail is requested. Instead a html file giving details of how the program can be obtained is sent. Running time: One day for total cross sections with 0.1% integration errors assuming typical cuts, up to 1 week for smooth kinematic distributions with sub-percent integration errors for each bin.
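Two of the features described above, filling histograms while the integration runs and obtaining PDF-style uncertainties by reweighting every sample with a set of alternative weights, are generic Monte Carlo techniques that can be sketched independently of FEWZ. In the toy NumPy example below a simple stand-in function is integrated, each sample is binned with its integration weight, and the same histogram is accumulated under an invented "error set" of reweighting factors; none of this reflects the actual FEWZ observables or PDF sets.

    # Toy version of "bin while you integrate, reweight for error sets": each
    # sample contributes f(x)/N to the integral, is binned in x with that weight,
    # and is also accumulated with alternative (error-set) reweighting factors.
    import numpy as np

    rng = np.random.default_rng(3)
    n_samples, n_bins = 200000, 20
    edges = np.linspace(0.0, 1.0, n_bins + 1)

    f = lambda x: x**2 * (1.0 - x)                 # stand-in for a differential cross section
    error_set = [lambda x: 1.0 + 0.0 * x, lambda x: 1.0 + 0.1 * x, lambda x: 1.0 - 0.1 * x]

    x = rng.random(n_samples)
    w = f(x) / n_samples                           # per-sample integration weight

    central = np.histogram(x, bins=edges, weights=w)[0]
    variations = [np.histogram(x, bins=edges, weights=w * g(x))[0] for g in error_set]

    print("total integral:", central.sum())        # ~ 1/12 for this f
    print("max bin-by-bin shift over the error set:",
          np.max(np.abs(np.array(variations) - central)))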
Model-Driven Development for scientific computing. An upgrade of the RHEEDGr program
NASA Astrophysics Data System (ADS)
Daniluk, Andrzej
2009-11-01
Model-Driven Engineering (MDE) is the software engineering discipline that considers models the most important elements for software development, and for the maintenance and evolution of software through model transformation. Model-Driven Architecture (MDA) is the approach to software development within the Model-Driven Engineering framework. This paper surveys the core MDA technology that was used to upgrade the RHEEDGR program to the C++0x language standard. New version program summaryProgram title: RHEEDGR-09 Catalogue identifier: ADUY_v3_0 Program summary URL:
Ground state of the time-independent Gross Pitaevskii equation
NASA Astrophysics Data System (ADS)
Dion, Claude M.; Cancès, Eric
2007-11-01
We present a suite of programs to determine the ground state of the time-independent Gross-Pitaevskii equation, used in the simulation of Bose-Einstein condensates. The calculation is based on the Optimal Damping Algorithm, ensuring a fast convergence to the true ground state. Versions are given for the one-, two-, and three-dimensional equation, using either a spectral method, well suited for harmonic trapping potentials, or a spatial grid. Program summaryProgram title: GPODA Catalogue identifier: ADZN_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZN_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5339 No. of bytes in distributed program, including test data, etc.: 19 426 Distribution format: tar.gz Programming language: Fortran 90 Computer: ANY (Compilers under which the program has been tested: Absoft Pro Fortran, The Portland Group Fortran 90/95 compiler, Intel Fortran Compiler) RAM: From <1 MB in 1D to ˜10 MB for a large 3D grid Classification: 2.7, 4.9 External routines: LAPACK, BLAS, DFFTPACK Nature of problem: The order parameter (or wave function) of a Bose-Einstein condensate (BEC) is obtained, in a mean field approximation, by the Gross-Pitaevskii equation (GPE) [F. Dalfovo, S. Giorgini, L.P. Pitaevskii, S. Stringari, Rev. Mod. Phys. 71 (1999) 463]. The GPE is a nonlinear Schrödinger-like equation, including here a confining potential. The stationary state of a BEC is obtained by finding the ground state of the time-independent GPE, i.e., the order parameter that minimizes the energy. In addition to the standard three-dimensional GPE, tight traps can lead to effective two- or even one-dimensional BECs, so the 2D and 1D GPEs are also considered. Solution method: The ground state of the time-independent of the GPE is calculated using the Optimal Damping Algorithm [E. Cancès, C. Le Bris, Int. J. Quantum Chem. 79 (2000) 82]. Two sets of programs are given, using either a spectral representation of the order parameter [C.M. Dion, E. Cancès, Phys. Rev. E 67 (2003) 046706], suitable for a (quasi) harmonic trapping potential, or by discretizing the order parameter on a spatial grid. Running time: From seconds in 1D to a few hours for large 3D grids
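To make the nature of the problem concrete, the following minimal sketch finds the 1D ground state on a spatial grid by plain normalized imaginary-time (gradient-flow) iteration. It is emphatically not the Optimal Damping Algorithm used by GPODA, which converges far more robustly; the trap, interaction strength, grid and step size are illustrative values in dimensionless units.

import numpy as np

# Minimal 1D Gross-Pitaevskii ground-state sketch: imaginary-time relaxation in
# dimensionless units.  This is NOT the Optimal Damping Algorithm implemented in
# GPODA; it only illustrates the minimization problem the suite solves.
n, L = 128, 16.0                                   # grid points, box length (assumed)
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
V = 0.5 * x**2                                     # harmonic trap
g = 10.0                                           # interaction strength (assumed)
dtau = 1.0e-3

psi = np.exp(-x**2 / 2.0)                          # Gaussian initial guess
psi /= np.sqrt(np.sum(psi**2) * dx)

def apply_H(p):
    kinetic = np.fft.ifft(0.5 * k**2 * np.fft.fft(p)).real   # spectral kinetic term
    return kinetic + (V + g * p**2) * p            # mean-field Hamiltonian acting on p

for _ in range(20000):
    psi = psi - dtau * apply_H(psi)                # one imaginary-time step
    psi /= np.sqrt(np.sum(psi**2) * dx)            # restore unit norm

mu = np.sum(psi * apply_H(psi)) * dx               # chemical potential of the ground state
print("chemical potential:", mu)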
FAPT: A Mathematica package for calculations in QCD Fractional Analytic Perturbation Theory
NASA Astrophysics Data System (ADS)
Bakulev, Alexander P.; Khandramai, Vyacheslav L.
2013-01-01
We provide here all the procedures in Mathematica which are needed for the computation of the analytic images of the strong coupling constant powers in Minkowski (A(s;nf) and Aνglob(s)) and Euclidean (A(Q2;nf) and Aνglob(Q2)) domains at arbitrary energy scales (s and Q2, correspondingly) for both schemes — with fixed number of active flavours nf=3,4,5,6 and the global one with taking into account all heavy-quark thresholds. These singularity-free couplings are inevitable elements of Analytic Perturbation Theory (APT) in QCD, proposed in [10,69,70], and its generalization — Fractional APT, suggested in [42,46,43], needed to apply the APT imperative for renormalization-group improved hadronic observables. Program summaryProgram title: FAPT Catalogue identifier: AENJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1985 No. of bytes in distributed program, including test data, etc.: 1895776 Distribution format: tar.gz Programming language: Mathematica. Computer: Any work-station or PC where Mathematica is running. Operating system: Windows XP, Mathematica (versions 5 and 7). Classification: 11.5. Nature of problem: The values of analytic images A(Q2) and A(s) of the QCD running coupling powers αsν(Q2) in Euclidean and Minkowski regions, correspondingly, are determined through the spectral representation in the QCD Analytic Perturbation Theory (APT). In the program FAPT we collect all relevant formulas and various procedures which allow for a convenient evaluation of A(Q2) and A(s) using numerical integrations of the relevant spectral densities. Solution method: FAPT uses Mathematica functions to calculate different spectral densities and then performs numerical integration of these spectral integrals to obtain analytic images of different objects. Restrictions: It could be that for an unphysical choice of the input parameters the results are without any meaning. Running time: For all operations the run time does not exceed a few seconds. Usually numerical integration is not fast, so that we advise the use of arrays of precalculated data and then to apply the routine Interpolate(as shown in supplied example of the program usage, namely in the notebook FAPT_Interp.nb).
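The advice to precalculate and interpolate can be illustrated with a short stand-alone sketch. Everything below is an assumption for illustration only: a one-loop spectral density is used in place of FAPT's densities, Lambda and the beta-function normalization are example values, and the dispersion integral A(Q^2) = (1/pi) Int_0^inf rho(sigma)/(sigma+Q^2) d sigma is mapped onto a finite interval and evaluated with a plain trapezoid rule.

import numpy as np

# Hedged sketch (not FAPT itself): tabulate an APT-like Euclidean coupling once on a
# log grid in Q^2, then interpolate, in the spirit of the "precalculate, then
# Interpolate" advice above.
Lambda2 = 0.35**2            # Lambda_QCD^2 [GeV^2], illustrative value
b0 = 9.0 / (4.0 * np.pi)     # one-loop beta coefficient, nf = 3, for alpha_s = 1/(b0 ln(Q^2/Lambda^2))

def A1(Q2, n=4001):
    # Substituting ln(sigma/Lambda^2) = pi*tan(u) maps the dispersion integral onto
    # u in (-pi/2, pi/2) with a bounded, smooth integrand sigma/(sigma+Q^2).
    u = np.linspace(-np.pi / 2 + 1e-6, np.pi / 2 - 1e-6, n)
    expo = np.clip(np.log(Q2 / Lambda2) - np.pi * np.tan(u), -700.0, 700.0)
    weight = 1.0 / (1.0 + np.exp(expo))            # equals sigma/(sigma+Q^2)
    du = u[1] - u[0]
    integral = du * (weight.sum() - 0.5 * (weight[0] + weight[-1]))   # trapezoid rule
    return integral / (np.pi * b0)

# Precompute once, interpolate many times (cheap inside fits or plots).
Q2_grid = np.logspace(-2, 4, 121)
A_grid = np.array([A1(q) for q in Q2_grid])
A_interp = lambda q2: np.interp(np.log(q2), np.log(Q2_grid), A_grid)

# Cross-check against the closed one-loop analytic coupling (1/b0)[1/ln(Q^2/L^2) - L^2/(Q^2-L^2)].
Q2 = 10.0
closed = (1.0 / b0) * (1.0 / np.log(Q2 / Lambda2) - Lambda2 / (Q2 - Lambda2))
print(A_interp(Q2), A1(Q2), closed)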
CADNA: a library for estimating round-off error propagation
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie
2008-06-01
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code. Program summaryProgram title:CADNA Catalogue identifier:AEAT_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html Program obtainable from:CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions:Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.:53 420 No. of bytes in distributed program, including test data, etc.:566 495 Distribution format:tar.gz Programming language:Fortran Computer:PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM Operating system:LINUX, UNIX Classification:4.14, 6.5, 20 Nature of problem:A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method:The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Restrictions:CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays. Running time:The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected. References:The CADNA library, URL address: http://www.lip6.fr/cadna. J.-M. Chesneaux, L'arithmétique Stochastique et le Logiciel CADNA, Habilitation á diriger des recherches, Université Pierre et Marie Curie, Paris, 1995. J. Vignes, A stochastic arithmetic for reliable scientific computation, Math. Comput. Simulation 35 (1993) 233-261. J. Vignes, Discrete stochastic arithmetic for validating results of numerical software, Numer. Algorithms 37 (2004) 377-390.
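The digit-counting idea can be summarized in a few lines. The sketch below is not CADNA, which instruments every floating-point operation of the user's Fortran code; it merely shows how, given N results of the same computation obtained under random rounding, Discrete Stochastic Arithmetic estimates the number of decimal digits they share. The Student t-value shown corresponds to three samples at 95% confidence and is an assumption of this sketch.

import math

# Estimate the number of exact significant decimal digits from N randomly-rounded runs.
def exact_digits(samples, student_t=4.303):        # t-value for N = 3, 95% confidence (assumed)
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    if var == 0.0:
        return float("inf"), mean                  # all runs identical
    c = math.log10(math.sqrt(n) * abs(mean) / (student_t * math.sqrt(var)))
    return max(c, 0.0), mean

# Example: three runs of an ill-conditioned computation that agree to about 4 digits.
runs = [0.73412958, 0.73415102, 0.73409877]
digits, value = exact_digits(runs)
print(f"estimated exact significant digits: {digits:.1f}  (result ~ {value:.6g})")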
Swan: A tool for porting CUDA programs to OpenCL
NASA Astrophysics Data System (ADS)
Harvey, M. J.; De Fabritiis, G.
2011-04-01
The use of modern, high-performance graphical processing units (GPUs) for acceleration of scientific computation has been widely reported. The majority of this work has used the CUDA programming model supported exclusively by GPUs manufactured by NVIDIA. An industry standardisation effort has recently produced the OpenCL specification for GPU programming. This offers the benefits of hardware-independence and reduced dependence on proprietary tool-chains. Here we describe a source-to-source translation tool, "Swan" for facilitating the conversion of an existing CUDA code to use the OpenCL model, as a means to aid programmers experienced with CUDA in evaluating OpenCL and alternative hardware. While the performance of equivalent OpenCL and CUDA code on fixed hardware should be comparable, we find that a real-world CUDA application ported to OpenCL exhibits an overall 50% increase in runtime, a reduction in performance attributable to the immaturity of contemporary compilers. The ported application is shown to have platform independence, running on both NVIDIA and AMD GPUs without modification. We conclude that OpenCL is a viable platform for developing portable GPU applications but that the more mature CUDA tools continue to provide best performance. Program summaryProgram title: Swan Catalogue identifier: AEIH_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIH_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Public License version 2 No. of lines in distributed program, including test data, etc.: 17 736 No. of bytes in distributed program, including test data, etc.: 131 177 Distribution format: tar.gz Programming language: C Computer: PC Operating system: Linux RAM: 256 Mbytes Classification: 6.5 External routines: NVIDIA CUDA, OpenCL Nature of problem: Graphical Processing Units (GPUs) from NVIDIA are preferentially programed with the proprietary CUDA programming toolkit. An alternative programming model promoted as an industry standard, OpenCL, provides similar capabilities to CUDA and is also supported on non-NVIDIA hardware (including multicore ×86 CPUs, AMD GPUs and IBM Cell processors). The adaptation of a program from CUDA to OpenCL is relatively straightforward but laborious. The Swan tool facilitates this conversion. Solution method:Swan performs a translation of CUDA kernel source code into an OpenCL equivalent. It also generates the C source code for entry point functions, simplifying kernel invocation from the host program. A concise host-side API abstracts the CUDA and OpenCL APIs. A program adapted to use Swan has no dependency on the CUDA compiler for the host-side program. The converted program may be built for either CUDA or OpenCL, with the selection made at compile time. Restrictions: No support for CUDA C++ features Running time: Nominal
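The flavour of the kernel translation can be conveyed by a toy example. The mapping below covers only a handful of well-known CUDA/OpenCL correspondences and is not Swan's implementation (which also rewrites pointer address-space qualifiers, kernel launch syntax and the host-side API); it is a sketch for orientation only.

import re

# Toy keyword mapping from CUDA kernel source to OpenCL-style identifiers.
CUDA_TO_OPENCL = {
    r"\b__global__\b": "__kernel",
    r"\b__shared__\b": "__local",
    r"\b__syncthreads\(\)": "barrier(CLK_LOCAL_MEM_FENCE)",
    r"\bthreadIdx\.x\b": "get_local_id(0)",
    r"\bblockIdx\.x\b": "get_group_id(0)",
    r"\bblockDim\.x\b": "get_local_size(0)",
}

def translate(src: str) -> str:
    for pattern, repl in CUDA_TO_OPENCL.items():
        src = re.sub(pattern, repl, src)
    return src

cuda_kernel = """
__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}
"""
print(translate(cuda_kernel))

Note that a faithful translation of even this small kernel would additionally have to tag the pointer argument with the __global address-space qualifier, one of the many details a real translator such as Swan has to handle.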
Resolution of singularities for multi-loop integrals
NASA Astrophysics Data System (ADS)
Bogner, Christian; Weinzierl, Stefan
2008-04-01
We report on a program for the numerical evaluation of divergent multi-loop integrals. The program is based on iterated sector decomposition. We improve the original algorithm of Binoth and Heinrich such that the program is guaranteed to terminate. The program can be used to compute numerically the Laurent expansion of divergent multi-loop integrals regulated by dimensional regularisation. The symbolic and the numerical steps of the algorithm are combined into one program. Program summaryProgram title: sector_decomposition Catalogue identifier: AEAG_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEAG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 47 506 No. of bytes in distributed program, including test data, etc.: 328 485 Distribution format: tar.gz Programming language: C++ Computer: all Operating system: Unix RAM: Depending on the complexity of the problem Classification: 4.4 External routines: GiNaC, available from http://www.ginac.de, GNU scientific library, available from http://www.gnu.org/software/gsl Nature of problem: Computation of divergent multi-loop integrals. Solution method: Sector decomposition. Restrictions: Only limited by the available memory and CPU time. Running time: Depending on the complexity of the problem.
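The basic move of sector decomposition can be seen in a two-line textbook example (the program iterates this step automatically and in many variables):
\[
I=\int_0^1\!dx\int_0^1\!dy\,(x+y)^{-2+\epsilon}
 \;=\;2\int_0^1\!dx\,x^{-1+\epsilon}\int_0^1\!dt\,(1+t)^{-2+\epsilon}
 \;=\;\frac{1}{\epsilon}+1-\ln 2+\mathcal{O}(\epsilon),
\]
where the two sectors x>y and y>x have been mapped back onto the unit square by substituting y=xt (respectively x=yt). The overlapping singularity at x=y=0 is thereby factorized into the explicit integral of x^{-1+\epsilon}, which gives the 1/\epsilon pole, while the remaining integrand can be expanded in \epsilon and integrated numerically; combining the symbolic decomposition with that numerical integration is exactly what the program automates.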
Accelerating Monte Carlo simulations with an NVIDIA ® graphics processor
NASA Astrophysics Data System (ADS)
Martinsen, Paul; Blaschke, Johannes; Künnemeyer, Rainer; Jordan, Robert
2009-10-01
Modern graphics cards, commonly used in desktop computers, have evolved beyond a simple interface between processor and display to incorporate sophisticated calculation engines that can be applied to general purpose computing. The Monte Carlo algorithm for modelling photon transport in turbid media has been implemented on an NVIDIA ® 8800 GT graphics card using the CUDA toolkit. The Monte Carlo method relies on following the trajectory of millions of photons through the sample, often taking hours or days to complete. The graphics-processor implementation, processing roughly 110 million scattering events per second, was found to run more than 70 times faster than a similar, single-threaded implementation on a 2.67 GHz desktop computer. Program summaryProgram title: Phoogle-C/Phoogle-G Catalogue identifier: AEEB_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 51 264 No. of bytes in distributed program, including test data, etc.: 2 238 805 Distribution format: tar.gz Programming language: C++ Computer: Designed for Intel PCs. Phoogle-G requires a NVIDIA graphics card with support for CUDA 1.1 Operating system: Windows XP Has the code been vectorised or parallelized?: Phoogle-G is written for SIMD architectures RAM: 1 GB Classification: 21.1 External routines: Charles Karney Random number library. Microsoft Foundation Class library. NVIDA CUDA library [1]. Nature of problem: The Monte Carlo technique is an effective algorithm for exploring the propagation of light in turbid media. However, accurate results require tracing the path of many photons within the media. The independence of photons naturally lends the Monte Carlo technique to implementation on parallel architectures. Generally, parallel computing can be expensive, but recent advances in consumer grade graphics cards have opened the possibility of high-performance desktop parallel-computing. Solution method: In this pair of programmes we have implemented the Monte Carlo algorithm described by Prahl et al. [2] for photon transport in infinite scattering media to compare the performance of two readily accessible architectures: a standard desktop PC and a consumer grade graphics card from NVIDIA. Restrictions: The graphics card implementation uses single precision floating point numbers for all calculations. Only photon transport from an isotropic point-source is supported. The graphics-card version has no user interface. The simulation parameters must be set in the source code. The desktop version has a simple user interface; however some properties can only be accessed through an ActiveX client (such as Matlab). Additional comments: The random number library used has a LGPL ( http://www.gnu.org/copyleft/lesser.html) licence. Running time: Runtime can range from minutes to months depending on the number of photons simulated and the optical properties of the medium. References:http://www.nvidia.com/object/cuda_home.html. S. Prahl, M. Keijzer, Sl. Jacques, A. Welch, SPIE Institute Series 5 (1989) 102.
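For orientation, the random walk that each GPU thread performs looks roughly as follows. This is a simplified sketch (isotropic scattering, implicit absorption via a photon weight, illustrative optical coefficients); it is not the Phoogle source and omits the anisotropic phase function, roulette and all GPU-specific machinery.

import numpy as np

rng = np.random.default_rng(1)

# Textbook Monte Carlo photon random walk: isotropic point source in an infinite
# homogeneous medium, isotropic scattering, implicit absorption through a weight.
mu_s, mu_a = 10.0, 0.1                             # scattering / absorption coefficients [1/cm] (assumed)
mu_t = mu_s + mu_a
albedo = mu_s / mu_t
n_photons, n_steps = 20000, 200

def isotropic_directions(n):
    u = rng.uniform(-1.0, 1.0, n)                  # cos(theta)
    phi = rng.uniform(0.0, 2 * np.pi, n)
    s = np.sqrt(1.0 - u**2)
    return np.column_stack([s * np.cos(phi), s * np.sin(phi), u])

pos = np.zeros((n_photons, 3))
weight = np.ones(n_photons)
direction = isotropic_directions(n_photons)

for _ in range(n_steps):
    step = -np.log(1.0 - rng.random(n_photons)) / mu_t   # free path from Beer-Lambert law
    pos += direction * step[:, None]
    weight *= albedo                               # implicit absorption at each event
    direction = isotropic_directions(n_photons)    # isotropic scattering: new direction

r = np.linalg.norm(pos, axis=1)
print("weighted mean distance from source:", np.average(r, weights=weight))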
Simulation of n-qubit quantum systems. V. Quantum measurements
NASA Astrophysics Data System (ADS)
Radtke, T.; Fritzsche, S.
2010-02-01
The FEYNMAN program has been developed during the last years to support case studies on the dynamics and entanglement of n-qubit quantum registers. Apart from basic transformations and (gate) operations, it currently supports a good number of separability criteria and entanglement measures, quantum channels as well as the parametrizations of various frequently applied objects in quantum information theory, such as (pure and mixed) quantum states, hermitian and unitary matrices or classical probability distributions. With the present update of the FEYNMAN program, we provide a simple access to (the simulation of) quantum measurements. This includes not only the widely-applied projective measurements upon the eigenspaces of some given operator but also single-qubit measurements in various pre- and user-defined bases as well as the support for two-qubit Bell measurements. In addition, we help perform generalized and POVM measurements. Knowing the importance of measurements for many quantum information protocols, e.g., one-way computing, we hope that this update makes the FEYNMAN code an attractive and versatile tool for both, research and education. New version program summaryProgram title: FEYNMAN Catalogue identifier: ADWE_v5_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADWE_v5_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 27 210 No. of bytes in distributed program, including test data, etc.: 1 960 471 Distribution format: tar.gz Programming language: Maple 12 Computer: Any computer with Maple software installed Operating system: Any system that supports Maple; the program has been tested under Microsoft Windows XP and Linux Classification: 4.15 Catalogue identifier of previous version: ADWE_v4_0 Journal reference of previous version: Comput. Phys. Commun. 179 (2008) 647 Does the new version supersede the previous version?: Yes Nature of problem: During the last decade, the field of quantum information science has largely contributed to our understanding of quantum mechanics, and has provided also new and efficient protocols that are used on quantum entanglement. To further analyze the amount and transfer of entanglement in n-qubit quantum protocols, symbolic and numerical simulations need to be handled efficiently. Solution method: Using the computer algebra system Maple, we developed a set of procedures in order to support the definition, manipulation and analysis of n-qubit quantum registers. These procedures also help to deal with (unitary) logic gates and (nonunitary) quantum operations and measurements that act upon the quantum registers. All commands are organized in a hierarchical order and can be used interactively in order to simulate and analyze the evolution of n-qubit quantum systems, both in ideal and noisy quantum circuits. Reasons for new version: Until the present, the FEYNMAN program supported the basic data structures and operations of n-qubit quantum registers [1], a good number of separability and entanglement measures [2], quantum operations (noisy channels) [3] as well as the parametrizations of various frequently applied objects, such as (pure and mixed) quantum states, hermitian and unitary matrices or classical probability distributions [4]. 
With the current extension, we add all necessary features to simulate quantum measurements, including projective measurements in various single-qubit bases and in the two-qubit Bell basis, as well as POVM measurements. Together with the previously implemented functionality, this greatly enhances the possibilities for analyzing quantum information protocols in which measurements play a central role, e.g., one-way computation. Running time: Most commands require ⩽10 seconds of processor time on a Pentium 4 processor with ⩾2 GHz or newer, if they work with quantum registers of five or fewer qubits. Moreover, about 5-20 MB of working memory is typically needed (in addition to the memory for the Maple environment itself). However, especially when working with symbolic expressions, the requirements on CPU time and memory depend critically on the size of the quantum registers, owing to the exponential growth of the dimension of the associated Hilbert space. For example, complex (symbolic) noise models, i.e. with several Kraus operators, may result in very large expressions that dramatically slow down the evaluation of e.g. distance measures or the final-state entropy. In these cases, Maple's assume facility sometimes helps to reduce the complexity of the symbolic expressions, but more often than not only a numerical evaluation is feasible. Since the various commands can be applied to quite different scenarios, no general scaling rule can be given for the CPU time or memory requirements. References: [1] T. Radtke, S. Fritzsche, Comput. Phys. Commun. 173 (2005) 91. [2] T. Radtke, S. Fritzsche, Comput. Phys. Commun. 175 (2006) 145. [3] T. Radtke, S. Fritzsche, Comput. Phys. Commun. 176 (2007) 617. [4] T. Radtke, S. Fritzsche, Comput. Phys. Commun. 179 (2008) 647.
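The measurement semantics added here is the standard one and can be stated compactly in a NumPy sketch (illustrative only; the FEYNMAN code itself is a Maple package): outcome i of a projective measurement {P_i} on a register in state rho occurs with probability p_i = Tr(P_i rho), after which the register is left in P_i rho P_i / p_i.

import numpy as np

# Projective measurement of qubit 0 of a two-qubit Bell state in the computational basis.
ket0 = np.array([1.0, 0.0]); ket1 = np.array([0.0, 1.0])
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho = np.outer(bell, bell.conj())

I2 = np.eye(2)
projectors = [np.kron(np.outer(v, v), I2) for v in (ket0, ket1)]   # P_i acts on qubit 0

rng = np.random.default_rng(7)
probs = [np.real(np.trace(P @ rho @ P)) for P in projectors]
outcome = rng.choice(len(projectors), p=probs)
post = projectors[outcome] @ rho @ projectors[outcome] / probs[outcome]

print("outcome probabilities:", probs)             # approximately [0.5, 0.5] for the Bell state
print("post-measurement state:\n", post.round(3))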
The Grid[Way] Job Template Manager, a tool for parameter sweeping
NASA Astrophysics Data System (ADS)
Lorca, Alejandro; Huedo, Eduardo; Llorente, Ignacio M.
2011-04-01
Parameter sweeping is a widely used algorithmic technique in computational science. It is specially suited for high-throughput computing since the jobs evaluating the parameter space are loosely coupled or independent. A tool that integrates the modeling of a parameter study with the control of jobs in a distributed architecture is presented. The main task is to facilitate the creation and deletion of job templates, which are the elements describing the jobs to be run. Extra functionality relies upon the GridWay Metascheduler, acting as the middleware layer for job submission and control. It supports interesting features like multi-dimensional sweeping space, wildcarding of parameters, functional evaluation of ranges, value-skipping and job template automatic indexation. The use of this tool increases the reliability of the parameter sweep study thanks to the systematic bookkeeping of job templates and respective job statuses. Furthermore, it simplifies the porting of the target application to the grid reducing the required amount of time and effort. Program summaryProgram title: Grid[Way] Job Template Manager (version 1.0) Catalogue identifier: AEIE_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIE_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Apache license 2.0 No. of lines in distributed program, including test data, etc.: 3545 No. of bytes in distributed program, including test data, etc.: 126 879 Distribution format: tar.gz Programming language: Perl 5.8.5 and above Computer: Any (tested on PC x86 and x86_64) Operating system: Unix, GNU/Linux (tested on Ubuntu 9.04, Scientific Linux 4.7, centOS 5.4), Mac OS X (tested on Snow Leopard 10.6) RAM: 10 MB Classification: 6.5 External routines: The GridWay Metascheduler [1]. Nature of problem: To parameterize and manage an application running on a grid or cluster. Solution method: Generation of job templates as a cross product of the input parameter sets. Also management of the job template files including the job submission to the grid, control and information retrieval. Restrictions: The parameter sweep is limited by disk space during generation of the job templates. The wild-carding of parameters cannot be done in decreasing order. Job submission, control and information is delegated to the GridWay Metascheduler. Running time: From half a second in the simplest operation to a few minutes for thousands of exponential sampling parameters.
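The core bookkeeping, generating one job template per point of the cross product of the parameter sets, fits in a few lines. The sketch below is illustrative only: the actual tool is written in Perl, supports wildcards, functional ranges and value-skipping, and submits through the GridWay Metascheduler, whereas the template text, file names and parameters here are invented.

import itertools, pathlib

# Build one job template per element of the cross product of the parameter sets.
parameters = {
    "BETA":  [0.1, 0.2, 0.4],
    "SEED":  range(1, 4),
    "MODEL": ["ising", "potts"],
}

template = """EXECUTABLE = run_simulation
ARGUMENTS  = --beta {BETA} --seed {SEED} --model {MODEL}
STDOUT_FILE = out.{index}
"""

outdir = pathlib.Path("templates")
outdir.mkdir(exist_ok=True)
names, value_sets = zip(*parameters.items())
for index, values in enumerate(itertools.product(*value_sets)):
    job = dict(zip(names, values), index=index)
    (outdir / f"job_{index:04d}.jt").write_text(template.format(**job))

print("generated", index + 1, "job templates")     # 3 * 3 * 2 = 18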
Reduze - Feynman integral reduction in C++
NASA Astrophysics Data System (ADS)
Studerus, C.
2010-07-01
Reduze is a computer program for reducing Feynman integrals to master integrals employing a Laporta algorithm. The program is written in C++ and uses classes provided by the GiNaC library to perform the simplifications of the algebraic prefactors in the system of equations. Reduze offers the possibility to run reductions in parallel. Program summaryProgram title: Reduze Catalogue identifier: AEGE_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGE_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: yes No. of lines in distributed program, including test data, etc.: 55 433 No. of bytes in distributed program, including test data, etc.: 554 866 Distribution format: tar.gz Programming language: C++ Computer: All Operating system: Unix/Linux Number of processors used: The number of processors is problem dependent. More than one is possible, but not arbitrarily many. RAM: Depends on the complexity of the system. Classification: 4.4, 5 External routines: CLN ( http://www.ginac.de/CLN/), GiNaC ( http://www.ginac.de/) Nature of problem: Solving large systems of linear equations with Feynman integrals as unknowns and rational polynomials as prefactors. Solution method: Using a Gauss/Laporta algorithm to solve the system of equations. Restrictions: Limitations depend on the complexity of the system (number of equations, number of kinematic invariants). Running time: Depends on the complexity of the system.
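At its core a Laporta-type reduction repeatedly solves linear systems exactly. The sketch below shows only that arithmetic ingredient, Gaussian elimination over exact rationals; in a real reduction the unknowns are Feynman integrals and the coefficients are rational functions of the kinematic invariants and the space-time dimension (handled by GiNaC in Reduze), not plain numbers.

from fractions import Fraction as F

def solve_exact(A, b):
    """Gauss-Jordan elimination with exact rational arithmetic (assumes a unique solution)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]          # augmented matrix
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]                   # swap in a non-zero pivot
        M[col] = [v / M[col][col] for v in M[col]]            # normalize the pivot row
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [vr - factor * vc for vr, vc in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

A = [[F(2), F(1), F(-1)],
     [F(1), F(3), F(2)],
     [F(1), F(0), F(1)]]
b = [F(1), F(0), F(2)]
x = solve_exact(A, b)
print(x)                                                      # exact rationals, no round-off
assert all(sum(a * xi for a, xi in zip(row, x)) == bi for row, bi in zip(A, b))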
MCNP output data analysis with ROOT (MODAR)
NASA Astrophysics Data System (ADS)
Carasco, C.
2010-12-01
MCNP Output Data Analysis with ROOT (MODAR) is a tool based on CERN's ROOT software. MODAR has been designed to handle time-energy data issued by MCNP simulations of neutron inspection devices using the associated particle technique. MODAR exploits ROOT's Graphical User Interface and functionalities to visualize and process MCNP simulation results in a fast and user-friendly way. MODAR allows to take into account the detection system time resolution (which is not possible with MCNP) as well as detectors energy response function and counting statistics in a straightforward way. New version program summaryProgram title: MODAR Catalogue identifier: AEGA_v1_1 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGA_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 150 927 No. of bytes in distributed program, including test data, etc.: 4 981 633 Distribution format: tar.gz Programming language: C++ Computer: Most Unix workstations and PCs Operating system: Most Unix systems, Linux and windows, provided the ROOT package has been installed. Examples where tested under Suse Linux and Windows XP. RAM: Depends on the size of the MCNP output file. The example presented in the article, which involves three two dimensional 139×740 bins histograms, allocates about 60 MB. These data are running under ROOT and include consumption by ROOT itself. Classification: 17.6 Catalogue identifier of previous version: AEGA_v1_0 Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 1161 External routines: ROOT version 5.24.00 ( http://root.cern.ch/drupal/) Does the new version supersede the previous version?: Yes Nature of problem: The output of a MCNP simulation is an ascii file. The data processing is usually performed by copying and pasting the relevant parts of the ascii file into Microsoft Excel. Such an approach is satisfactory when the quantity of data is small but is not efficient when the size of the simulated data is large, for example when time-energy correlations are studied in detail such as in problems involving the associated particle technique. In addition, since the finite time resolution of the simulated detector cannot be modeled with MCNP, systems in which time-energy correlation is crucial cannot be described in a satisfactory way. Finally, realistic particle energy deposit in detectors is calculated with MCNP in a two step process involving type-5 then type-8 tallies. In the first step, the photon flux energy spectrum associated to a time region is selected and serves as a source energy distribution for the second step. Thus, several files must be manipulated before getting the result, which can be time consuming if one needs to study several time regions or different detectors performances. In the same way, modeling counting statistics obtained in a limited acquisition time requires several steps and can also be time consuming. Solution method: In order to overcome the previous limitations, the MODAR C++ code has been written to make use of CERN's ROOT data analysis software. MCNP output data are read from the MCNP output file with dedicated routines. Two dimensional histograms are filled and can be handled efficiently within the ROOT framework. To keep a user friendly analysis tool, all processing and data display can be done by means of ROOT Graphical User Interface. 
Specific routines have been written to include the detectors' finite time resolution and energy response functions, as well as counting statistics, in a straightforward way. Reasons for new version: For applications involving the Associated Particle Technique, a large number of gamma rays are produced by fast neutron interactions. To study the energy spectra, it is useful to identify the gamma-ray energy peaks in a straightforward way. Therefore, the possibility of showing gamma rays corresponding to specific reactions has been added to MODAR. Summary of revisions: A gamma-ray database can now be used to identify, in the energy spectra, gamma-ray peaks together with their first and second escape peaks. Histograms can be scaled by the number of source particles to evaluate the expected number of counts without statistical uncertainties. Additional comments: The possibility of adding tallies has also been incorporated in MODAR in order to describe systems in which the signal from several detectors can be summed. Moreover, MODAR can be adapted to handle other problems involving two-dimensional data. Running time: The CPU time needed to smear a two-dimensional histogram depends on the size of the histogram. In the presented example, the time-energy smearing of one of the 139×740 two-dimensional histograms takes 3 minutes on a DELL computer equipped with an INTEL Core 2 processor.
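The detector-resolution folding described above amounts to smearing a two-dimensional time-energy histogram, which can be sketched outside ROOT in a few lines (bin widths, resolutions and histogram contents below are invented for illustration):

import numpy as np
from scipy.ndimage import gaussian_filter

# Fold a simulated time-energy histogram with Gaussian resolutions in both variables,
# something the transport code itself does not model.
time_bins, energy_bins = 139, 740                  # as in the example quoted above
dt_ns, dE_MeV = 1.0, 0.01                          # assumed bin widths
counts = np.zeros((time_bins, energy_bins))
counts[60, 470] = 1e4                              # a single sharp (time, energy) peak

sigma_t_ns, sigma_E_MeV = 2.0, 0.05                # detector resolutions (assumed)
smeared = gaussian_filter(counts, sigma=(sigma_t_ns / dt_ns, sigma_E_MeV / dE_MeV))

# Counting statistics for a finite acquisition can then be emulated by Poisson sampling.
observed = np.random.default_rng(0).poisson(smeared)
print(smeared.sum(), observed.sum())               # the smearing itself conserves the total counts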
NASA Astrophysics Data System (ADS)
Sanna, N.; Baccarelli, I.; Morelli, G.
2009-12-01
SCELib is a computer program which implements the Single Center Expansion (SCE) method to describe molecular electronic densities and the interaction potentials between a charged projectile (electron or positron) and a target molecular system. The first version (CPC Catalog identifier ADMG_v1_0) was submitted to the CPC Program Library in 2000, and version 2.0 (ADMG_v2_0) was submitted in 2004. We here announce the new release 3.0 which presents additional features with respect to the previous versions aiming at a significative enhance of its capabilities to deal with larger molecular systems. SCELib 3.0 allows for ab initio effective core potential (ECP) calculations of the molecular wavefunctions to be used in the SCE method in addition to the standard all-electron description of the molecule. The list of supported architectures has been updated and the code has been ported to platforms based on accelerating coprocessors, such as the NVIDIA GPGPU and the new parallel model adopted is able to efficiently run on a mixed many-core computing system. Program summaryProgram title: SCELib3.0 Catalogue identifier: ADMG_v3_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADMG_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2 018 862 No. of bytes in distributed program, including test data, etc.: 4 955 014 Distribution format: tar.gz Programming language: C Compilers used: xlc V8.x, Intel C V10.x, Portland Group V7.x, nvcc V2.x Computer: All SMP platforms based on AIX, Linux and SUNOS operating systems over SPARC, POWER, Intel Itanium2, X86, em64t and Opteron processors Operating system: SUNOS, IBM AIX, Linux RedHat (Enterprise), Linux SuSE (SLES) Has the code been vectorized or parallelized?: Yes. 1 to 32 (CPU or GPU) used RAM: Up to 32 GB depending on the molecular system and runtime parameters Classification: 16.5 Catalogue identifier of previous version: ADMG_v2_0 Journal reference of previous version: Comput. Phys. Comm. 162 (2004) 51 External routines: CUDA libraries (SDK V2.x). Does the new version supersede the previous version?: Yes Nature of problem: In this set of codes an efficient procedure is implemented to describe the wavefunction and related molecular properties of a polyatomic molecular system within the Single Center of Expansion (SCE) approximation. The resulting SCE wavefunction, electron density, electrostatic and correlation/polarization potentials can then be used in a wide variety of applications, such as electron-molecule scattering calculations, quantum chemistry studies, biomodelling and drug design. Solution method: The polycentre Hartree-Fock solution for a molecule of arbitrary geometry, based on linear combination of Gaussian-Type Orbital (GTO), is expanded over a single center, typically the Center Of Mass (C.O.M.), by means of a Gauss Legendre/Chebyschev quadrature over the θ,φ angular coordinates. The resulting SCE numerical wavefunction is then used to calculate the one-particle electron density, the electrostatic potential and two different models for the correlation/polarization potentials induced by the impinging electron, which have the correct asymptotic behavior for the leading dipole molecular polarizabilities. 
Reasons for new version: The present release of SCELib allows the study of larger molecular systems with respect to the previous versions by means of theoretical and technological advances, with the first implementation of the code over a many-core computing system. Summary of revisions: The major features added with respect to SCELib Version 2.0 are molecular wavefunctions obtained via the Los Alamos (Hay and Wadt) LAN ECP plus DZ description of the inner-shell electrons (on Na-La, Hf-Bi elements) [1] can now be single-center-expanded; the addition required modifications of: (i) the filtering code readgau, (ii) the main reading function setinp, (iii) the sphint code (including changes to the CalcMO code), (iv) the densty code, (v) the vst code; the classes of platforms supported now include two more architectures based on accelerated coprocessors (Nvidia GSeries GPGPU and ClearSpeed e720 (ClearSpeed version, experimental; initial preliminary porting of the sphint() function not for production runs - see the code documentation for additional detail). A single-precision representation for real numbers in the SCE mapping of the GTOs ( sphint code), has been implemented into the new code; the I h symmetry point group for the molecular systems has been added to those already allowed in the SCE procedure; the orientation of the molecular axis system for the Cs (planar) symmetry has been changed in accord with the standard orientation adopted by the latest version of the quantum chemistry code (Gaussian C03 [2]), which is used to generate the input multi-centre molecular wavefunctions ( z-axis perpendicular to the symmetry plane); the abelian subgroup for the Cs point group has been changed from C 1 to Cs; atomic basis functions including g-type GTOs can now be single-center-expanded. Restrictions: Depending on the molecular system under study and on the operating conditions the program may or may not fit into available RAM memory. In this case a feature of the program is to memory map a disk file in order to efficiently access the memory data through a disk device. The parallel GP-GPU implementation limits the number of CPU threads to the number of GPU cores present. Running time: The execution time strongly depends on the molecular target description and on the hardware/OS chosen, it is directly proportional to the ( r,θ,φ) grid size and to the number of angular basis functions used. Thus, from the program printout of the main arrays memory occupancy, the user can approximately derive the expected computer time needed for a given calculation executed in serial mode. For parallel executions the overall efficiency must be further taken into account, and this depends on the no. of processors used as well as on the parallel architecture chosen, so a simple general law is at present not determinable. References:[1] P.J. Hay, W.R. Wadt, J. Chem. Phys. 82 (1985) 270; W.R. Wadt, P.J. Hay, J. Chem. Phys. 284 (1985);P.J. Hay, W.R. Wadt, J. Chem. Phys. 299 (1985). [2] M.J. Frisch et al., Gaussian 03, revision C.02, Gaussian, Inc., Wallingford, CT, 2004.
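For readers unfamiliar with the method, the single center expansion underlying all of the above writes any one-electron quantity (orbital, density, potential) around a single origin, schematically
\[
F(\mathbf{r})=\sum_{l=0}^{l_{\max}}\sum_{m=-l}^{l} f_{lm}(r)\,Y_{lm}(\theta,\varphi),
\qquad
f_{lm}(r)=\int_0^{2\pi}\!d\varphi\int_0^{\pi}\!\sin\theta\,d\theta\;Y_{lm}^{*}(\theta,\varphi)\,F(r,\theta,\varphi),
\]
with the angular projections evaluated on the Gauss Legendre/Chebyschev quadrature mentioned in the solution method; in practice symmetry-adapted angular functions of the molecular point group are used in place of plain spherical harmonics, but the structure of the expansion is the one shown here.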
Lambda: A Mathematica package for operator product expansions in vertex algebras
NASA Astrophysics Data System (ADS)
Ekstrand, Joel
2011-02-01
We give an introduction to the Mathematica package Lambda, designed for calculating λ-brackets in both vertex algebras, and in SUSY vertex algebras. This is equivalent to calculating operator product expansions in two-dimensional conformal field theory. The syntax of λ-brackets is reviewed, and some simple examples are shown, both in component notation, and in N=1 superfield notation. Program summaryProgram title: Lambda Catalogue identifier: AEHF_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHF_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License No. of lines in distributed program, including test data, etc.: 18 087 No. of bytes in distributed program, including test data, etc.: 131 812 Distribution format: tar.gz Programming language: Mathematica Computer: See specifications for running Mathematica V7 or above. Operating system: See specifications for running Mathematica V7 or above. RAM: Varies greatly depending on calculation to be performed. Classification: 4.2, 5, 11.1. Nature of problem: Calculate operator product expansions (OPEs) of composite fields in 2d conformal field theory. Solution method: Implementation of the algebraic formulation of OPEs given by vertex algebras, and especially by λ-brackets. Running time: Varies greatly depending on calculation requested. The example notebook provided takes about 3 s to run.
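A standard example of the objects the package manipulates is the λ-bracket of the Virasoro field L with central charge c (conventions vary by an overall normalization):
\[
[L_{\lambda}L] \;=\; (\partial + 2\lambda)\,L \;+\; \frac{c}{12}\,\lambda^{3},
\]
which encodes the familiar TT operator product expansion T(z)T(w) \sim (c/2)/(z-w)^{4} + 2T(w)/(z-w)^{2} + \partial T(w)/(z-w), with the powers of λ playing the role of the singular powers of (z-w).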
Self-consistent RPA calculations with Skyrme-type interactions: The skyrme_rpa program
NASA Astrophysics Data System (ADS)
Colò, Gianluca; Cao, Ligang; Van Giai, Nguyen; Capelli, Luigi
2013-01-01
Random Phase Approximation (RPA) calculations are nowadays an indispensable tool in nuclear physics studies. We present here a complete version implemented with Skyrme-type interactions, with the spherical symmetry assumption, that can be used in cases where the effects of pairing correlations and of deformation can be ignored. The full self-consistency between the Hartree-Fock mean field and the RPA excitations is enforced, and it is numerically controlled by comparison with energy-weighted sum rules. The main limitations are that charge-exchange excitations and transitions involving spin operators are not included in this version. Program summaryProgram title: skyrme_rpa (v 1.00) Catalogue identifier: AENF_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENF_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5531 No. of bytes in distributed program, including test data, etc.: 39435 Distribution format: tar.gz Programming language: FORTRAN-90/95; easily downgradable to FORTRAN-77. Computer: PC with Intel Celeron, Intel Pentium, AMD Athlon and Intel Core Duo processors. Operating system: Linux, Windows. RAM: From 4 MBytes to 150 MBytes, depending on the size of the nucleus and of the model space for RPA. Word size: The code is written with a prevalent use of double precision or REAL(8) variables; this assures 15 significant digits. Classification: 17.24. Nature of problem: Systematic observations of excitation properties in finite nuclear systems can lead to improved knowledge of the nuclear matter equation of state as well as a better understanding of the effective interaction in the medium. This is the case of the nuclear giant resonances and low-lying collective excitations, which can be described as small amplitude collective motions in the framework of the Random Phase Approximation (RPA). This work provides a tool where one starts from an assumed form of nuclear effective interaction (the Skyrme forces) and builds the self-consistent Hartree-Fock mean field of a given nucleus, and then the RPA multipole excitations of that nucleus. Solution method: The Hartree-Fock (HF) equations are solved in a radial mesh, using a Numerov algorithm. The solutions are iterated until self-consistency is achieved (in practice, when the energy eigenvalues are stable within a desired accuracy). In the obtained mean field, unoccupied states necessary for the RPA calculations are found. For all single-particle states, box boundary conditions are assumed. To solve the RPA problem for a given value of total angular momentum and parity Jπ a coupled basis is constructed and the RPA matrix is diagonalized (protons and neutrons are treated explicitly, and no approximation related to the use of isospin formalism is introduced). The transition amplitudes and transition strengths associated to given external operators are calculated. The HF densities and RPA transition densities are also evaluated. Restrictions: The main restrictions are related to the assumed spherical symmetry and absence of pairing correlations. Running time: The typical running time depends strongly on the nucleus, on the multipolarity, on the choice of the model space and of course on the computer. It can vary from a few minutes to several hours.
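Schematically, the problem diagonalized for each J^π block is the standard RPA eigenvalue equation (the code's phase and coupling conventions may differ in detail):
\[
\begin{pmatrix} A & B \\ -B^{*} & -A^{*} \end{pmatrix}
\begin{pmatrix} X^{(n)} \\ Y^{(n)} \end{pmatrix}
=\hbar\omega_{n}
\begin{pmatrix} X^{(n)} \\ Y^{(n)} \end{pmatrix},
\qquad
A_{ph,p'h'}=(\epsilon_{p}-\epsilon_{h})\,\delta_{pp'}\delta_{hh'}+\langle ph'|V_{\mathrm{res}}|hp'\rangle,
\quad
B_{ph,p'h'}=\langle pp'|V_{\mathrm{res}}|hh'\rangle,
\]
where p (h) label unoccupied (occupied) Hartree-Fock states; self-consistency means that the same Skyrme functional generates both the single-particle energies ε and the residual interaction V_res, and it is this consistency that the energy-weighted sum rules monitor.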
NASA Astrophysics Data System (ADS)
McConnell, Sean; Fritzsche, Stephan; Surzhykov, Andrey
2010-03-01
During recent years, the DIRAC package has proved to be an efficient tool for studying the structural properties and dynamic behavior of hydrogen-like ions. Originally designed as a set of MAPLE procedures, this package provides interactive access to the wave and Green's functions in the non-relativistic and relativistic frameworks and supports analytical evaluation of a large number of radial integrals that are required for the construction of transition amplitudes and interaction cross sections. We provide here a new version of the DIRAC program which is developed within the framework of MATHEMATICA (version 6.0). This new version aims to cater to a wider community of researchers that use the MATHEMATICA platform and to take advantage of the generally faster processing times therein. Moreover, the addition of new procedures, a more convenient and detailed help system, as well as source code revisions to overcome identified shortcomings should ensure expanded use of the new DIRAC program over its predecessor. New version program summaryProgram title: DIRAC Catalogue identifier: ADUQ_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADUQ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 45 073 No. of bytes in distributed program, including test data, etc.: 285 828 Distribution format: tar.gz Programming language: Mathematica 6.0 or higher Computer: All computers with a license for the computer algebra package Mathematica (version 6.0 or higher) Operating system: Mathematica is O/S independent Classification: 2.1 Catalogue identifier of previous version: ADUQ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 165 (2005) 139 Does the new version supersede the previous version?: Yes Nature of problem: Since the early days of quantum mechanics, the "hydrogen atom" has served as one of the key models for studying the structure and dynamics of various quantum systems. Its analytic solutions are frequently used in case studies in atomic and molecular physics, quantum optics, plasma physics, or even in the field of quantum information and computation. Fast and reliable access to functions and properties of the hydrogenic systems are frequently required, in both the non-relativistic and relativistic frameworks. Despite all the knowledge about one-electron ions, providing such an access is not a simple task, owing to the rather complicated mathematical structure of the Schrödinger and especially Dirac equations. Moreover, for analyzing experimental results as well as for performing advanced theoretical studies one often needs (apart from the detailed information on atomic wave- and Green's functions) to be able to calculate a number of integrals involving these functions. Although for many types of transition operators these integrals can be evaluated analytically in terms of special mathematical functions, such an evaluation is usually rather involved and prone to mistakes. Solution method: A set of Mathematica procedures is developed which provides both the non-relativistic and relativistic solutions of the "Hydrogen atom model". It facilitates, moreover, the symbolic evaluation of integrals involved in the calculations of cross sections and transition amplitudes. 
These procedures are based on a large number of relations among special mathematical functions, information about their integral representations, recurrence formulae and series expansions. Based on this knowledge, the DIRAC tools provide a fast and reliable algebraic (and if necessary, numeric) manipulation of functions and properties of one-electron systems, thus helping to obtain further insight into the behavior of quantum physical systems. Reasons for new version: The original version of the DIRAC program was developed as a toolbox of Maple procedures and was submitted to the CPC library in 2004 (cf. Ref. [1]). Since then DIRAC has found its niche in advanced theoretical studies carried out in realm of heavy ion physics. With the help of this program detailed analysis has been performed, in particular, for the various excitation and ionization processes occurring in relativistic ion-atom collisions [2], the polarization of the characteristic X-ray radiation following radiative electron capture [3], the correlation properties of the two-photon emission from few-electron heavy ions [4], the spin entanglement phenomena in atomic photoionization [5] and even for exploring the vibrational excitations of the heavy nuclei [6]. Although these studies have conclusively proven the potential of the program, they have also illuminated routes for its further enhancement. Apart from certain source code revisions, demand has grown for a new version of DIRAC compatible with the Mathematica platform. The version presented here includes a wider ranging and more user friendly interactive help system, a number of new procedures and reprogramming for greater computational efficiency. Summary of revisions: The most important new capabilities of the DIRAC program since the previous version are: The utilization of the Mathematica (version 6.0) platform. The addition of a number of new procedures. Since the complete list of the new (and updated) procedures can be found in the interactive help library of the program, we mention here only the most important ones: DiracGlobal[] - Displays a list of the current global settings which specify the framework, nuclear charge and the units which are to be used by the DIRAC program. DiracRadialOrbitalMomentum[] - Returns a non-relativistic radial orbital in momentum space for both, the bound and free electron states. DiracSlaterRadial[] - Evaluates the radial Slater integral both, with the non-relativistic and relativistic wavefunctions. In the previous version of the program this procedure was restricted to the non-relativistic framework only. DiracGreensIntegralRadial[] - Evaluates the two-dimensional radial integrals with the wave- and Green's functions both in non-relativistic and relativistic frameworks. DiracAngularMatrixElement[] - Calculates the angular matrix elements for various irreducible tensor operators. The elimination of some redundant procedures. In particular, the previous version supported evaluation of the spherical Bessel functions, Wigner 3j symbols, Clebsch-Gordan coefficients and spherical harmonics functions. These tools are now superseded by in-built procedures of Mathematica. The development of a full featured interactive help system which follows the style of the Mathematica Help Pages. Extensive revision of the source code in order to correct a number of bugs and inconsistencies that have been identified during use of the previous version of Dirac. 
The DIRAC package is distributed as a compressed tar file from which the DIRAC root directory can be (re-)generated. The root directory contains the source code and help libraries, a "Readme" file, Dirac_Installation_Instructions, as well as the notebook DemonstrationNotebook.nb that includes a number of test cases to illustrate the use of the program. These test cases, which concern the theoretical analysis of wavefunctions and the fine-structure of hydrogen-like ions, has already been discussed in detail in Ref. [1] and are provided here in order to underline the continuity between the previous (Maple) and new (Mathematica) versions of the DIRAC program. Unusual features: Even though all basic features of the previous Maple version have been retained in as close to the original form as possible, some small syntax changes became necessary in the new version of DIRAC in order to follow Mathematica standards. First of all, these changes concern naming conventions for DIRAC's procedures. As was discussed in Ref. [1], previously rather long names were employed in which each word was separated by an underscore. For example, when running the Maple version of the program one had to call the procedure Dirac_Slater_radial() in order to evaluate the Slater integral. Such a naming convention however, cannot be used in the Mathematica framework where the underscore character is reserved to represent Blank, a built-in symbol. In the new version of DIRAC we therefore follow the Mathematica convention of delimiting each word in a procedure's name by capitalization. Evaluation of the Slater determinant can be accomplished now simply by entering DiracSlaterRadial[]. Besides procedure names, a new convention is introduced to represent fundamental physical constants. In this version of DIRAC the group of (preset) global variables has changed to resemble their conventional symbols, specifically α, a, e, m, c and ℏ, being the fine structure constant, Bohr radius, electron charge, electron mass, speed of light and the Planck constant respectively. If the numerical evaluator N is wrapped around any of these constants, their numerical values are returned. Running time: Although the program replies promptly upon most requests, the running time also depends on the particular task. For example, computation of (radial) matrix elements involving components of relativistic wavefunctions might require a few seconds of a runtime. A number of test calculations performed regarding this and other tasks clearly indicate that the new version of Dirac requires up to 90% less evaluation time compared to its predecessor. References:A. Surzhykov, P. Koval, S. Fritzsche, Comput. Phys. Comm. 165 (2005) 139. H. Ogawa, et al., Phys. Rev. A 75 (2007) 1. A.V. Maiorova, et al., J. Phys. B: At. Mol. Opt. Phys. 42 (2009) 125003. L. Borowska, A. Surzhykov, Th. Stöhlker, S. Fritzsche, Phys. Rev. A 74 (2006) 062516. T. Radtke, S. Fritzsche, A. Surzhykov, Phys. Rev. A 74 (2006) 032709. A. Pálffy, Z. Harman, A. Surzhykov, U.D. Jentschura, Phys. Rev. A 75 (2007) 012712.
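As a flavour of the closed-form results such a toolbox returns, the non-relativistic expectation value of r in a hydrogenic bound state |nl⟩ is
\[
\langle nl\,|\,r\,|\,nl\rangle=\frac{a_{0}}{2Z}\left[\,3n^{2}-l(l+1)\,\right],
\]
and analogous, considerably lengthier, relativistic expressions in terms of the Dirac quantum number κ are the kind of radial integrals that procedures such as DiracSlaterRadial[] evaluate symbolically. This particular formula is quoted only as a familiar reference point and is not taken from the DIRAC documentation.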
NASA Astrophysics Data System (ADS)
Radtke, T.; Fritzsche, S.
2008-11-01
Entanglement is known today as a key resource in many protocols from quantum computation and quantum information theory. However, despite the successful demonstration of several protocols, such as teleportation or quantum key distribution, there are still many open questions of how entanglement affects the efficiency of quantum algorithms or how it can be protected against noisy environments. The investigation of these and related questions often requires a search or optimization over the set of quantum states and, hence, a parametrization of them and various other objects. To facilitate this kind of studies in quantum information theory, here we present an extension of the FEYNMAN program that was developed during recent years as a toolbox for the simulation and analysis of quantum registers. In particular, we implement parameterizations of hermitian and unitary matrices (of arbitrary order), pure and mixed quantum states as well as separable states. In addition to being a prerequisite for the study of many optimization problems, these parameterizations also provide the necessary basis for heuristic studies which make use of random states, unitary matrices and other objects. Program summaryProgram title: FEYNMAN Catalogue identifier: ADWE_v4_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADWE_v4_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 24 231 No. of bytes in distributed program, including test data, etc.: 1 416 085 Distribution format: tar.gz Programming language: Maple 11 Computer: Any computer with Maple software installed Operating system: Any system that supports Maple; program has been tested under Microsoft Windows XP, Linux Classification: 4.15 Does the new version supersede the previous version?: Yes Nature of problem: During the last decades, quantum information science has contributed to our understanding of quantum mechanics and has provided also new and efficient protocols, based on the use of entangled quantum states. To determine the behavior and entanglement of n-qubit quantum registers, symbolic and numerical simulations need to be applied in order to analyze how these quantum information protocols work and which role the entanglement plays hereby. Solution method: Using the computer algebra system Maple, we have developed a set of procedures that support the definition, manipulation and analysis of n-qubit quantum registers. These procedures also help to deal with (unitary) logic gates and (nonunitary) quantum operations that act upon the quantum registers. With the parameterization of various frequently-applied objects, that are implemented in the present version, the program now facilitates a wider range of symbolic and numerical studies. All commands can be used interactively in order to simulate and analyze the evolution of n-qubit quantum systems, both in ideal and noisy quantum circuits. Reasons for new version: In the first version of the FEYNMAN program [1], we implemented the data structures and tools that are necessary to create, manipulate and to analyze the state of quantum registers. Later [2,3], support was added to deal with quantum operations (noisy channels) as an ingredient which is essential for studying the effects of decoherence. 
With the present extension, we add a number of parametrizations of objects frequently utilized in decoherence and entanglement studies, such as Hermitian and unitary matrices, probability distributions, or various kinds of quantum states. This extension therefore provides the basis, for example, for the optimization of a given function over the set of pure states or the simple generation of random objects. Running time: Most commands that act upon quantum registers with five or fewer qubits take ⩽10 seconds of processor time on a Pentium 4 processor (⩾2 GHz) or newer, and about 5-20 MB of working memory (in addition to the memory for the Maple environment). Especially when working with symbolic expressions, however, the requirements on CPU time and memory critically depend on the size of the quantum registers, owing to the exponential growth of the dimension of the associated Hilbert space. For example, complex (symbolic) noise models, i.e. with several symbolic Kraus operators, often result, for multi-qubit systems, in very large expressions that dramatically slow down the evaluation of e.g. distance measures or the final-state entropy. In these cases, Maple's assume facility sometimes helps to reduce the complexity of the symbolic expressions, but more often only a numerical evaluation is eventually possible. Since the complexity of the various commands of the FEYNMAN program and the possible usage scenarios can be very different, no general scaling law for CPU time or the memory requirements can be given. References: [1] T. Radtke, S. Fritzsche, Comput. Phys. Comm. 173 (2005) 91. [2] T. Radtke, S. Fritzsche, Comput. Phys. Comm. 175 (2006) 145. [3] T. Radtke, S. Fritzsche, Comput. Phys. Comm. 176 (2007) 617.
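As a minimal illustration of the random objects that such parametrizations make accessible, the following Python/NumPy sketch (FEYNMAN itself is a Maple package; the function names and normalizations below are illustrative assumptions, not part of the program) generates a Haar-distributed unitary matrix and a random pure state:

    import numpy as np

    rng = np.random.default_rng()

    def random_unitary(n):
        """Haar-distributed n x n unitary from the QR decomposition of a complex Gaussian matrix."""
        z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2.0)
        q, r = np.linalg.qr(z)
        phases = np.diag(r) / np.abs(np.diag(r))  # fix the column phases so the measure is exactly Haar
        return q * phases

    def random_pure_state(n_qubits):
        """Random pure state of n_qubits qubits: a normalized complex Gaussian vector."""
        dim = 2 ** n_qubits
        psi = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
        return psi / np.linalg.norm(psi)

    U = random_unitary(4)
    print("deviation from unitarity:", np.linalg.norm(U.conj().T @ U - np.eye(4)))
    print("norm of a random 2-qubit state:", np.linalg.norm(random_pure_state(2)))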
The grasp2K relativistic atomic structure package
NASA Astrophysics Data System (ADS)
Jönsson, P.; He, X.; Froese Fischer, C.; Grant, I. P.
2007-10-01
This paper describes grasp2K, a general-purpose relativistic atomic structure package. It is a modification and extension of the GRASP92 package by [F.A. Parpia, C. Froese Fischer, I.P. Grant, Comput. Phys. Comm. 94 (1996) 249]. For the sake of continuity, two versions are included. Version 1 retains the GRASP92 formats for wave functions and expansion coefficients, but no longer requires preprocessing and more default options have been introduced. Modifications have eliminated some errors, improved the stability, and simplified interactive use. The transition code has been extended to cases where the initial and final states have different orbital sets. Several utility programs have been added. Whereas Version 1 constructs a single interaction matrix for all the J's and parities, Version 2 treats each J and parity as a separate matrix. This block structure results in a reduction of memory use and considerably shorter eigenvectors. Additional tools have been developed for this format. The CPU intensive parts of Version 2 have been parallelized using MPI. The package includes a "make" facility that relies on environment variables. These make it easier to port the application to different platforms. The present version supports the 32-bit Linux and ibmSP environments where the former is compatible with many Unix systems. Descriptions of the features and the program/data flow of the package will be given in some detail in this report. Program summaryProgram title: grasp2K Catalogue identifier: ADZL_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZL_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 213 524 No. of bytes in distributed program, including test data, etc.: 1 328 588 Distribution format: tar.gz Programming language: Fortran and C Computer: Intel Xeon, 3.06 GHz Operating system: Suse LINUX RAM: 500 MB or more Classification: 2.1 Nature of problem: Prediction of atomic spectra—atomic energy levels, oscillator strengths, and radiative decay rates—using a 'fully relativistic' approach. Solution method: Atomic orbitals are assumed to be four-component spinor eigenstates of the angular momentum operator, j=l+s, and the parity operator Π=βπ. Configuration state functions (CSFs) are linear combinations of Slater determinants of atomic orbitals, and are simultaneous eigenfunctions of the atomic electronic angular momentum operator, J, and the atomic parity operator, P. Approximate atomic state functions (ASFs) are linear combinations of CSFs. A variational functional may be constructed by combining expressions for the energies of one or more ASFs. Average energy level (EAL) functionals are weighted sums of energies of all possible ASFs that may be constructed from a set of CSFs; the number of ASFs is then the same as the number of CSFs. Extended optimal level (EOL) functionals are weighted sums of energies of some subset of ASFs. Radial functions may be determined by numerically solving the multiconfiguration Dirac-Hartree-Fock (MCDHF) equations that define an extremum of the variational functional by the self-consistent-field (SCF) method. Lists of CSFs are generated from a set of reference CSFs and rules for deriving other CSFs from these. Expansion coefficients are obtained using sparse-matrix methods for solving the relativistic configuration interaction (CI) problem. 
Transition properties for pairs of ASFs are computed from matrix elements of multipole operators of the electromagnetic field. Biorthogonal transformation methods are employed so that all matrix elements between CSFs can be evaluated using Racah algebra. Restrictions: The maximum number of radial orbitals is limited to 120 by the packing algorithm used for 32-bit integers. The maximum size of a multiconfiguration (MC) calculation, as measured by the length of the configuration state function (CSF) list, is limited by numerical stability, processing time, or storage which may be either in memory or on disk. Numerical stability is the same as GRASP92 [F.A. Parpia, C. Froese Fischer, I.P. Grant, Comput. Phys. Comm. 94 (1996) 249] with a slight improvement in memory management for Version 2 codes. Sufficient disk space is needed to store angular data. In configuration interaction calculations the matrix may be either in memory or on disk. The tables of coefficients of fractional parentage, as in GRASP92, are limited to subshells with j⩽7/2; occupied subshells with j=9/2 are, therefore, restricted to a maximum of two electrons. Unusual features: The installation process has been simplified so that pre-processing of the raw code needed with GRASP92 can be eliminated. Dynamic memory allocation reduces the number of parameters needed to define fixed array dimensions to nine. The corrections discussed in [C. Froese Fischer, G. Gaigalas, Y. Ralchenko, Comput. Phys. Comm. 175 (2006) 739] have also been implemented. Environment variables are used to facilitate the compilation of the libraries, applications, and tools with different compilers on different platforms. Computationally intensive applications have been parallelized using the message passing interface (MPI). When standard output is redirected, prompts and critical information about the progress of a calculation or convergence are still directed to the screen through the standard error output unit. Running time: CPU time required to execute test cases: 5 min ( n=4 calculation with 2190 CSFs) and 52.7 minutes ( n=5 calculation with 6752 CSFs)
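Schematically, and in the generic notation of the MCDHF literature rather than grasp2K's own documentation, an atomic state function and the extended-optimal-level functional mentioned above can be written as

    |\Psi_\alpha(\gamma P J)\rangle = \sum_{i=1}^{n_c} c_{i\alpha} \, |\Phi(\gamma_i P J)\rangle , \qquad E_{\mathrm{EOL}} = \frac{\sum_\alpha w_\alpha \, \langle \Psi_\alpha | H_{\mathrm{DC}} | \Psi_\alpha \rangle}{\sum_\alpha w_\alpha} ,

where the Φ are CSFs, H_DC is the Dirac-Coulomb Hamiltonian, and the weights w_α select the targeted subset of ASFs (in the EAL case all ASFs constructible from the CSF set are included).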
SARAH 3.2: Dirac gauginos, UFO output, and more
NASA Astrophysics Data System (ADS)
Staub, Florian
2013-07-01
SARAH is a Mathematica package optimized for the fast, efficient and precise study of supersymmetric models beyond the MSSM: a new model can be defined in a short form and all vertices are derived. This allows SARAH to create model files for FeynArts/FormCalc, CalcHep/CompHep and WHIZARD/O'Mega. The newest version of SARAH now provides the possibility to create model files in the UFO format which is supported by MadGraph 5, MadAnalysis 5, GoSam, and soon by Herwig++. Furthermore, SARAH also calculates the mass matrices, RGEs and 1-loop corrections to the mass spectrum. This information is used to write source code for SPheno in order to create a precision spectrum generator for the given model. This spectrum-generator-generator functionality as well as the output of WHIZARD and CalcHep model files has seen further improvement in this version. Also models including Dirac gauginos are supported with the new version of SARAH, and additional checks for the consistency of the implementation of new models have been created. Program summaryProgram title: SARAH Catalogue identifier: AEIB_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIB_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 322 411 No. of bytes in distributed program, including test data, etc.: 3 629 206 Distribution format: tar.gz Programming language: Mathematica. Computer: All for which Mathematica is available. Operating system: All for which Mathematica is available. Classification: 11.1, 11.6. Catalogue identifier of previous version: AEIB_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 808 Does the new version supersede the previous version?: Yes, the new version includes all known features of the previous version but also provides the new features mentioned below. Nature of problem: To use Madgraph for new models it is necessary to provide the corresponding model files which include all information about the interactions of the model. However, the derivation of the vertices for a given model and putting those into model files which can be used with Madgraph is usually very time consuming. Dirac gauginos are not present in the minimal supersymmetric standard model (MSSM) or many extensions of it. Dirac mass terms for vector superfields lead to new structures in the supersymmetric (SUSY) Lagrangian (a bilinear mass term between gaugino and matter fermion as well as new D-terms) and also modify the SUSY renormalization group equations (RGEs). The Dirac character of gauginos can change the collider phenomenology. In addition, they come with an extended Higgs sector for which a precise calculation of the 1-loop masses has not been available so far. Solution method: SARAH calculates the complete Lagrangian for a given model whose gauge sector can be any direct product of SU(N) gauge groups. The chiral superfields can transform as any irreducible representation with respect to these gauge groups and it is possible to handle an arbitrary number of symmetry breakings or particle rotations. Also the gauge fixing is automatically added. Using this information, SARAH derives all vertices for a model. These vertices can be exported to model files in the UFO format, which is supported by Madgraph and other codes like GoSam, MadAnalysis or ALOHA. The user can also study models with Dirac gauginos. 
In that case SARAH includes all possible terms in the Lagrangian stemming from the new structures and can also calculate the RGEs. The entire impact of these terms is then taken into account in the output of SARAH to UFO, CalcHep, WHIZARD, FeynArts and SPheno. Reasons for new version: SARAH provides, with this version, the possibility of creating model files in the UFO format. The UFO format is supposed to become a standard format for model files which should be supported by many different tools in the future. Also, models with Dirac gauginos were not supported in earlier versions. Summary of revisions: Support of models with Dirac gauginos. Output of model files in the UFO format, speed improvement in the output of WHIZARD model files, CalcHep output supports the internal diagonalization of mass matrices, output of control files for the LHPC spectrum plotter, support of the generalized PDG numbering scheme PDG.IX, improvement of the calculation of the decay widths and branching ratios with SPheno, calculations of new low-energy observables added to the SPheno output, and significantly simplified handling of gauge-fixing terms. Restrictions: SARAH can only derive the Lagrangian in an automated way for N=1 SUSY models, but not for those with more SUSY generators. Furthermore, SARAH supports only renormalizable operators in the output of model files in the UFO format and also for CalcHep, FeynArts and WHIZARD. Also, color sextets are not yet included in the model files for Monte Carlo tools. Dimension-5 operators are only supported in the calculation of the RGEs and mass matrices. Unusual features: SARAH does not need the Lagrangian of a model as input to calculate the vertices. The gauge structure, particle content and superpotential, as well as rotations stemming from gauge symmetry breaking, are sufficient. All further information is derived by SARAH on its own. Therefore, the model files are very short and the implementation of new models is fast and easy. In addition, the implementation of a model can be checked for physical and formal consistency, and SARAH can generate Fortran code for a full 1-loop analysis of the mass spectrum in the context of Dirac gauginos. Running time: Measured CPU time for the evaluation of the MSSM using a Lenovo Thinkpad X220 with an i7 processor (2.53 GHz). Calculating the complete Lagrangian: 9 s. Calculating all vertices: 51 s. Output of the UFO model files: 49 s.
An object oriented code for simulating supersymmetric Yang-Mills theories
NASA Astrophysics Data System (ADS)
Catterall, Simon; Joseph, Anosh
2012-06-01
We present SUSY_LATTICE - a C++ program that can be used to simulate certain classes of supersymmetric Yang-Mills (SYM) theories, including the well known N=4 SYM in four dimensions, on a flat Euclidean space-time lattice. Discretization of SYM theories is an old problem in lattice field theory. It has resisted solution until recently when new ideas drawn from orbifold constructions and topological field theories have been brought to bear on the question. The result has been the creation of a new class of lattice gauge theories in which the lattice action is invariant under one or more supersymmetries. The resultant theories are local, free of doublers and also possess exact gauge-invariance. In principle they form the basis for a truly non-perturbative definition of the continuum SYM theories. In the continuum limit they reproduce versions of the SYM theories formulated in terms of twisted fields, which on a flat space-time is just a change of the field variables. In this paper, we briefly review these ideas and then go on to provide the details of the C++ code. We sketch the design of the code, with particular emphasis being placed on SYM theories with N=(2,2) in two dimensions and N=4 in three and four dimensions, making one-to-one comparisons between the essential components of the SYM theories and their corresponding counterparts appearing in the simulation code. The code may be used to compute several quantities associated with the SYM theories such as the Polyakov loop, mean energy, and the width of the scalar eigenvalue distributions. Program summaryProgram title: SUSY_LATTICE Catalogue identifier: AELS_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELS_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 9315 No. of bytes in distributed program, including test data, etc.: 95 371 Distribution format: tar.gz Programming language: C++ Computer: PCs and Workstations Operating system: Any, tested on Linux machines Classification:: 11.6 Nature of problem: To compute some of the observables of supersymmetric Yang-Mills theories such as supersymmetric action, Polyakov/Wilson loops, scalar eigenvalues and Pfaffian phases. Solution method: We use the Rational Hybrid Monte Carlo algorithm followed by a Leapfrog evolution and a Metropolis test. The input parameters of the model are read in from a parameter file. Restrictions: This code applies only to supersymmetric gauge theories with extended supersymmetry, which undergo the process of maximal twisting. (See Section 2 of the manuscript for details.) Running time: From a few minutes to several hours depending on the amount of statistics needed.
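To make the solution method concrete, the sketch below shows the leapfrog evolution and Metropolis test that underlie (R)HMC, applied to a toy Gaussian action in Python; the actual SUSY_LATTICE code is C++ and additionally handles pseudofermions and the rational approximation needed for the Pfaffian, which are omitted here.

    import numpy as np

    rng = np.random.default_rng()

    def action(phi):
        return 0.5 * np.sum(phi ** 2)           # toy Gaussian action, stands in for the lattice action

    def force(phi):
        return -phi                              # -dS/dphi for the toy action

    def leapfrog(phi, p, n_steps, dt):
        p = p + 0.5 * dt * force(phi)            # initial half-step for the momenta
        for _ in range(n_steps - 1):
            phi = phi + dt * p
            p = p + dt * force(phi)
        phi = phi + dt * p
        p = p + 0.5 * dt * force(phi)            # final half-step
        return phi, p

    def hmc_update(phi, n_steps=20, dt=0.1):
        p = rng.standard_normal(phi.shape)       # refresh momenta
        h_old = action(phi) + 0.5 * np.sum(p ** 2)
        phi_new, p_new = leapfrog(phi, p, n_steps, dt)
        h_new = action(phi_new) + 0.5 * np.sum(p_new ** 2)
        if rng.random() < np.exp(min(0.0, h_old - h_new)):   # Metropolis test
            return phi_new, True
        return phi, False

    phi, n_acc = np.zeros(16), 0
    for _ in range(200):
        phi, accepted = hmc_update(phi)
        n_acc += accepted
    print("acceptance rate:", n_acc / 200)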
Code OK3 - An upgraded version of OK2 with beam wobbling function
NASA Astrophysics Data System (ADS)
Ogoyski, A. I.; Kawata, S.; Popov, P. H.
2010-07-01
For computer simulations of heavy ion beam (HIB) irradiation onto a target with an arbitrary shape and structure in heavy ion fusion (HIF), the code OK2 was developed and presented in Computer Physics Communications 161 (2004). Code OK3 is an upgrade of OK2 that adds an important capability: wobbling beam illumination. The wobbling beam offers a unique route to a smooth mechanism of inertial fusion target implosion, so that sufficient fusion energy can be released for a future fusion reactor. New version program summaryProgram title: OK3 Catalogue identifier: ADST_v3_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADST_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 221 517 No. of bytes in distributed program, including test data, etc.: 2 471 015 Distribution format: tar.gz Programming language: C++ Computer: PC (Pentium 4, 1 GHz or more recommended) Operating system: Windows or UNIX RAM: 2048 MBytes Classification: 19.7 Catalogue identifier of previous version: ADST_v2_0 Journal reference of previous version: Comput. Phys. Comm. 161 (2004) 143 Does the new version supersede the previous version?: Yes Nature of problem: In heavy ion fusion (HIF), ion cancer therapy, material processing, etc., precise beam energy deposition is essential [1]. Codes OK1 and OK2 have been developed to simulate the heavy ion beam energy deposition in three-dimensional, arbitrarily shaped targets [2, 3]. Wobbling beam illumination is important to smooth the beam energy deposition nonuniformity in HIF, so that a uniform target implosion is realized and a sufficient fusion output energy is released. Solution method: The OK3 code builds on OK1 and OK2 [2, 3]. The code simulates a multi-beam illumination on a target with arbitrary shape and structure, including the beam wobbling function. Reasons for new version: The code OK3 is based on OK2 [3] and uses the same algorithm with some improvements, the most important being the beam wobbling function. Summary of revisions: In the code OK3, beams are subdivided into many bunches. The displacement of each bunch center from the initial beam direction is calculated. Code OK3 allows the beamlet number to vary from bunch to bunch. This reduces the calculation error, especially in the case of very complicated mesh structures with large internal holes. The target temperature rises during the time of energy deposition. Some procedures are improved to perform faster. Energy conservation is checked at each step of the calculation process and corrected if necessary. New procedures included in OK3: Procedure BeamCenterRot( ) rotates the beam axis around the impinging direction of each beam. Procedure BeamletRot( ) rotates the beamlet axes that belong to each beam. Procedure Rotation( ) sets the coordinates of rotated beams and beamlets in the chamber and pellet systems. Procedure BeamletOut( ) calculates the lost energy of ions that have not impinged on the target. Procedure TargetT( ) sets the temperature of the target layer of energy deposition during the irradiation process. Procedure ECL( ) checks the energy conservation law at each step of the energy deposition process. Procedure ECLt( ) performs the final check of the energy conservation law at the end of the deposition process.
Modified procedures in OK3 Procedure InitBeam( ): This procedure initializes the beam radius and coefficients A1, A2, A3, A4 and A5 for Gauss distributed beams [2]. It is enlarged in OK3 and can set beams with radii from 1 to 20 mm. Procedure kBunch( ) is modified to allow beamlet number variation from bunch to bunch during the deposition. Procedure ijkSp( ) and procedure Hole( ) are modified to perform faster. Procedure Espl( ) and procedure ChechE( ) are modified to increase the calculation accuracy. Procedure SD( ) calculates the total relative root-mean-square (RMS) deviation and the total relative peak-to-valley (PTV) deviation in energy deposition non-uniformity. This procedure is not included in code OK2 because of its limited applications (for spherical targets only). It is taken from code OK1 and modified to perform with code OK3. Running time: The execution time depends on the pellet mesh number and the number of beams in the simulated illumination as well as on the beam characteristics (beam radius on the pellet surface, beam subdivision, projectile particle energy and so on). In almost all of the practical running tests performed, the typical running time for one beam deposition is about 30 s on a PC with a CPU of Pentium 4, 2.4 GHz. References:A.I. Ogoyski, et al., Heavy ion beam irradiation non-uniformity in inertial fusion, Phys. Lett. A 315 (2003) 372-377. A.I. Ogoyski, et al., Code OK1 - Simulation of multi-beam irradiation on a spherical target in heavy ion fusion, Comput. Phys. Comm. 157 (2004) 160-172. A.I. Ogoyski, et al., Code OK2 - A simulation code of ion-beam illumination on an arbitrary shape and structure target, Comput. Phys. Comm. 161 (2004) 143-150.
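As a hedged illustration (not the SD( ) procedure itself), the two non-uniformity measures named above can be computed from a hypothetical array of deposited energies per surface cell as follows; the normalization of the peak-to-valley measure by the mean is an assumption of this sketch.

    import numpy as np

    def rms_nonuniformity(e):
        """Relative root-mean-square deviation of the deposited energy."""
        mean = np.mean(e)
        return np.sqrt(np.mean((e - mean) ** 2)) / mean

    def ptv_nonuniformity(e):
        """Relative peak-to-valley deviation (normalized here to the mean; OK3's convention may differ)."""
        return (np.max(e) - np.min(e)) / np.mean(e)

    # hypothetical deposition pattern over a few surface cells of a spherical pellet
    e = np.array([1.00, 0.97, 1.04, 0.99, 1.02, 0.98])
    print("RMS deviation:", rms_nonuniformity(e))
    print("PTV deviation:", ptv_nonuniformity(e))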
NASA Astrophysics Data System (ADS)
Schunck, N.; Dobaczewski, J.; McDonnell, J.; Satuła, W.; Sheikh, J. A.; Staszczak, A.; Stoitsov, M.; Toivanen, P.
2012-01-01
We describe the new version (v2.49t) of the code HFODD which solves the nuclear Skyrme-Hartree-Fock (HF) or Skyrme-Hartree-Fock-Bogolyubov (HFB) problem by using the Cartesian deformed harmonic-oscillator basis. In the new version, we have implemented the following physics features: (i) the isospin mixing and projection, (ii) the finite-temperature formalism for the HFB and HF + BCS methods, (iii) the Lipkin translational energy correction method, (iv) the calculation of the shell correction. A number of specific numerical methods have also been implemented in order to deal with large-scale multi-constraint calculations and hardware limitations: (i) the two-basis method for the HFB method, (ii) the Augmented Lagrangian Method (ALM) for multi-constraint calculations, (iii) the linear constraint method based on the approximation of the RPA matrix for multi-constraint calculations, (iv) an interface with the axial and parity-conserving Skyrme-HFB code HFBTHO, (v) the mixing of the HF or HFB matrix elements instead of the HF fields. Special attention has been paid to using the code on massively parallel leadership-class computers. For this purpose, the following features are now available with this version: (i) the Message Passing Interface (MPI) framework, (ii) scalable input data routines, (iii) multi-threading via OpenMP pragmas, (iv) parallel diagonalization of the HFB matrix in the simplex-breaking case using the ScaLAPACK library. Finally, several minor errors of the previously published version were corrected. New version program summaryProgram title: HFODD (v2.49t) Catalogue identifier: ADFL_v3_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADFL_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence v3 No. of lines in distributed program, including test data, etc.: 190 614 No. of bytes in distributed program, including test data, etc.: 985 898 Distribution format: tar.gz Programming language: FORTRAN-90 Computer: Intel Pentium-III, Intel Xeon, AMD-Athlon, AMD-Opteron, Cray XT4, Cray XT5 Operating system: UNIX, LINUX, Windows XP Has the code been vectorized or parallelized?: Yes, parallelized using MPI RAM: 10 Mwords Word size: The code is written in single precision for use on a 64-bit processor. The compiler option -r8 or +autodblpad (or equivalent) has to be used to promote all real and complex single-precision floating-point items to double precision when the code is used on a 32-bit machine. Classification: 17.22 Catalogue identifier of previous version: ADFL_v2_2 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2361 External routines: The user must have access to the NAGLIB subroutine f02axe, or the LAPACK subroutines zhpev, zhpevx, zheevr, or zheevd, which diagonalize complex Hermitian matrices, the LAPACK subroutines dgetri and dgetrf, which invert arbitrary real matrices, the LAPACK subroutines dsyevd, dsytrf and dsytri, which compute eigenvalues and eigenfunctions of real symmetric matrices, the LINPACK subroutines zgedi and zgeco, which invert arbitrary complex matrices and calculate determinants, the BLAS routines dcopy, dscal, dgemm and dgemv for double-precision linear algebra and zcopy, zdscal, zgemm and zgemv for complex linear algebra, or provide another set of subroutines that can perform such tasks. The BLAS and LAPACK subroutines can be obtained from the Netlib Repository at the University of Tennessee, Knoxville: http://netlib2.cs.utk.edu/. 
Does the new version supersede the previous version?: Yes Nature of problem: The nuclear mean field and an analysis of its symmetries in realistic cases are the main ingredients of a description of nuclear states. Within the Local Density Approximation, or for a zero-range velocity-dependent Skyrme interaction, the nuclear mean field is local and velocity dependent. The locality allows for an effective and fast solution of the self-consistent Hartree-Fock equations, even for heavy nuclei, and for various nucleonic ( n-particle- n-hole) configurations, deformations, excitation energies, or angular momenta. Similarly, Local Density Approximation in the particle-particle channel, which is equivalent to using a zero-range interaction, allows for a simple implementation of pairing effects within the Hartree-Fock-Bogolyubov method. Solution method: The program uses the Cartesian harmonic oscillator basis to expand single-particle or single-quasiparticle wave functions of neutrons and protons interacting by means of the Skyrme effective interaction and zero-range pairing interaction. The expansion coefficients are determined by the iterative diagonalization of the mean-field Hamiltonians or Routhians which depend non-linearly on the local neutron and proton densities. Suitable constraints are used to obtain states corresponding to a given configuration, deformation or angular momentum. The method of solution has been presented in: [J. Dobaczewski, J. Dudek, Comput. Phys. Commun. 102 (1997) 166]. Reasons for new version: Version 2.49s of HFODD provides a number of new options such as the isospin mixing and projection of the Skyrme functional, the finite-temperature HF and HFB formalism and optimized methods to perform multi-constrained calculations. It is also the first version of HFODD to contain threading and parallel capabilities. Summary of revisions: Isospin mixing and projection of the HF states has been implemented. The finite-temperature formalism for the HFB equations has been implemented. The Lipkin translational energy correction method has been implemented. Calculation of the shell correction has been implemented. The two-basis method for the solution to the HFB equations has been implemented. The Augmented Lagrangian Method (ALM) for calculations with multiple constraints has been implemented. The linear constraint method based on the cranking approximation of the RPA matrix has been implemented. An interface between HFODD and the axially-symmetric and parity-conserving code HFBTHO has been implemented. The mixing of the matrix elements of the HF or HFB matrix has been implemented. A parallel interface using the MPI library has been implemented. A scalable model for reading input data has been implemented. OpenMP pragmas have been implemented in three subroutines. The diagonalization of the HFB matrix in the simplex-breaking case has been parallelized using the ScaLAPACK library. Several little significant errors of the previous published version were corrected. Running time: In serial mode, running 6 HFB iterations for 152Dy for conserved parity and signature symmetries in a full spherical basis of N=14 shells takes approximately 8 min on an AMD Opteron processor at 2.6 GHz, assuming standard BLAS and LAPACK libraries. As a rule of thumb, runtime for HFB calculations for parity and signature conserved symmetries roughly increases as N, where N is the number of full HO shells. 
Using custom-built optimized BLAS and LAPACK libraries (such as in the ATLAS implementation) can bring down the execution time by 60%. Using the threaded version of the code with 12 threads and threaded BLAS libraries can bring an additional factor 2 speed-up, so that the same 6 HFB iterations now take of the order of 2 min 30 s.
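The overall iterative scheme can be illustrated by a generic self-consistent-field loop; the Python sketch below uses a toy one-body Hamiltonian h(rho) = t + g*rho with linear density mixing and is only meant to mirror the structure of the method (HFODD itself is FORTRAN-90 and works with Skyrme functionals in the Cartesian HO basis).

    import numpy as np

    def scf(t, g, n_particles, mixing=0.5, tol=1e-10, max_iter=200):
        """Iterate h(rho) = t + g*rho to self-consistency with linear mixing of the density matrix."""
        dim = t.shape[0]
        rho = np.eye(dim) * (n_particles / dim)              # crude starting density
        for iteration in range(max_iter):
            h = t + g * rho                                   # mean-field Hamiltonian
            eps, phi = np.linalg.eigh(h)                      # diagonalization step
            occ = phi[:, :n_particles]                        # occupy the lowest orbitals
            rho_new = occ @ occ.T
            if np.linalg.norm(rho_new - rho) < tol:
                return eps, rho_new, iteration
            rho = mixing * rho_new + (1.0 - mixing) * rho     # mix densities (or matrix elements)
        return eps, rho, max_iter

    rng = np.random.default_rng(0)
    t = rng.standard_normal((6, 6))
    t = 0.5 * (t + t.T)                                       # symmetric one-body term
    eps, rho, n_iter = scf(t, g=0.3, n_particles=2)
    print("converged after", n_iter, "iterations; lowest eigenvalues:", eps[:2])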
MNPBEM - A Matlab toolbox for the simulation of plasmonic nanoparticles
NASA Astrophysics Data System (ADS)
Hohenester, Ulrich; Trügler, Andreas
2012-02-01
MNPBEM is a Matlab toolbox for the simulation of metallic nanoparticles (MNP), using a boundary element method (BEM) approach. The main purpose of the toolbox is to solve Maxwell's equations for a dielectric environment where bodies with homogeneous and isotropic dielectric functions are separated by abrupt interfaces. Although the approach is in principle suited for arbitrary body sizes and photon energies, it is tested (and probably works best) for metallic nanoparticles with sizes ranging from a few to a few hundreds of nanometers, and for frequencies in the optical and near-infrared regime. The toolbox has been implemented with Matlab classes. These classes can be easily combined, which has the advantage that one can adapt the simulation programs flexibly for various applications. Program summaryProgram title: MNPBEM Catalogue identifier: AEKJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License v2 No. of lines in distributed program, including test data, etc.: 15 700 No. of bytes in distributed program, including test data, etc.: 891 417 Distribution format: tar.gz Programming language: Matlab 7.11.0 (R2010b) Computer: Any which supports Matlab 7.11.0 (R2010b) Operating system: Any which supports Matlab 7.11.0 (R2010b) RAM: ⩾1 GByte Classification: 18 Nature of problem: Solve Maxwell's equations for dielectric particles with homogeneous dielectric functions separated by abrupt interfaces. Solution method: Boundary element method using electromagnetic potentials. Running time: Depending on surface discretization between seconds and hours.
Browndye: A software package for Brownian dynamics
NASA Astrophysics Data System (ADS)
Huber, Gary A.; McCammon, J. Andrew
2010-11-01
A new software package, Browndye, is presented for simulating the diffusional encounter of two large biological molecules. It can be used to estimate second-order rate constants and encounter probabilities, and to explore reaction trajectories. Browndye builds upon previous knowledge and algorithms from software packages such as UHBD, SDA, and Macrodox, while implementing algorithms that scale to larger systems. Program summaryProgram title: Browndye Catalogue identifier: AEGT_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: MIT license, included in distribution No. of lines in distributed program, including test data, etc.: 143 618 No. of bytes in distributed program, including test data, etc.: 1 067 861 Distribution format: tar.gz Programming language: C++, OCaml ( http://caml.inria.fr/) Computer: PC, Workstation, Cluster Operating system: Linux Has the code been vectorised or parallelized?: Yes. Runs on multiple processors with shared memory using pthreads RAM: Depends linearly on size of physical system Classification: 3 External routines: uses the output of APBS [1] ( http://www.poissonboltzmann.org/apbs/) as input. APBS must be obtained and installed separately. Expat 2.0.1, CLAPACK, ocaml-expat, Mersenne Twister. These are included in the Browndye distribution. Nature of problem: Exploration and determination of rate constants of bimolecular interactions involving large biological molecules. Solution method: Brownian dynamics with electrostatic, excluded volume, van der Waals, and desolvation forces. Running time: Depends linearly on size of physical system and quadratically on precision of results. The included example executes in a few minutes.
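For orientation, a single free-draining Brownian-dynamics displacement step of the kind such packages take can be sketched as below; the unit system and the constant diffusion coefficient are assumptions of this illustration, and Browndye itself (C++/OCaml) adds hydrodynamic, excluded-volume and reaction-criterion machinery that is not shown.

    import numpy as np

    rng = np.random.default_rng()
    KT = 0.593            # thermal energy in kcal/mol near 298 K (assumed unit system)

    def bd_step(x, force, diffusion, dt):
        """One drift-plus-diffusion displacement; x and force are (N, 3) arrays."""
        drift = diffusion * force * dt / KT
        noise = np.sqrt(2.0 * diffusion * dt) * rng.standard_normal(x.shape)
        return x + drift + noise

    x = np.zeros((2, 3))
    f = np.array([[0.0, 0.0, -1.0], [0.0, 0.0, 1.0]])         # hypothetical forces on two particles
    print(bd_step(x, f, diffusion=0.02, dt=1.0))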
Calculation of four-particle harmonic-oscillator transformation brackets
NASA Astrophysics Data System (ADS)
Germanas, D.; Kalinauskas, R. K.; Mickevičius, S.
2010-02-01
A procedure for precise calculation of the three- and four-particle harmonic-oscillator (HO) transformation brackets is presented. The analytical expressions of the four-particle HO transformation brackets are given. The computer code for the calculation of HO transformation brackets proves to be quick and efficient, and produces results with small numerical uncertainties. Program summaryProgram title: HOTB Catalogue identifier: AEFQ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1247 No. of bytes in distributed program, including test data, etc.: 6659 Distribution format: tar.gz Programming language: FORTRAN 90 Computer: Any computer with a FORTRAN 90 compiler Operating system: Windows, Linux, FreeBSD, True64 Unix RAM: 8 MB Classification: 17.17 Nature of problem: Calculation of the three-particle and four-particle harmonic-oscillator transformation brackets. Solution method: The method is based on compact expressions for the three-particle harmonic-oscillator brackets, presented in [1], and expressions for the four-particle harmonic-oscillator brackets, presented in this paper. Restrictions: The three- and four-particle harmonic-oscillator transformation brackets up to e=28. Unusual features: Possibility of calculating the four-particle harmonic-oscillator transformation brackets. Running time: Less than one second for a single harmonic-oscillator transformation bracket. References: G.P. Kamuntavičius, R.K. Kalinauskas, B.R. Barret, S. Mickevičius, D. Germanas, Nuclear Physics A 695 (2001) 191.
MsSpec-1.0: A multiple scattering package for electron spectroscopies in material science
NASA Astrophysics Data System (ADS)
Sébilleau, Didier; Natoli, Calogero; Gavaza, George M.; Zhao, Haifeng; Da Pieve, Fabiana; Hatada, Keisuke
2011-12-01
We present a multiple scattering package to calculate the cross-section of various spectroscopies namely photoelectron diffraction (PED), Auger electron diffraction (AED), X-ray absorption (XAS), low-energy electron diffraction (LEED) and Auger photoelectron coincidence spectroscopy (APECS). This package is composed of three main codes, computing respectively the cluster, the potential and the cross-section. In the latter case, in order to cover a range of energies as wide as possible, three different algorithms are provided to perform the multiple scattering calculation: full matrix inversion, series expansion or correlation expansion of the multiple scattering matrix. Numerous other small Fortran codes or bash/csh shell scripts are also provided to perform specific tasks. The cross-section code is built by the user from a library of subroutines using a makefile. Program summaryProgram title: MsSpec-1.0 Catalogue identifier: AEJT_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEJT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 504 438 No. of bytes in distributed program, including test data, etc.: 14 448 180 Distribution format: tar.gz Programming language: Fortran 77 Computer: Any Operating system: Linux, MacOs RAM: Bytes Classification: 7.2 External routines: Lapack ( http://www.netlib.org/lapack/) Nature of problem: Calculation of the cross-section of various spectroscopies. Solution method: Multiple scattering. Running time: The test runs provided only take a few seconds to run.
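The difference between the full-inversion and series-expansion strategies mentioned above can be illustrated on a generic scattering kernel K, for which the multiple-scattering solution involves (I - K)^-1; the Python sketch below (not MsSpec code, which is Fortran 77) compares the exact inverse with truncated series of increasing order.

    import numpy as np

    def solve_by_inversion(K):
        """Exact (I - K)^-1, the full-matrix-inversion route."""
        return np.linalg.inv(np.eye(K.shape[0]) - K)

    def solve_by_series(K, order):
        """Truncated series I + K + K^2 + ..., the series-expansion route."""
        result = np.eye(K.shape[0])
        term = np.eye(K.shape[0])
        for _ in range(order):
            term = term @ K
            result = result + term
        return result

    rng = np.random.default_rng(1)
    K = 0.1 * rng.standard_normal((20, 20))                   # small norm, so the series converges
    exact = solve_by_inversion(K)
    for order in (2, 4, 8):
        print("order", order, "error:", np.linalg.norm(solve_by_series(K, order) - exact))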
NASA Astrophysics Data System (ADS)
Skouteris, Dimitris; Gervasi, Osvaldo; Laganà, Antonio
2009-03-01
A program that uses the time-dependent wavepacket method to study the motion of structureless particles in a force field of quasi-cylindrical symmetry is presented here. The program utilises cylindrical polar coordinates to express the wavepacket, which is subsequently propagated using a Chebyshev expansion of the Schrödinger propagator. Time-dependent exit flux as well as energy-dependent S matrix elements can be obtained for all states of the particle (describing its angular momentum component along the nanotube axis and the excitation of the radial degree of freedom in the cylinder). The program has been used to study the motion of an H atom across a carbon nanotube. Program summaryProgram title: CYLWAVE Catalogue identifier: AECL_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AECL_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3673 No. of bytes in distributed program, including test data, etc.: 35 237 Distribution format: tar.gz Programming language: Fortran 77 Computer: RISC workstations Operating system: UNIX RAM: 120 MBytes Classification: 16.7, 16.10 External routines: SUNSOFT performance library (not essential) TFFT2D.F (Temperton Fast Fourier Transform), BESSJ.F (from Numerical Recipes, for the calculation of Bessel functions) (included in the distribution file). Nature of problem: Time evolution of the state of a structureless particle in a quasicylindrical potential. Solution method: Time dependent wavepacket propagation. Running time: 50000 secs. The test run supplied with the distribution takes about 10 minutes to complete.
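A hedged sketch of the Chebyshev expansion of the propagator is given below for a small matrix Hamiltonian (CYLWAVE itself is Fortran 77 and works on a cylindrical grid; in practice the spectral bounds would be estimated rather than obtained by diagonalization): the Hamiltonian is rescaled so its spectrum lies in [-1, 1] and exp(-iHt) is expanded in Chebyshev polynomials with Bessel-function coefficients.

    import numpy as np
    from scipy.linalg import expm
    from scipy.special import jv

    def chebyshev_propagate(H, psi, t, n_terms):
        """Approximate exp(-iHt) psi with a Chebyshev expansion (hbar = 1)."""
        evals = np.linalg.eigvalsh(H)
        e0, de = 0.5 * (evals[-1] + evals[0]), 0.5 * (evals[-1] - evals[0])
        Hn = (H - e0 * np.eye(len(psi))) / de                 # rescaled Hamiltonian, spectrum in [-1, 1]
        alpha = de * t
        phi_prev, phi = psi, Hn @ psi                         # T_0 psi and T_1 psi
        result = jv(0, alpha) * phi_prev + 2.0 * (-1j) * jv(1, alpha) * phi
        for n in range(2, n_terms):
            phi_prev, phi = phi, 2.0 * (Hn @ phi) - phi_prev  # Chebyshev recurrence
            result = result + 2.0 * (-1j) ** n * jv(n, alpha) * phi
        return np.exp(-1j * e0 * t) * result

    rng = np.random.default_rng(2)
    H = rng.standard_normal((8, 8))
    H = 0.5 * (H + H.T)
    psi = np.ones(8) / np.sqrt(8.0)
    approx = chebyshev_propagate(H, psi, t=1.0, n_terms=30)
    print("max error:", np.max(np.abs(approx - expm(-1j * H) @ psi)))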
METAGUI. A VMD interface for analyzing metadynamics and molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Biarnés, Xevi; Pietrucci, Fabio; Marinelli, Fabrizio; Laio, Alessandro
2012-01-01
We present a new computational tool, METAGUI, which extends the VMD program with a graphical user interface that allows constructing a thermodynamic and kinetic model of a given process simulated by large-scale molecular dynamics. The tool is specially designed for analyzing metadynamics-based simulations. The huge number of diverse structures generated during such a simulation is partitioned into a set of microstates (i.e. structures with similar values of the collective variables). Their relative free energies are then computed by a weighted-histogram procedure and the most relevant free energy wells are identified by diagonalization of the rate matrix followed by a committor analysis. This procedure leads to a convenient representation of the metastable states and long-time kinetics of the system which can be compared with experimental data. The tool allows the user to switch seamlessly between a collective-variable-space representation of microstates and their atomic structure representation, which greatly facilitates the set-up and analysis of molecular dynamics simulations. METAGUI is based on the output format of the PLUMED plugin, making it compatible with a number of different molecular dynamics packages like AMBER, NAMD, GROMACS and several others. The METAGUI source files can be downloaded from the PLUMED web site ( http://www.plumed-code.org). Program summaryProgram title: METAGUI Catalogue identifier: AEKH_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKH_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 117 545 No. of bytes in distributed program, including test data, etc.: 8 516 203 Distribution format: tar.gz Programming language: TK/TCL, Fortran Computer: Any computer with a VMD installation and capable of running an executable produced by a gfortran compiler Operating system: Linux, Unix OS-es RAM: 1 073 741 824 bytes Classification: 23 External routines: A VMD installation ( http://www.ks.uiuc.edu/Research/vmd/) Nature of problem: Extract thermodynamic data and build a kinetic model of a given process simulated by metadynamics or molecular dynamics simulations, and provide this information in a dual representation that allows the user to navigate and explore the molecular structures corresponding to each point along the multi-dimensional free energy hypersurface. Solution method: Graphical user interface linked to VMD that clusters the simulation trajectories in the space of a set of collective variables and assigns each frame to a given microstate, determines the free energy of each microstate by a weighted histogram analysis method, and identifies the most relevant free energy wells (kinetic basins) by diagonalization of the rate matrix followed by a committor analysis. Restrictions: Input format files compatible with PLUMED and all the MD engines supported by PLUMED and VMD. Running time: A few minutes.
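The first step of the analysis, assigning frames to microstates by binning the collective variables, can be sketched as follows (illustrative Python only; METAGUI itself is TK/TCL plus Fortran, and for metadynamics runs the populations must be reweighted by the weighted-histogram procedure rather than converted directly to free energies as in this unbiased-trajectory toy example).

    import numpy as np

    def assign_microstates(cv_traj, bin_width):
        """cv_traj: (n_frames, n_cv) array of collective variables; returns one integer label per frame."""
        bins = np.floor(cv_traj / bin_width).astype(int)
        _, labels = np.unique(bins, axis=0, return_inverse=True)
        return labels

    def free_energies(labels, kT=2.5):                        # kT ~ 2.5 kJ/mol near 300 K (assumed)
        counts = np.bincount(labels).astype(float)
        return -kT * np.log(counts / counts.sum())

    rng = np.random.default_rng(3)
    cv = rng.normal(size=(2000, 2))                           # hypothetical two-CV trajectory
    labels = assign_microstates(cv, bin_width=0.5)
    F = free_energies(labels)
    print("microstates:", labels.max() + 1, " lowest free energy:", F.min())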
Stochastic hyperfine interactions modeling library
NASA Astrophysics Data System (ADS)
Zacate, Matthew O.; Evenson, William E.
2011-04-01
The stochastic hyperfine interactions modeling library (SHIML) provides a set of routines to assist in the development and application of stochastic models of hyperfine interactions. The library provides routines written in the C programming language that (1) read a text description of a model for fluctuating hyperfine fields, (2) set up the Blume matrix, upon which the evolution operator of the system depends, and (3) find the eigenvalues and eigenvectors of the Blume matrix so that theoretical spectra of experimental techniques that measure hyperfine interactions can be calculated. The optimized vector and matrix operations of the BLAS and LAPACK libraries are utilized; however, there was a need to develop supplementary code to find an orthonormal set of (left and right) eigenvectors of complex, non-Hermitian matrices. In addition, example code is provided to illustrate the use of SHIML to generate perturbed angular correlation spectra for the special case of polycrystalline samples when anisotropy terms of higher order than A can be neglected. Program summaryProgram title: SHIML Catalogue identifier: AEIF_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIF_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU GPL 3 No. of lines in distributed program, including test data, etc.: 8224 No. of bytes in distributed program, including test data, etc.: 312 348 Distribution format: tar.gz Programming language: C Computer: Any Operating system: LINUX, OS X RAM: Varies Classification: 7.4 External routines: TAPP [1], BLAS [2], a C-interface to BLAS [3], and LAPACK [4] Nature of problem: In condensed matter systems, hyperfine methods such as nuclear magnetic resonance (NMR), Mössbauer effect (ME), muon spin rotation (μSR), and perturbed angular correlation spectroscopy (PAC) measure electronic and magnetic structure within Angstroms of nuclear probes through the hyperfine interaction. When interactions fluctuate at rates comparable to the time scale of a hyperfine method, there is a loss in signal coherence, and spectra are damped. The degree of damping can be used to determine fluctuation rates, provided that theoretical expressions for spectra can be derived for relevant physical models of the fluctuations. SHIML provides routines to help researchers quickly develop code to incorporate stochastic models of fluctuating hyperfine interactions in calculations of hyperfine spectra. Solution method: Calculations are based on the method for modeling stochastic hyperfine interactions for PAC by Winkler and Gerdau [5]. The method is extended to include other hyperfine methods following the work of Dattagupta [6]. The code provides routines for reading model information from text files, allowing researchers to develop new models quickly without the need to modify computer code for each new model to be considered. Restrictions: In the present version of the code, only methods that measure the hyperfine interaction on one probe spin state, such as PAC, μSR, and NMR, are supported. Running time: Varies
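The supplementary linear-algebra task singled out above, building a biorthonormal set of left and right eigenvectors of a complex non-Hermitian matrix, can be illustrated as follows (SciPy is used purely for illustration; SHIML implements this in C on top of BLAS/LAPACK, and the random matrix below merely stands in for a Blume matrix).

    import numpy as np
    from scipy.linalg import eig

    def biorthonormal_eigensystem(B):
        """Eigenvalues w and matrices L, R of left/right eigenvectors with L^dagger R = identity."""
        w, vl, vr = eig(B, left=True, right=True)
        overlaps = np.einsum('ik,ik->k', vl.conj(), vr)       # <l_k | r_k>
        vl = vl / overlaps.conj()                             # rescale left vectors for biorthonormality
        return w, vl, vr

    rng = np.random.default_rng(4)
    B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
    w, L, R = biorthonormal_eigensystem(B)
    print("deviation from identity:", np.max(np.abs(L.conj().T @ R - np.eye(5))))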
The MOLDY short-range molecular dynamics package
NASA Astrophysics Data System (ADS)
Ackland, G. J.; D'Mellow, K.; Daraszewicz, S. L.; Hepburn, D. J.; Uhrin, M.; Stratford, K.
2011-12-01
We describe a parallelised version of the MOLDY molecular dynamics program. This Fortran code is aimed at systems which may be described by short-range potentials and specifically those which may be addressed with the embedded atom method. This includes a wide range of transition metals and alloys. MOLDY provides a range of options in terms of the molecular dynamics ensemble used and the boundary conditions which may be applied. A number of standard potentials are provided, and the modular structure of the code allows new potentials to be added easily. The code is parallelised using OpenMP and can therefore be run on shared memory systems, including modern multicore processors. Particular attention is paid to the updates required in the main force loop, where synchronisation is often required in OpenMP implementations of molecular dynamics. We examine the performance of the parallel code in detail and give some examples of applications to realistic problems, including the dynamic compression of copper and carbon migration in an iron-carbon alloy. Program summaryProgram title: MOLDY Catalogue identifier: AEJU_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEJU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 2 No. of lines in distributed program, including test data, etc.: 382 881 No. of bytes in distributed program, including test data, etc.: 6 705 242 Distribution format: tar.gz Programming language: Fortran 95/OpenMP Computer: Any Operating system: Any Has the code been vectorised or parallelized?: Yes. OpenMP is required for parallel execution RAM: 100 MB or more Classification: 7.7 Nature of problem: MOLDY addresses the problem of many atoms (of order 10^6) interacting via a classical interatomic potential on a timescale of microseconds. It is designed for problems where statistics must be gathered over a number of equivalent runs, such as measuring thermodynamic properties, diffusion, radiation damage, fracture, twinning deformation, nucleation and growth of phase transitions, sputtering, etc. In the vast majority of materials, the interactions are non-pairwise, and the code must be able to deal with many-body forces. Solution method: Molecular dynamics involves integrating Newton's equations of motion. MOLDY uses Verlet (for good energy conservation) or predictor-corrector (for accurate trajectories) algorithms. It is parallelised using OpenMP. It also includes a static minimisation routine to find the lowest energy structure. Boundary conditions for surfaces, clusters and grain boundaries, a thermostat (Nose), a barostat (Parrinello-Rahman), and externally applied strain are provided. The initial configuration can be either a repeated unit cell or have all atoms given explicitly. Initial velocities are generated internally, but it is also possible to specify the velocity of a particular atom. A wide range of interatomic force models are implemented, including embedded atom, Morse or Lennard-Jones. Thus the program is especially well suited to calculations on metals. Restrictions: The code is designed for short-ranged potentials, and there is no Ewald sum. Thus for long-range interactions where all particles interact with all others, the order-N scaling will fail. Different interatomic potential forms require recompilation of the code. Additional comments: There is a set of associated open-source analysis software for postprocessing and visualisation. 
This includes local crystal structure recognition and identification of topological defects. Running time: A set of test modules for running time are provided. The code scales as order N. The parallelisation shows near-linear scaling with number of processors in a shared memory environment. A typical run of a few tens of nanometers for a few nanoseconds will run on a timescale of days on a multiprocessor desktop.
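For reference, a velocity-Verlet step of the kind used for good energy conservation looks as sketched below (illustrative Python with a toy force; MOLDY itself is Fortran 95/OpenMP and evaluates EAM-type many-body forces with neighbour lists in the parallel force loop discussed above).

    import numpy as np

    def velocity_verlet_step(x, v, forces, masses, dt):
        """Advance positions and velocities by one time step; forces(x) returns an (N, 3) array."""
        a = forces(x) / masses[:, None]
        v_half = v + 0.5 * dt * a
        x_new = x + dt * v_half
        v_new = v_half + 0.5 * dt * forces(x_new) / masses[:, None]
        return x_new, v_new

    forces = lambda x: -x                                     # toy force: each atom bound to the origin
    x = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    v = np.zeros_like(x)
    m = np.ones(2)
    for _ in range(10):
        x, v = velocity_verlet_step(x, v, forces, m, dt=0.1)
    print(x)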
THERMINATOR 2: THERMal heavy-IoN generATOR 2
NASA Astrophysics Data System (ADS)
Chojnacki, Mikołaj; Kisiel, Adam; Florkowski, Wojciech; Broniowski, Wojciech
2012-03-01
We present an extended version of THERMINATOR, a Monte Carlo event generator dedicated to studies of the statistical production of particles in relativistic heavy-ion collisions. The package is written in C++ and uses the CERN ROOT data-analysis environment. The largely increased functionality of the code contains the following main features: 1) The possibility of input of any shape of the freeze-out hypersurface and the expansion velocity field, including the 3+1-dimensional profiles, in particular those generated externally with various hydrodynamic codes. 2) The hypersurfaces may have variable thermal parameters, which allow studies departing significantly from the mid-rapidity region where the baryon chemical potential becomes large. 3) We include a library of standard sets of hypersurfaces and velocity profiles describing the RHIC Au + Au data at √s_NN = 200 GeV for various centralities, as well as those anticipated for the LHC Pb + Pb collisions at √s_NN = 5.5 TeV. 4) A separate code, FEMTO-THERMINATOR, is provided to carry out the analysis of the pion-pion femtoscopic correlations which are an important source of information concerning the size and expansion of the system. 5) We also include several useful scripts that carry out auxiliary tasks, such as obtaining an estimate of the number of elastic collisions after the freeze-out, counting of particles flowing back into the fireball and violating causality (typically very few), or visualizing various results: the particle p_T-spectra, the elliptic flow coefficients, and the HBT correlation radii. Program summaryProgram title: THERMINATOR 2 Catalogue identifier: ADXL_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADXL_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 423 444 No. of bytes in distributed program, including test data, etc.: 2 854 602 Distribution format: tar.gz Programming language: C++ with the CERN ROOT libraries, BASH shell Computer: Any with a C++ compiler and the CERN ROOT environment, ver. 5.26 or later; tested with Intel Core2 Duo CPU E8400 @ 3 GHz, 4 GB RAM Operating system: Linux Ubuntu 10.10 x64 (gcc 4.4.5), ROOT 5.26; Linux Ubuntu 11.04 x64 (gcc Ubuntu/Linaro 4.5.2-8ubuntu4), ROOT 5.30/00 (compiled from source); Linux CentOS 5.2 (gcc Red Hat 4.1.2-42), ROOT 5.30/00 (compiled from source); Mac OS X 10.6.8 (i686-apple-darwin10-g++-4.2.1), ROOT 5.30/00 (for Mac OS X 10.6 x86-64 with gcc 4.2.1); cygwin-1.7.9-1 (gcc gcc4-g++-4.3.4-4), ROOT 5.30/00 (for cygwin gcc 4.3) RAM: 30 MB (therm2_events), 150 MB (therm2_femto) Classification: 11.2 Catalogue identifier of previous version: ADXL_v1_0 Journal reference of previous version: Comput. Phys. Comm. 174 (2006) 669 External routines: CERN ROOT ( http://root.cern.ch/drupal/) Does the new version supersede the previous version?: Yes Nature of problem: Particle production via statistical hadronization in relativistic heavy-ion collisions. Solution method: Monte Carlo simulation, analyzed with ROOT. Reasons for new version: The increased functionality of the code contains the following important features. The input of any shape of the freeze-out hypersurface and the expansion velocity field, including the 3+1-dimensional profiles, in particular those generated externally with the various popular hydrodynamic codes. 
The hypersurfaces may have variable thermal parameters, which allows for studies departing significantly from the mid-rapidity region. We include a library of standard sets of hypersurfaces and velocity profiles describing the RHIC Au + Au and the LHC Pb+Pb data. A separate code, FEMTO-THERMINATOR, is provided to carry out the analysis of femtoscopic correlations. Summary of revisions: THERMINATOR 2 incorporates major revisions to encompass the enhanced functionality. Classes: The Integrator class has been expanded and a new subgroup of classes defined. Model and abstract class: These classes are responsible for the physical models of the freeze-out process. The functionality and readability of the code has been substantially increased by implementing each freeze-out model in a different class. The Hypersurface class was added to handle the input form hydrodynamic codes. The hydro input is passed to the program as a lattice of the freeze-out hypersurface. That information is stored in the .xml files. Input: THERMINATOR 2 programs are now controlled by *. ini type files. The programs parameters and the freeze-out model parameters are now in separate ini files. Output: The event files generated by the therm2_events program are not backward compatible with the previous version. The event*. root file structure was expanded with two new TTree structures. From the particle entry it is possible to back-trace the whole cascade. Event text output is now optional. The ROOT macros produce the *. eps figures with physics results, e.g. the pT-spectra, the elliptic-flow coefficient, rapidity distributions, etc. The THERMINATOR HBT package creates the ROOT files femto*. root ( therm2_femto) and hbtfit*. root ( therm2_hbtfit). Directory structure: The directory structure has been reorganized. Source code resides in the build directory. The freeze-out model input files, event files, ROOT macros are stored separately. The THERMINATOR 2 system, after installation, is able to run on a cluster. Scripts: The package contains a few BASH scripts helpful when running e.g. on a cluster the whole system can be executed via a single script. Additional comments: Typical data file size: default configuration. 45 MB/500 events; 35 MB/correlation file (one k bin); 45 kB/fit file (projections and fits). Running time: Default configuration at 3 GHz. primordial multiplicities 70 min (calculated only once per case); 8 min/500 events; 10 min - draw all figures; 25 min/one k bin in the HBT analysis with 5000 events.
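For orientation, statistical emission from a freeze-out hypersurface of the kind sampled here is conventionally described by the Cooper-Frye formula (quoted from the general literature, not from this summary):

    E \frac{dN}{d^3p} = \int_{\Sigma} d\Sigma_\mu \, p^\mu \, f\!\left( p \cdot u ; T, \mu \right) ,

where Σ is the freeze-out hypersurface, u the local flow velocity, T and μ the local thermal parameters, and f the equilibrium Bose or Fermi distribution.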
Parallel implementation of an adaptive and parameter-free N-body integrator
NASA Astrophysics Data System (ADS)
Pruett, C. David; Ingham, William H.; Herman, Ralph D.
2011-05-01
Previously, Pruett et al. (2003) [3] described an N-body integrator of arbitrarily high order M with an asymptotic operation count of O(MN). The algorithm's structure lends itself readily to data parallelization, which we document and demonstrate here in the integration of point-mass systems subject to Newtonian gravitation. High order is shown to benefit parallel efficiency. The resulting N-body integrator is robust, parameter-free, highly accurate, and adaptive in both time-step and order. Moreover, it exhibits linear speedup on distributed parallel processors, provided that each processor is assigned at least a handful of bodies. Program summaryProgram title: PNB.f90 Catalogue identifier: AEIK_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIK_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3052 No. of bytes in distributed program, including test data, etc.: 68 600 Distribution format: tar.gz Programming language: Fortran 90 and OpenMPI Computer: All shared or distributed memory parallel processors Operating system: Unix/Linux Has the code been vectorized or parallelized?: The code has been parallelized but has not been explicitly vectorized. RAM: Dependent upon N Classification: 4.3, 4.12, 6.5 Nature of problem: High accuracy numerical evaluation of trajectories of N point masses each subject to Newtonian gravitation. Solution method: Parallel and adaptive extrapolation in time via power series of arbitrary degree. Running time: 5.1 s for the demo program supplied with the package.
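Independently of the high-order power-series machinery, the basic O(N^2) task any such integrator evaluates is the set of pairwise Newtonian accelerations; the Python sketch below (not the PNB.f90 algorithm itself) splits the outer loop into chunks in the way bodies would be distributed across processors.

    import numpy as np

    def accelerations(x, m, G=1.0, chunks=2):
        """Pairwise Newtonian accelerations; the outer loop is split into chunks, one per 'processor'."""
        n = len(m)
        a = np.zeros_like(x)
        for block in np.array_split(np.arange(n), chunks):
            for i in block:
                dx = x - x[i]                                 # vectors from body i to every body
                r2 = np.einsum('ij,ij->i', dx, dx)
                r2[i] = np.inf                                # exclude the self-interaction
                a[i] = G * np.sum((m / r2 ** 1.5)[:, None] * dx, axis=0)
        return a

    x = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
    m = np.array([1.0, 1.0e-3, 1.0e-3])
    print(accelerations(x, m))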
The predictive information obtained by testing multiple software versions
NASA Technical Reports Server (NTRS)
Lee, Larry D.
1987-01-01
Multiversion programming is a redundancy approach to developing highly reliable software. In applications of this method, two or more versions of a program are developed independently by different programmers and the versions are combined to form a redundant system. One variation of this approach consists of developing a set of n program versions and testing the versions to predict the failure probability of a particular program or a system formed from a subset of the programs. The precision that might be obtained, and also the effect of programmer variability if predictions are made over repetitions of the process of generating different program versions, are examined.
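As a toy illustration of how per-version test results feed a system-level prediction, the snippet below evaluates the failure probability of a 2-out-of-3 voting configuration from individually estimated version failure probabilities. The independence assumption is an idealization; the programmer-variability effects examined in the paper are precisely what can make such an estimate optimistic.

# Toy illustration (not the paper's statistical model): failure probability of a
# 2-out-of-3 voting system from per-version failure probabilities estimated by
# testing, under the idealized assumption of independent failures.
def majority_failure_prob(p):
    """P(at least 2 of the 3 versions fail) for independent failure probabilities p = [p1, p2, p3]."""
    p1, p2, p3 = p
    exactly_two = p1 * p2 * (1 - p3) + p1 * (1 - p2) * p3 + (1 - p1) * p2 * p3
    all_three = p1 * p2 * p3
    return exactly_two + all_three

# Per-version failure probabilities estimated as observed failures / test cases:
print(majority_failure_prob([0.01, 0.02, 0.015]))   # about 6.4e-4 under independence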
NASA Astrophysics Data System (ADS)
Sanna, N.; Baccarelli, I.; Morelli, G.
2009-12-01
VOLSCAT is a computer program which implements the Single Center Expansion (SCE) method to solve the scattering equation for the elastic collision of electrons/positrons off molecular targets. The scattering potential needed is calculated by on-the-fly calls to the external SCELib library for molecular properties, recently ported to GPU computing environments and ClearSpeed platforms, and made available by means of an Application Program Interface (SCELib-API) which is also provided with the VOLSCAT package in a beta version. The result is a high-throughput approach to the solution of the complex electron/positron-molecule scattering problem, which allows for intensive calculations both in the number of systems that can be studied and in their size. Accurate partial and total elastic cross sections are produced in output together with the associated eigenphase sums. Indirect scattering processes arising from the formation of temporary negative ions can also be analyzed through the computation of the resonance parameters. Program summaryProgram title: VOLSCAT V1.0 Catalogue identifier: AEEW_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 4 618 353 No. of bytes in distributed program, including test data, etc.: 120 307 536 Distribution format: tar.gz Programming language: Fortran90 Computer: All SMP platforms based on AIX, Linux and SUNOS operating systems over SPARC, POWER, Intel Itanium2, X86, em64t and Opteron processors Operating system: SUNOS, IBM AIX, Linux RedHat (Enterprise), Linux SuSE (SLES) Has the code been vectorized or parallelized?: Yes. The parallel version in the present release of the code is limited to the OpenMP calculation of the exchange potential. The number of OpenMP threads can then be set in the input script. RAM: For a typical (isolated) biomolecule (e.g. Cytosine or Ribose) a converged calculation would require from 320 MB up to 2.5 GB. Word size: 64 bits Classification: 16.5 External routines: LAPACK (dsyev, dgetri, dgetrf) (http://www.netlib.org/lapack/) Nature of problem: In this set of codes an efficient procedure is implemented to calculate partial cross sections for the scattering between an electron/positron and a molecular target as a function of the collision energy. Solution method: The scattering equations are derived in the framework of the Single Center Expansion (SCE) procedure, which allows the reduction of the original three-dimensional problem to a radial (one-dimensional) equation through the expansion of the scattering potential and the system wavefunction in a set of symmetry-adapted (real) spherical harmonics. The local part of the electrostatic interaction between the charged projectile (electron/positron) and the molecular target is provided in input by the SCELib library, which also provides the correlation and polarization corrections for the short-range and long-range parts, respectively, of the interaction. A proper Application Programming Interface (API) is used by VOLSCAT to load the energy-independent part of the potential, while the non-local exchange contribution is approximated by a local form and calculated on the fly in the VOLSCAT run for each desired collision energy.
The resulting SCE one-dimensional homogeneous scattering equation is rewritten in an integral form by means of the standard Green's function technique, resulting in a set of coupled Volterra equations which are solved to give the phase shifts and cross sections for any desired impact energy in terms of the partial components defined by the irreducible representations of the symmetry point group to which the target molecule belongs. The total cross section can then be straightforwardly calculated by summing over all the partial cross sections produced in the output. A Breit-Wigner analysis of the eigenphase sum, produced as a function of the energy, also gives information on the location of possible resonance states arising in the collision process. Restrictions: Depending on the molecular system under study and on the operating conditions, the program may or may not fit into the available RAM. Additional comments: A beta version of SCELib-API is included in the distribution package. Running time: The execution time strongly depends on the molecular target description and on the hardware/OS chosen; it is directly proportional to the (r,θ,φ) grid size and to the number of angular basis functions used.
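As an illustration of the resonance analysis mentioned above, the eigenphase sum near an isolated resonance is commonly fitted to the standard Breit-Wigner form; the notation E_r for the resonance position, Γ for its width and δ_bg for the slowly varying background is ours and is not taken from the VOLSCAT documentation.

\[
\delta_{\mathrm{sum}}(E) \;\simeq\; \delta_{\mathrm{bg}}(E) \;+\; \arctan\!\left(\frac{\Gamma/2}{E_r - E}\right),
\qquad
\left.\frac{d\delta_{\mathrm{sum}}}{dE}\right|_{E=E_r} \;\simeq\; \frac{2}{\Gamma},
\]

so a rise of the eigenphase sum by roughly π over an energy interval of width Γ signals a resonance located at E_r.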
Real-time Java simulations of multiple interference dielectric filters
NASA Astrophysics Data System (ADS)
Kireev, Alexandre N.; Martin, Olivier J. F.
2008-12-01
An interactive Java applet for real-time simulation and visualization of the transmittance properties of multiple interference dielectric filters is presented. The most commonly used interference filters as well as the state-of-the-art ones are embedded in this platform-independent applet which can serve research and education purposes. The Transmittance applet can be freely downloaded from the site http://cpc.cs.qub.ac.uk. Program summaryProgram title: Transmittance Catalogue identifier: AEBQ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEBQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5778 No. of bytes in distributed program, including test data, etc.: 90 474 Distribution format: tar.gz Programming language: Java Computer: Developed on PC-Pentium platform Operating system: Any Java-enabled OS. Applet was tested on Windows ME, XP, Sun Solaris, Mac OS RAM: Variable Classification: 18 Nature of problem: Sophisticated wavelength selective multiple interference filters can include some tens or even hundreds of dielectric layers. The spectral response of such a stack is not obvious. On the other hand, there is a strong demand from application designers and students to get a quick insight into the properties of a given filter. Solution method: A Java applet was developed for the computation and the visualization of the transmittance of multilayer interference filters. It is simple to use and the embedded filter library can serve educational purposes. Also, its ability to handle complex structures will be appreciated as a useful research and development tool. Running time: Real-time simulations
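The underlying physics can be reproduced with the standard characteristic-matrix (transfer-matrix) formalism for a stack of homogeneous layers. The sketch below is an illustrative Python implementation for lossless layers at normal incidence, not the applet's Java source; the refractive indices and layer counts in the example are arbitrary.

# Minimal sketch of the characteristic-matrix method for the normal-incidence
# transmittance of a lossless dielectric stack (illustrative only).
import numpy as np

def transmittance(wavelength, n_in, n_layers, d_layers, n_sub):
    """Intensity transmittance of a stack of homogeneous, lossless layers at normal incidence."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2.0 * np.pi * n * d / wavelength          # phase thickness of the layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])                     # stack terminated by the substrate
    return 4.0 * n_in * n_sub / abs(n_in * B + C) ** 2

# Example: quarter-wave high/low stack (TiO2/SiO2-like indices) designed for 550 nm:
lam0, nH, nL = 550.0, 2.3, 1.46
layers = [nH, nL] * 8
thick = [lam0 / (4 * n) for n in layers]
print(transmittance(550.0, 1.0, layers, thick, 1.52))     # deep in the stop band: close to 0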
SLHAplus: A library for implementing extensions of the standard model
NASA Astrophysics Data System (ADS)
Bélanger, G.; Christensen, Neil D.; Pukhov, A.; Semenov, A.
2011-03-01
We provide a library to facilitate the implementation of new models in codes such as matrix element and event generators or codes for computing dark matter observables. The library contains an SLHA reader routine as well as diagonalisation routines. This library is available in CalcHEP and micrOMEGAs. The implementation of models based on this library is supported by LanHEP and FeynRules. Program summaryProgram title: SLHAplus_1.3 Catalogue identifier: AEHX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 6283 No. of bytes in distributed program, including test data, etc.: 52 119 Distribution format: tar.gz Programming language: C Computer: IBM PC, MAC Operating system: UNIX (Linux, Darwin, Cygwin) RAM: 2000 MB Classification: 11.1 Nature of problem: Implementation of extensions of the standard model in matrix element and event generators and codes for dark matter observables. Solution method: For generic extensions of the standard model we provide routines for reading files that adopt the standard format of the SUSY Les Houches Accord (SLHA) file. The procedure has been generalized to take into account an arbitrary number of blocks so that the reader can be used in generic models including non-supersymmetric ones. The library also contains routines to diagonalize real and complex mass matrices with either unitary or bi-unitary transformations as well as routines for evaluating the running strong coupling constant, running quark masses and effective quark masses. Running time: 0.001 sec
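To give a feel for the data the reader routine handles, the sketch below parses the basic SLHA text layout: BLOCK sections whose entries are integer keys followed by a floating-point value. It is a plain-Python illustration of the file format, not the C API of SLHAplus, and it deliberately skips DECAY sections, scale annotations and string-valued entries.

# Illustrative parser for the SLHA text format (not the SLHAplus C interface).
def read_slha(path):
    blocks, current = {}, None
    with open(path) as f:
        for raw in f:
            line = raw.split("#", 1)[0].strip()      # drop comments and whitespace
            if not line:
                continue
            tokens = line.split()
            if tokens[0].upper() == "BLOCK":
                current = tokens[1].upper()
                blocks[current] = {}
            elif current is not None:
                try:
                    *keys, value = tokens
                    blocks[current][tuple(int(k) for k in keys)] = float(value)
                except ValueError:
                    pass                              # skip string-valued / DECAY-style lines
    return blocks

# e.g. read_slha("spectrum.slha")["MASS"][(25,)] would return the entry for PDG code 25.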
Local electron tomography using angular variations of surface tangents: Stomo version 2
NASA Astrophysics Data System (ADS)
Petersen, T. C.; Ringer, S. P.
2012-03-01
In a recent publication, we investigated the prospect of measuring the outer three-dimensional (3D) shapes of nano-scale atom probe specimens from tilt-series of images collected in the transmission electron microscope. For this purpose alone, an algorithm and simplified reconstruction theory were developed to circumvent issues that arise in commercial "back-projection" computations in this context. In our approach, we give up the difficult task of computing the complete 3D continuum structure and instead seek only the 3D morphology of internal and external scattering interfaces. These interfaces can be described as embedded 2D surfaces projected onto each image in a tilt series. Curves and other features in the images are interpreted as inscribed sets of tangent lines, which intersect the scattering interfaces at unknown locations along the direction of the incident electron beam. Smooth angular variations of the tangent line abscissa are used to compute the surface tangent intersections and hence the 3D morphology as a "point cloud". We have published the explicit details of our alternative algorithm along with the source code entitled "stomo_version_1". For this work, we have further modified the code to efficiently handle rectangular image sets, perform much faster tangent-line "edge detection" and smoother tilt-axis image alignment using simple bi-linear interpolation. We have also adapted the algorithm to detect tangent lines as "ridges", based upon 2nd order partial derivatives of the image intensity, whose magnitude and orientation are described by a Hessian matrix. Ridges are more appropriate descriptors for tangent-line curves in phase contrast images outlined by Fresnel fringes or absorption contrast data from fine-scale objects. Improved accuracy, efficiency and speed for "stomo_version_2" are demonstrated in this paper using both high resolution electron tomography data of a nano-sized atom probe tip and simulated absorption-contrast images. Program summaryProgram title: STOMO version 2 Catalogue identifier: AEFS_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFS_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2854 No. of bytes in distributed program, including test data, etc.: 23 559 Distribution format: tar.gz Programming language: C/C++ Computer: PC Operating system: Windows XP RAM: Scales as the product of experimental image dimensions multiplied by the number of points chosen by the user in polynomial fitting. Typical runs require between 50 MB and 100 MB of RAM. Supplementary material: Sample output files, for the test run provided, are available. Classification: 7.4, 14 Catalogue identifier of previous version: AEFS_v1_0 Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 676 Does the new version supersede the previous version?: Yes Nature of problem: A local electron tomography algorithm for specimens for which conventional back projection may fail and/or for data with a limited angular range (which would otherwise cause significant 'missing-wedge' artefacts). The algorithm does not solve the tomography back projection problem but rather locally reconstructs the 3D morphology of surfaces defined by varied scattering densities.
Solution method: Local reconstruction is effected using image-analysis edge and ridge detection computations on experimental tilt series to measure smooth angular variations of surface tangent-line intersections, which generate point clouds decorating the embedded and/or external scattering surfaces of a specimen. Reasons for new version: The new version was coded to cater for rectangular images in experimental tilt-series, ensure smoother image rotations, provide ridge detection (suitable for sensing phase-contrast Fresnel fringes and other fine-scale structures), provide faster/larger-kernel edge detection, and greatly reduce RAM usage. Specimen surface normals are also explicitly computed from tangent-line and edge intersections, providing new information for potential use in point cloud rendering. Hysteresis thresholding implemented in the version 1 edge-detection algorithm provided only sparse edge-linking. Version 2 now implements edge tracking using recursion to fully link the edges during hysteresis thresholding. Furthermore, in version 1 the minimum number of fitted polynomial points (specified in the input file) was not correctly imposed, which has been fixed for version 2. Most of these changes increase the accuracy of 3D morphology surface-tomography reconstructions by facilitating the use of more/finer tilt angles and experimental images of increased spatial resolution. The ridge detection was incorporated to specifically improve the reconstruction of internal specimen morphology. Summary of revisions: Included a Hessian() function to compute 2nd order spatial derivatives of image intensities (it operates in the same fashion as the previous and existing Sobel() function). Changed the convolve_Gaussian() function to alternatively use successive 1D convolutions (rather than the cumbersome 2D summations implemented in version 1), resulting in a large increase in computational speed without any loss in accuracy. The convolution kernel size was hence widened to three times the full width at half maximum of the Gaussian filter to improve scale-space selection accuracy. A ridge detection option was included to compute edge maps sensitive to ridges, rather than edges, using elements from a Hessian matrix, whose eigenvalues were used to define the ridge direction for Canny-type hysteresis thresholding. Function edge_detect_Canny() was also altered to pass the gradient-direction maps (from either Hessian- or Sobel-based operators) in and out of scope for the computation of surface normals, thereby enabling the output of both point-cloud and corresponding unstructured vector-field surface descriptors. Function rotate_imgs() was changed to incorporate basic bi-linear interpolation for improved tilt-axis alignment of the entire tilt series in exp_data.dat. Smoother and more accurate edge maps are thereby produced. Algorithm convert_point_cloud_to_tomogram() was created to output the tomogram 3d_imgs.dat in a more memory-efficient manner. The function shell_sort(), adapted from Numerical Recipes in C, was also coded for this purpose. The new function compute_xyz() was coded to calculate point-clouds and tomogram surface normals using information from single tilt images, as opposed to the entire stack. This function is hence used iteratively throughout the reconstruction as each tilt image is analysed in succession. The new function reconstruct_local() is the heart of stomo_version_2.cpp.
The main() source code in stomo_version_1.cpp has been rewritten to process experimental images and edge maps one at a time, using a buffered 3D array of dimensions dictated solely by the number of tilt images required for the local SVD fit of the angular variations. These changes (along with similar iterative file writing) have been made to vastly reduce memory usage and hence allow higher spatial- and angular-resolution data sets to be analysed without recourse to high-performance computing resources. The input file has been simplified by removing the 'slices' and 'channels' settings (used in version 1 for crude image binning), which are now equal to the respective numbers of image rows and columns. Every summation over image rows and columns has been checked to enable the analysis of rectangular images without error. For images of specimens with high aspect ratios, such as narrow tips, these fixes allow significant reductions in computation time and memory usage. Some arrays in the source code were not appropriately zeroed in version 1, causing reconstruction artefacts in some cases. These problems have now been fixed. Fixed an if-statement to correctly impose the minimum number of fitted polynomial points, thereby reducing noise in the reconstructed data. Implemented proper edge linking in the hysteresis thresholding code for Canny edge detection. Restrictions: The input experimental tilt-series of images must be registered with respect to a common single tilt axis with known orientation and position. Running time: For a high-quality reconstruction, 2-5 min.
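A compact way to see the Hessian-based ridge detection described above is the following NumPy/SciPy sketch: second-order Gaussian derivatives form the Hessian at every pixel, its eigenvalues give the ridge strength, and the principal-axis angle gives the ridge direction used for Canny-type thresholding. This is an illustration of the technique, not the C++ of stomo_version_2, and the scale parameter sigma is a free choice.

# Sketch of Hessian-based ridge detection (illustrative, not the stomo_version_2 code).
import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_map(image, sigma):
    """Return ridge strength and ridge direction (radians) at scale sigma."""
    Hxx = gaussian_filter(image, sigma, order=(0, 2))   # d2I/dx2 (x = column axis)
    Hyy = gaussian_filter(image, sigma, order=(2, 0))   # d2I/dy2 (y = row axis)
    Hxy = gaussian_filter(image, sigma, order=(1, 1))
    # Eigenvalues of the 2x2 symmetric Hessian at every pixel:
    tmp = np.sqrt(((Hxx - Hyy) / 2.0) ** 2 + Hxy ** 2)
    lam1 = (Hxx + Hyy) / 2.0 - tmp                      # the more negative eigenvalue
    strength = np.maximum(-lam1, 0.0)                   # bright ridges: strongly negative curvature
    direction = 0.5 * np.arctan2(2.0 * Hxy, Hxx - Hyy)  # orientation of the principal axis
    return strength, direction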
DAMQT: A package for the analysis of electron density in molecules
NASA Astrophysics Data System (ADS)
López, Rafael; Rico, Jaime Fernández; Ramírez, Guillermo; Ema, Ignacio; Zorrilla, David
2009-09-01
DAMQT is a package for the analysis of the electron density in molecules and the fast computation of the density, density deformations, electrostatic potential and field, and Hellmann-Feynman forces. The method is based on the partition of the electron density into atomic fragments by means of a least deformation criterion. Each atomic fragment of the density is expanded in regular spherical harmonics times radial factors, which are piecewise represented in terms of analytical functions. This representation is used for the fast evaluation of the electrostatic potential and field generated by the electron density and nuclei, as well as for the computation of the Hellmann-Feynman forces on the nuclei. An analysis of the atomic and molecular deformations of the density can also be carried out, yielding a picture that connects with several concepts of empirical structural chemistry. Program summaryProgram title: DAMQT1.0 Catalogue identifier: AEDL_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDL_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPLv3 No. of lines in distributed program, including test data, etc.: 278 356 No. of bytes in distributed program, including test data, etc.: 31 065 317 Distribution format: tar.gz Programming language: Fortran90 and C++ Computer: Any Operating system: Linux, Windows (XP, Vista) RAM: 190 Mbytes Classification: 16.1 External routines: Trolltech's Qt (4.3 or higher) (http://www.qtsoftware.com/products), OpenGL (1.1 or higher) (http://www.opengl.org/), GLUT 3.7 (http://www.opengl.org/resources/libraries/glut/). Nature of problem: Analysis of the molecular electron density and density deformations, including fast evaluation of the electrostatic potential, electric field and Hellmann-Feynman forces on nuclei. Solution method: The method of Deformed Atoms in Molecules, reported elsewhere [1], is used for partitioning the molecular electron density into atomic fragments, which are further expanded in spherical harmonics times radial factors. The partition is used for defining molecular density deformations and for the fast calculation of several properties associated with the density. Restrictions: The current version is limited to 120 atoms, 2000 contracted functions, and l=5 in basis functions. The density must come from an LCAO calculation (any level) with spherical (not Cartesian) Gaussian functions. Unusual features: The program contains an OPEN statement for binary (stream) files in file GOPENMOL.F90. This statement does not have a standard syntax in Fortran 90. Two possibilities are considered in conditional compilation: Intel's ifort and the Fortran 2003 standard. The latter is applied to compilers other than ifort (gfortran, for instance). Additional comments: The distribution file for this program is over 30 Mbytes and therefore is not delivered directly when a download or e-mail is requested. Instead, an HTML file giving details of how the program can be obtained is sent. Running time: Largely dependent on the system size and the module run (from fractions of a second to hours). References: [1] J. Fernández Rico, R. López, I. Ema, G. Ramírez, J. Mol. Struct. (Theochem) 727 (2005) 115.
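A schematic form of the partition and expansion described above may help fix ideas; the notation below is ours, and the precise radial factors and normalization conventions are those of Ref. [1], which is not reproduced here.

\[
\rho(\mathbf{r}) \;=\; \sum_{a} \rho_a(\mathbf{r}-\mathbf{R}_a),
\qquad
\rho_a(\mathbf{r}_a) \;\approx\; \sum_{l=0}^{l_{\max}} \sum_{m=-l}^{l} f_{lm}^{a}(r_a)\, z_{lm}(\theta_a,\varphi_a),
\]

where the z_{lm} are real spherical harmonics and the radial factors f_{lm}^{a} are represented piecewise by analytical functions; the electrostatic potential and field then follow from term-by-term evaluation of this expansion.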
A computer program for two-particle generalized coefficients of fractional parentage
NASA Astrophysics Data System (ADS)
Deveikis, A.; Juodagalvis, A.
2008-10-01
We present a FORTRAN90 program GCFP for the calculation of the generalized coefficients of fractional parentage (generalized CFPs or GCFP). The approach is based on the observation that the multi-shell CFPs can be expressed in terms of single-shell CFPs, while the latter can be readily calculated employing a simple enumeration scheme of antisymmetric A-particle states and an efficient method of construction of the idempotent matrix eigenvectors. The program provides fast calculation of GCFPs for a given particle number and produces results possessing numerical uncertainties below the desired tolerance. A single j-shell is defined by four quantum numbers, (e,l,j,t). A supplemental C++ program parGCFP allows calculations to be done in batches and/or in parallel. Program summaryProgram title: GCFP, parGCFP Catalogue identifier: AEBI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEBI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 17 199 No. of bytes in distributed program, including test data, etc.: 88 658 Distribution format: tar.gz Programming language: FORTRAN 77/90 (GCFP), C++ (parGCFP) Computer: Any computer with suitable compilers. The program GCFP requires a FORTRAN 77/90 compiler. The auxiliary program parGCFP requires a GNU-C++-compatible compiler, while its parallel version additionally requires MPI-1 standard libraries Operating system: Linux (Ubuntu, Scientific) (all programs), also checked on Windows XP (GCFP, serial version of parGCFP) RAM: The memory demand depends on the computation and output mode. If this mode is not 4, the program GCFP demands the following amounts of memory on a computer with a Linux operating system. It requires around 2 MB of RAM for the A=12 system at E⩽2. Computation of the A=50 particle system requires around 60 MB of RAM at E=0 and ~70 MB at E=2 (note, however, that the calculation of this system will take a very long time). If the computation and output mode is set to 4, the memory demands by GCFP are significantly larger. Calculation of GCFPs of the A=12 system at E=1 requires 145 MB. The program parGCFP requires an additional 2.5 and 4.5 MB of memory for the serial and parallel versions, respectively. Classification: 17.18 Nature of problem: The program GCFP generates a list of two-particle coefficients of fractional parentage for several j-shells with isospin. Solution method: The method is based on the observation that multishell coefficients of fractional parentage can be expressed in terms of single-shell CFPs [1]. The latter are calculated using the algorithm [2,3] for a spectral decomposition of an antisymmetrization operator matrix Y. The coefficients of fractional parentage are those eigenvectors of the antisymmetrization operator matrix Y that correspond to unit eigenvalues. A computer code for these coefficients is available [4]. The program GCFP offers computation of two-particle multishell coefficients of fractional parentage. The program parGCFP allows a batch calculation using one input file. Sets of GCFPs are independent and can be calculated in parallel. Restrictions: A<86 when E=0 (due to memory constraints); small numbers of particles allow significantly higher excitations, though the shell with j⩾11/2 cannot be completely filled (an implementation constraint).
Unusual features: Using the program GCFP it is possible to determine allowed particle configurations without the GCFP computation. The GCFPs can be calculated either for all particle configurations at once or for a specified particle configuration. The values of GCFPs can be printed out with a complete specification in either one file or with the parent and daughter configurations printed in separate files. The latter output mode requires additional time and RAM. It is possible to restrict the (J,T) values of the considered particle configurations. (Here J is the total angular momentum and T is the total isospin of the system.) The program parGCFP produces several result files, the number of which equals the number of particle configurations. To work correctly, the program GCFP needs to be compiled to read parameters from the standard input (the default setting). Running time: It depends on the size of the problem. The minimum time is required if the computation and output mode (CompMode) is not 4, but the resulting file is larger. A system with A=12 particles at E=0 (all 9411 GCFPs) took around 1 sec on a Pentium 4 2.8 GHz processor with 1 MB L2 cache. The program required about 14 min to calculate all 1.3×10 GCFPs of E=1. The time for all 5.5×10 GCFPs of E=2 was about 53 hours. For this number of particles, the calculation time of both E=0 and E=1 with CompMode = 1 and 4 is nearly the same, when no other processes are running. The case of E=2 could not be calculated with CompMode = 4, because the RAM was insufficient. In general, the latter CompMode requires a longer computation time, although the resulting files are smaller in size. The program parGCFP adds virtually no time overhead. Its parallel version speeds up the calculation. However, the results need to be collected from several files created for each configuration. References: [1] J. Levinsonas, Works of Lithuanian SSR Academy of Sciences 4 (1957) 17. [2] A. Deveikis, A. Bončkus, R. Kalinauskas, Lithuanian Phys. J. 41 (2001) 3. [3] A. Deveikis, R.K. Kalinauskas, B.R. Barrett, Ann. Phys. 296 (2002) 287. [4] A. Deveikis, Comput. Phys. Comm. 173 (2005) 186. (CPC Catalogue ID. ADWI_v1_0)
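The spectral-decomposition step can be illustrated in a few lines: single-shell CFPs are the eigenvectors of the symmetric, idempotent antisymmetrization-operator matrix Y that belong to unit eigenvalues. The sketch below uses a small randomly generated projector in place of the physical Y constructed by GCFP.

# Sketch of the eigenvector-selection step (illustrative; Y here is a made-up projector).
import numpy as np

def cfp_from_antisymmetrizer(Y, tol=1e-8):
    """Return the eigenvectors of Y with eigenvalue ~1 (as columns)."""
    eigval, eigvec = np.linalg.eigh(Y)          # Y is real symmetric
    keep = np.abs(eigval - 1.0) < tol
    return eigvec[:, keep]

# A projector built from an orthonormal set plays the role of an idempotent Y:
rng = np.random.default_rng(1)
q, _ = np.linalg.qr(rng.normal(size=(6, 2)))
Y = q @ q.T                                     # idempotent: Y @ Y equals Y
print(cfp_from_antisymmetrizer(Y).shape)        # -> (6, 2): two unit-eigenvalue vectors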
Automated generation of lattice QCD Feynman rules
NASA Astrophysics Data System (ADS)
Hart, A.; von Hippel, G. M.; Horgan, R. R.; Müller, E. H.
2009-12-01
The derivation of the Feynman rules for lattice perturbation theory from actions and operators is complicated, especially for highly improved actions such as HISQ. This task is, however, both important and particularly suitable for automation. We describe a suite of software to generate and evaluate Feynman rules for a wide range of lattice field theories with gluons and (relativistic and/or heavy) quarks. Our programs are capable of dealing with actions as complicated as (m)NRQCD and HISQ. Automated differentiation methods are also used to calculate the derivatives of Feynman diagrams. Program summaryProgram title: HiPPY, HPsrc Catalogue identifier: AEDX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPLv2 (see Additional comments below) No. of lines in distributed program, including test data, etc.: 513 426 No. of bytes in distributed program, including test data, etc.: 4 893 707 Distribution format: tar.gz Programming language: Python, Fortran95 Computer: HiPPy: Single-processor workstations. HPsrc: Single-processor workstations and MPI-enabled multi-processor systems Operating system: HiPPy: Any for which Python v2.5.x is available. HPsrc: Any for which a standards-compliant Fortran95 compiler is available Has the code been vectorised or parallelised?: Yes RAM: Problem specific, typically less than 1 GB for either code Classification: 4.4, 11.5 Nature of problem: Derivation and use of perturbative Feynman rules for complicated lattice QCD actions. Solution method: An automated expansion method implemented in Python (HiPPy) and code to use expansions to generate Feynman rules in Fortran95 (HPsrc). Restrictions: No general restrictions. Specific restrictions are discussed in the text. Additional comments: The HiPPy and HPsrc codes are released under the second version of the GNU General Public Licence (GPL v2). Therefore anyone is free to use or modify the code for their own calculations. As part of the licensing, we ask that any publications including results from the use of this code or of modifications of it cite Refs. [1,2] as well as this paper. Finally, we also ask that details of these publications, as well as of any bugs or required or useful improvements of this core code, be communicated to us. Running time: Very problem specific, depending on the complexity of the Feynman rules and the number of integration points. Typically between a few minutes and several weeks. The installation tests provided with the program code take only a few seconds to run. References: A. Hart, G.M. von Hippel, R.R. Horgan, L.C. Storoni, Automatically generating Feynman rules for improved lattice field theories, J. Comput. Phys. 209 (2005) 340-353, doi:10.1016/j.jcp.2005.03.010, arXiv:hep-lat/0411026. M. Lüscher, P. Weisz, Efficient Numerical Techniques for Perturbative Lattice Gauge Theory Computations, Nucl. Phys. B 266 (1986) 309, doi:10.1016/0550-3213(86)90094-5.
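The core of the Python stage is automated series expansion of the lattice action. As a schematic stand-in, the SymPy snippet below expands a single link variable U = exp(i g A) to a fixed order in the coupling; a commuting symbol A is used purely for illustration, whereas the real code handles colour matrices, Wilson paths and the momentum-space Feynman rules.

# Schematic illustration of automated series expansion (not the HiPPy implementation).
import sympy as sp

g, A = sp.symbols("g A")
U = sp.exp(sp.I * g * A)                         # stand-in for a lattice link variable
expansion = sp.series(U, g, 0, 4).removeO()      # 1 + i g A - g^2 A^2/2 - i g^3 A^3/6
print(sp.expand(expansion))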
Computer program for the computation of total sediment discharge by the modified Einstein procedure
Stevens, H.H.
1985-01-01
Two versions of a computer program to compute total sediment discharge by the modified Einstein procedure are presented. The FORTRAN 77 language version is for use on the PRIME computer, and the BASIC language version is for use on most microcomputers. The program contains built-in limitations and input-output options that closely follow the original modified Einstein procedure. Program documentation and listings of both versions of the program are included. (USGS)
NASA Astrophysics Data System (ADS)
Schimeczek, C.; Engel, D.; Wunner, G.
2012-07-01
Our previously published code for calculating energies and bound-bound transitions of medium-Z elements at neutron star magnetic field strengths [D. Engel, M. Klews, G. Wunner, Comput. Phys. Comm. 180 (2009) 302-311] was based on the adiabatic approximation. It assumes a complete decoupling of the (fast) gyration of the electrons under the action of the magnetic field and the (slow) bound motion along the field under the action of the Coulomb forces. For the single-particle orbitals this implied that each is a product of a Landau state and an (unknown) longitudinal wave function whose B-spline coefficients were determined self-consistently by solving the Hartree-Fock equations for the many-electron problem on a finite-element grid. In the present code we go beyond the adiabatic approximation by allowing the transverse part of each orbital to be a superposition of Landau states, while assuming that the longitudinal part can be approximated by the same wave function in each Landau level. Inserting this ansatz into the energy variational principle leads to a system of coupled equations in which the B-spline coefficients depend on the weights of the individual Landau states, and vice versa, and which therefore has to be solved in a doubly self-consistent manner. The extended ansatz takes into account the back-reaction of the Coulomb motion of the electrons along the field direction on their motion in the plane perpendicular to the field, an effect which cannot be captured by the adiabatic approximation. The new code allows for the inclusion of up to 8 Landau levels. This reduces the relative error of energy values as compared to the adiabatic approximation results by typically a factor of three (1/3 of the original error), and yields accurate results also in regions of lower neutron star magnetic field strengths where the adiabatic approximation fails. Further improvements in the code are a more sophisticated choice of the initial wave functions, which takes into account the shielding of the core potential for outer electrons by inner electrons, and an optimal finite-element decomposition of each individual longitudinal wave function. These measures largely enhance the convergence properties compared to the previous code, and lead to speed-ups of up to two orders of magnitude compared with the implementation of the Hartree-Fock-Roothaan method used by Engel and Wunner in [D. Engel, G. Wunner, Phys. Rev. A 78 (2008) 032515]. New version program summaryProgram title: HFFER II Catalogue identifier: AECC_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AECC_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 55 130 No. of bytes in distributed program, including test data, etc.: 293 700 Distribution format: tar.gz Programming language: Fortran 95 Computer: Cluster of 1-13 HP Compaq dc5750 Operating system: Linux Has the code been vectorized or parallelized?: Yes, parallelized using MPI directives. RAM: 1 GByte per node Classification: 2.1 External routines: MPI/GFortran, LAPACK, BLAS, FMlib (included in the package) Catalogue identifier of previous version: AECC_v1_0 Journal reference of previous version: Comput. Phys. Comm.
180 (2009) 302 Does the new version supersede the previous version?: Yes Nature of problem: Quantitative modelling of features observed in the X-ray spectra of isolated magnetic neutron stars is hampered by the lack of sufficiently large and accurate databases for atoms and ions up to the last fusion product, iron, at strong magnetic field strengths. Our code is intended to provide a powerful tool for calculating energies and oscillator strengths of medium-Z atoms and ions at neutron star magnetic field strengths with sufficient accuracy in a routine way to create such databases. Solution method: The Slater determinants of the atomic wave functions are constructed from single-particle orbitals ψ_i which are products of a wave function in the z direction (the direction of the magnetic field) and an expansion of the wave function perpendicular to the direction of the magnetic field in terms of Landau states, ψ_i(ρ,φ,z) = P_i(z) ∑_{n=0}^{N_L} t_{in} φ_n^i(ρ,φ). The t_{in} are expansion coefficients, and the expansion is cut off at some maximum Landau-level quantum number n = N_L. In the previous version of the code only the lowest Landau level was included (N_L = 0); in the new version N_L can take values up to 7. As in the previous version of the code, the longitudinal wave functions are expanded in terms of sixth-order B-splines on finite elements on the z axis, with a combination of equidistant and quadratically widening element borders. Both the B-spline expansion coefficients and the Landau weights t_{in} of all orbitals have to be determined in a doubly self-consistent way: For a given set of Landau weights t_{in}, the system of linear equations for the B-spline expansion coefficients, which is equivalent to the Hartree-Fock equations for the longitudinal wave functions, is solved numerically. In the second step, for frozen B-spline coefficients new Landau weights are determined by minimizing the total energy with respect to the Landau expansion coefficients. Both steps require solving non-linear eigenvalue problems of Roothaan type. The procedure is repeated until convergence of both the B-spline coefficients and the Landau weights is achieved. Reasons for new version: The former version of the code was restricted to the adiabatic approximation, which assumes the quantum dynamics of the electrons in the plane perpendicular to the magnetic field to be fixed in the lowest Landau level, n=0. This approximation is valid only if the magnetic field strengths are large compared to the reference magnetic field B_Z, which for a nuclear charge Z is B_Z = Z² · 4.70108×10^5 T. Summary of revisions: In the new version, the transverse parts of the orbitals are expanded in terms of Landau states up to n=7, and the expansion coefficients are determined, together with the longitudinal wave functions, in a doubly self-consistent way. Thus the back-reaction of the quantum dynamics along the magnetic field direction on the quantum dynamics in the plane perpendicular to it is taken into account. The new ansatz not only increases the accuracy of the results for energy values and transition strengths obtained so far, but also allows their calculation for magnetic field strengths down to B ≳ B_Z, where the adiabatic approximation fails. Restrictions: Intense magnetic field strengths are required, since the expansion of the transverse single-particle wave functions using 8 Landau levels will no longer produce accurate results if the scaled magnetic field strength parameter β_Z = B/B_Z becomes much smaller than unity.
Unusual features: A huge program speed-up is achieved by making use of pre-calculated binary files. These can be calculated with additional programs provided with this package. Running time: 1-30 min.
CalcHEP 3.4 for collider physics within and beyond the Standard Model
NASA Astrophysics Data System (ADS)
Belyaev, Alexander; Christensen, Neil D.; Pukhov, Alexander
2013-07-01
We present version 3.4 of the CalcHEP software package which is designed for effective evaluation and simulation of high energy physics collider processes at parton level. The main features of CalcHEP are the computation of Feynman diagrams, integration over multi-particle phase space and event simulation at parton level. The principal attractive key points are that it has: (a) an easy startup and usage even for those who are not familiar with CalcHEP and programming; (b) a friendly and convenient graphical user interface (GUI); (c) the option for the user to easily modify a model or introduce a new model by either using the graphical interface or by using an external package with the possibility of cross checking the results in different gauges; (d) a batch interface which allows the user to perform very complicated and tedious calculations connecting production and decay modes for processes with many particles in the final state. With this feature set, CalcHEP can efficiently perform calculations with a high level of automation from a theory in the form of a Lagrangian down to phenomenology in the form of cross sections, parton level event simulation and various kinematical distributions. In this paper we report on the new features of CalcHEP 3.4 which improve the power of our package to be an effective tool for the study of modern collider phenomenology. Program summaryProgram title: CalcHEP Catalogue identifier: AEOV_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOV_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 78535 No. of bytes in distributed program, including test data, etc.: 818061 Distribution format: tar.gz Programming language: C. Computer: PC, MAC, Unix Workstations. Operating system: Unix. RAM: Depends on process under study Classification: 4.4, 5. External routines: X11 Nature of problem: Implement new models of particle interactions. Generate Feynman diagrams for a physical process in any implemented theoretical model. Integrate phase space for Feynman diagrams to obtain cross sections or particle widths taking into account kinematical cuts. Simulate collisions at modern colliders and generate respective unweighted events. Mix events for different subprocesses and connect them with the decays of unstable particles. Solution method: Symbolic calculations; squared Feynman diagram approach; Vegas Monte Carlo algorithm. Restrictions: Up to 2→4 production (1→5 decay) processes are realistic on typical computers. Higher multiplicities are sometimes possible for specific 2→5 and 2→6 processes. Unusual features: Graphical user interface, symbolic algebra calculation of squared matrix element, parallelization on a PBS cluster. Running time: Depends strongly on the process. For a typical 2→2 process it takes seconds. For 2→3 processes the typical running time is of the order of minutes. For higher multiplicities it could take much longer.
Berent, Jarosław
2010-01-01
This paper presents the new DNAStat version 2.1 for processing genetic profile databases and biostatistical calculations. The popularization of DNA studies employed in the judicial system has led to the necessity of developing appropriate computer programs. Such programs must, above all, address two critical problems, i.e. the broadly understood data processing and data storage, and biostatistical calculations. Moreover, in cases of terrorist attacks and mass natural disasters, the ability to identify victims by searching for related individuals is very important. DNAStat version 2.1 is an adequate program for such purposes. DNAStat version 1.0 was launched in 2005. In 2006, the program was updated to versions 1.1 and 1.2. There were, however, slight differences between those versions and the original one. DNAStat version 2.0 was launched in 2007; the major improvement was the introduction of group calculation options with potential application to the personal identification of victims of mass disasters and terrorism. The latest version, 2.1, has an option for language selection (Polish or English), which will enhance the usage and application of the program in other countries as well.
Nadkarni, P M; Miller, P L
1991-01-01
A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.
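For readers unfamiliar with Linda, its tuple-space coordination (operations such as out and in) amounts to a master depositing work units that idle workers withdraw. The snippet below mimics that master/worker pattern with Python multiprocessing; the compare() scorer is a placeholder and is not the sequence-comparison kernel of the Hypercube program.

# Conceptual analogue of the Linda master/worker pattern (illustrative only).
from multiprocessing import Pool

def compare(pair):
    query, target = pair
    # Placeholder similarity score: shared characters at equal positions.
    return sum(a == b for a, b in zip(query, target))

if __name__ == "__main__":
    queries = ["ACGTACGT", "GGGTTTAA"]
    database = ["ACGTTCGT", "GGCTTTAA", "TTTTACGT"]
    tasks = [(q, t) for q in queries for t in database]
    with Pool(processes=4) as pool:              # workers pull tasks, much like Linda's in()
        scores = pool.map(compare, tasks)
    print(scores)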
NASA Astrophysics Data System (ADS)
Asinari, Pietro
2010-10-01
The homogeneous isotropic Boltzmann equation (HIBE) is a fundamental dynamic model for many applications in thermodynamics, econophysics and sociodynamics. Despite recent hardware improvements, the solution of the Boltzmann equation remains extremely challenging from the computational point of view, in particular by deterministic methods (free of stochastic noise). This work aims to improve a deterministic direct method recently proposed [V.V. Aristov, Kluwer Academic Publishers, 2001] for solving the HIBE with a generic collisional kernel and, in particular, for taking care of the late dynamics of the relaxation towards equilibrium. Essentially (a) the original problem is reformulated in terms of particle kinetic energy (exact particle number and energy conservation during microscopic collisions) and (b) the computation of the relaxation rates is improved by the DVM-like correction, where DVM stands for Discrete Velocity Model (ensuring that the macroscopic conservation laws are exactly satisfied). Both these corrections make it possible to derive very accurate reference solutions for this test case. Moreover, this work aims to distribute an open-source program (called HOMISBOLTZ), which can be redistributed and/or modified for dealing with different applications, under the terms of the GNU General Public License. The program has been purposely designed in order to be minimal, not only with regard to the reduced number of lines (fewer than 1000), but also with regard to the coding style (as simple as possible). Program summaryProgram title: HOMISBOLTZ Catalogue identifier: AEGN_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGN_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License No. of lines in distributed program, including test data, etc.: 23 340 No. of bytes in distributed program, including test data, etc.: 7 635 236 Distribution format: tar.gz Programming language: Tested with Matlab version ⩽6.5. However, in principle, any recent version of Matlab or Octave should work Computer: All supporting Matlab or Octave Operating system: All supporting Matlab or Octave RAM: 300 MBytes Classification: 23 Nature of problem: The problem consists of integrating the homogeneous Boltzmann equation for a generic collisional kernel in the case of isotropic symmetry, by a deterministic direct method. Difficulties arise from the multi-dimensionality of the collisional operator and from satisfying the conservation of particle number and energy (momentum is trivial for this test case) as accurately as possible, in order to preserve the late dynamics. Solution method: The solution is based on the method proposed by Aristov (2001) [1], but with two substantial improvements: (a) the original problem is reformulated in terms of particle kinetic energy (this allows one to ensure exact particle number and energy conservation during microscopic collisions) and (b) a DVM-like correction (where DVM stands for Discrete Velocity Model) is adopted for improving the relaxation rates (this allows one to satisfy exactly the conservation laws at the macroscopic level, which is particularly important for describing the late dynamics in the relaxation towards equilibrium). Both these corrections make it possible to derive very accurate reference solutions for this test case.
Restrictions: The nonlinear Boltzmann equation is extremely challenging from the computational point of view, in particular for deterministic methods, despite the increased computational power of recent hardware. In this work, only the homogeneous isotropic case is considered, to make possible the development of a minimal program (in a simple scripting language) and to allow the user to check the advantages of the proposed improvements over Aristov's (2001) method [1]. The initial conditions are assumed to be parameterized by a fixed analytical expression, but this can easily be modified. Running time: From minutes to hours (depending on the adopted discretization of the kinetic energy space). For example, on a 64-bit workstation with an Intel Core i7-820Q quad-core CPU at 1.73 GHz and 8 MBytes of RAM, the provided test run (with the corresponding binary data file storing the pre-computed relaxation rates) requires 154 seconds. References: V.V. Aristov, Direct Methods for Solving the Boltzmann Equation and Study of Nonequilibrium Flows, Kluwer Academic Publishers, 2001.
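For reference, the space-homogeneous Boltzmann equation that the program addresses, before the reduction to the isotropic kinetic-energy formulation described above, has the standard form (notation ours):

\[
\frac{\partial f}{\partial t}(\mathbf{v},t) \;=\; Q(f,f)(\mathbf{v},t)
\;=\; \int_{\mathbb{R}^3}\!\int_{S^2} B(|\mathbf{v}-\mathbf{v}_*|,\omega)
\bigl[f(\mathbf{v}')f(\mathbf{v}_*') - f(\mathbf{v})f(\mathbf{v}_*)\bigr]\, d\omega\, d\mathbf{v}_*,
\]

and the corrections described above enforce, at the discrete level, the collision invariants
\[
\int Q(f,f)\, d\mathbf{v} = 0, \qquad \int |\mathbf{v}|^2\, Q(f,f)\, d\mathbf{v} = 0 .
\]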
Automatic calculation of supersymmetric renormalization group equations and loop corrections
NASA Astrophysics Data System (ADS)
Staub, Florian
2011-03-01
SARAH is a Mathematica package for studying supersymmetric models. It calculates for a given model the masses, tadpole equations and all vertices at tree-level. This information can be used by SARAH to write model files for CalcHep/CompHep or FeynArts/FormCalc. In addition, the second version of SARAH can derive the renormalization group equations for the gauge couplings, parameters of the superpotential and soft-breaking parameters at one- and two-loop level. Furthermore, it calculates the one-loop self-energies and the one-loop corrections to the tadpoles. SARAH can handle all N=1 SUSY models whose gauge sector is a direct product of SU(N) and U(1) gauge groups. The particle content of the model can be an arbitrary number of chiral superfields transforming as any irreducible representation with respect to the gauge groups. To implement a new model, the user just has to define the gauge sector, the particle content, the superpotential and the field rotations to mass eigenstates. Program summaryProgram title: SARAH Catalogue identifier: AEIB_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 97 577 No. of bytes in distributed program, including test data, etc.: 2 009 769 Distribution format: tar.gz Programming language: Mathematica Computer: All systems that Mathematica is available for Operating system: All systems that Mathematica is available for Classification: 11.1, 11.6 Nature of problem: A supersymmetric model is usually characterized by the particle content, the gauge sector and the superpotential. It is a time-consuming process to obtain all necessary information for phenomenological studies from these basic ingredients. Solution method: SARAH calculates the complete Lagrangian for a given model whose gauge sector can be any direct product of SU(N) gauge groups. The chiral superfields can transform as any irreducible representation with respect to these gauge groups and it is possible to handle an arbitrary number of symmetry breakings or particle rotations. Also the gauge fixing terms can be specified. Using this information, SARAH derives the mass matrices and Feynman rules at tree-level and generates model files for CalcHep/CompHep and FeynArts/FormCalc. In addition, it can calculate the renormalization group equations at one- and two-loop level and the one-loop corrections to the one- and two-point functions. Unusual features: SARAH just needs the superpotential and gauge sector as input and not the complete Lagrangian. Therefore, the complete implementation of a new model can be done in a few minutes. Running time: Measured CPU time for the evaluation of the MSSM on an Intel Q8200 at 2.33 GHz. Calculating the complete Lagrangian: 12 seconds. Calculating all vertices: 75 seconds. Calculating the one- and two-loop RGEs: 50 seconds. Calculating the one-loop corrections: 7 seconds. Writing a FeynArts file: 1 second. Writing a CalcHep/CompHep file: 6 seconds. Writing the LaTeX output: 1 second.
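As an example of the kind of expression SARAH derives automatically, the generic one-loop renormalization group equation for a gauge coupling g_A of an N=1 supersymmetric model takes the standard textbook form (notation ours, not SARAH's output conventions):

\[
16\pi^2\,\frac{d g_A}{dt} \;=\; g_A^3\Bigl[\sum_i T_A(R_i) \;-\; 3\,C_2(G_A)\Bigr],
\qquad t=\ln\mu,
\]

where T_A(R_i) is the Dynkin index of chiral superfield i and C_2(G_A) the quadratic Casimir of the adjoint representation; SARAH evaluates these group-theory factors, together with the corresponding two-loop contributions, for the model as implemented.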
Virtualizing access to scientific applications with the Application Hosting Environment
NASA Astrophysics Data System (ADS)
Zasada, S. J.; Coveney, P. V.
2009-12-01
The growing power and number of high performance computing resources made available through computational grids present major opportunities as well as a number of challenges to the user. At issue is how these resources can be accessed and how their power can be effectively exploited. In this paper we first present our views on the usability of contemporary high-performance computational resources. We introduce the concept of grid application virtualization as a solution to some of the problems with grid-based HPC usability. We then describe a middleware tool that we have developed to realize the virtualization of grid applications, the Application Hosting Environment (AHE), and describe the features of the new release, AHE 2.0, which provides access to a common platform of federated computational grid resources in standard and non-standard ways. Finally, we describe a case study showing how AHE supports clinical use of whole brain blood flow modelling in a routine and automated fashion. Program summaryProgram title: Application Hosting Environment 2.0 Catalogue identifier: AEEJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Public Licence, Version 2 No. of lines in distributed program, including test data, etc.: not applicable No. of bytes in distributed program, including test data, etc.: 1 685 603 766 Distribution format: tar.gz Programming language: Perl (server), Java (Client) Computer: x86 Operating system: Linux (Server), Linux/Windows/MacOS (Client) RAM: 134 217 728 (server), 67 108 864 (client) bytes Classification: 6.5 External routines: VirtualBox (server), Java (client) Nature of problem: The middleware that makes grid computing possible has been found by many users to be too unwieldy, and presents an obstacle to use rather than providing assistance [1,2]. Such problems are compounded when one attempts to harness the power of a grid, or a federation of different grids, rather than just a single resource on the grid. Solution method: To address the above problem, we have developed AHE, a lightweight interface, designed to simplify the process of running scientific codes on a grid of HPC and local resources. AHE does this by introducing a layer of middleware between the user and the grid, which encapsulates much of the complexity associated with launching grid applications. Unusual features: The server is distributed as a VirtualBox virtual machine. VirtualBox ( http://www.virtualbox.org) must be downloaded and installed in order to run the AHE server virtual machine. Details of how to do this are given in the AHE 2.0 Quick Start Guide. Running time: Not applicable References:J. Chin, P.V. Coveney, Towards tractable toolkits for the grid: A plea for lightweight, useable middleware, NeSC Technical Report, 2004, http://nesc.ac.uk/technical_papers/UKeS-2004-01.pdf. P.V. Coveney, R.S. Saksena, S.J. Zasada, M. McKeown, S. Pickles, The Application Hosting Environment: Lightweight middleware for grid-based computational science, Computer Physics Communications 176 (2007) 406-418.
The program LOPT for least-squares optimization of energy levels
NASA Astrophysics Data System (ADS)
Kramida, A. E.
2011-02-01
The article describes a program that solves the least-squares optimization problem for finding the energy levels of a quantum-mechanical system based on a set of measured energy separations or wavelengths of transitions between those energy levels, as well as determining the Ritz wavelengths of transitions and their uncertainties. The energy levels are determined by solving the matrix equation of the problem, and the uncertainties of the Ritz wavenumbers are determined from the covariance matrix of the problem. Program summaryProgram title: LOPT Catalogue identifier: AEHM_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHM_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 19 254 No. of bytes in distributed program, including test data, etc.: 427 839 Distribution format: tar.gz Programming language: Perl v.5 Computer: PC, Mac, Unix workstations Operating system: MS Windows (XP, Vista, 7), Mac OS X, Linux, Unix (AIX) RAM: 3 Mwords or more Word size: 32 or 64 Classification: 2.2 Nature of problem: The least-squares energy-level optimization problem, i.e., finding a set of energy level values that best fits the given set of transition intervals. Solution method: The solution of the least-squares problem is found by solving the corresponding linear matrix equation, where the matrix is constructed using a new method with variable substitution. Restrictions: A practical limitation on the size of the problem N is imposed by the execution time, which scales as N and depends on the computer. Unusual features: Properly rounds the resulting data and formats the output in a format suitable for viewing with spreadsheet editing software. Estimates numerical errors resulting from the limited machine precision. Running time: 1 s for N=100, or 60 s for N=400 on a typical PC.
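A toy version of the optimization problem described above can be written as a weighted linear least-squares fit: each measured transition of wavenumber sigma_k and uncertainty u_k between an upper level i and a lower level j contributes the equation E_i - E_j = sigma_k with weight 1/u_k, and the ground level is fixed at zero. The sketch below is illustrative only (LOPT itself is written in Perl and additionally handles Ritz wavelengths, uncertainty propagation and rounding); the numbers in the example are invented.

# Toy weighted least-squares level optimization (illustrative, not the LOPT code).
import numpy as np

def optimize_levels(n_levels, transitions):
    """transitions: list of (upper, lower, wavenumber, uncertainty); level 0 is the ground level."""
    A = np.zeros((len(transitions), n_levels - 1))
    b = np.zeros(len(transitions))
    for k, (up, lo, sigma, unc) in enumerate(transitions):
        w = 1.0 / unc                      # weight each equation by 1/uncertainty
        if up > 0:
            A[k, up - 1] = +w
        if lo > 0:
            A[k, lo - 1] = -w
        b[k] = w * sigma
    levels, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.concatenate(([0.0], levels))  # ground level fixed at 0

# Two excited levels connected to the ground state and to each other (made-up data):
print(optimize_levels(3, [(1, 0, 10000.2, 0.1), (2, 0, 15000.5, 0.1), (2, 1, 5000.1, 0.1)]))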
NASA Technical Reports Server (NTRS)
Salas, M. D.; Kuehn, M. S.
1994-01-01
The original version of the program was incorporated into the program SRGULL (LEW-15093) for use on the National Aero-Space Plane project, where its duty was to model the forebody, inlet, and nozzle portions of the vehicle. However, real-gas chemistry effects in hypersonic flow fields limited the accuracy of that version, because it assumed perfect-gas properties. As a result, SEAGULL was modified according to a real-gas equilibrium-chemistry methodology. The program analyzes two-dimensional, hypersonic flows of real gases. The modified version of SEAGULL maintains as much of the original program as possible and retains the ability to execute the original perfect-gas version.
Computer programs for computing particle-size statistics of fluvial sediments
Stevens, H.H.; Hubbell, D.W.
1986-01-01
Two versions of computer programs for inputting data and computing particle-size statistics of fluvial sediments are presented. The FORTRAN 77 versions are for use on the Prime computer, and the BASIC versions are for use on microcomputers. The size-statistics programs compute Inman, Trask, and Folk statistical parameters from the phi values and sizes determined for 10 specified percent-finer values from the input size and percent-finer data. The programs also determine the percentages of gravel, sand, silt, and clay, and the Meyer-Peter effective diameter.  Documentation and listings for both versions of the programs are included.
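The graphic statistics involved can be illustrated with the widely quoted Inman (1952) and Folk and Ward (1957) formulas applied to phi percentiles; the percentile values below are invented, and the exact parameter definitions implemented in the USGS programs may differ in detail.

```python
# Graphic particle-size statistics from phi percentiles (illustrative sketch;
# Inman (1952) and Folk & Ward (1957) formulas, not the USGS source code).
phi = {5: -1.2, 16: -0.5, 25: 0.0, 50: 1.1, 75: 2.0, 84: 2.6, 95: 3.4}

# Inman parameters
inman_median  = phi[50]
inman_mean    = (phi[16] + phi[84]) / 2
inman_sorting = (phi[84] - phi[16]) / 2

# Folk & Ward inclusive graphic parameters
folk_mean     = (phi[16] + phi[50] + phi[84]) / 3
folk_sorting  = (phi[84] - phi[16]) / 4 + (phi[95] - phi[5]) / 6.6
folk_skewness = ((phi[16] + phi[84] - 2 * phi[50]) / (2 * (phi[84] - phi[16]))
                 + (phi[5] + phi[95] - 2 * phi[50]) / (2 * (phi[95] - phi[5])))
folk_kurtosis = (phi[95] - phi[5]) / (2.44 * (phi[75] - phi[25]))

print(inman_median, inman_mean, inman_sorting)
print(folk_mean, folk_sorting, folk_skewness, folk_kurtosis)
```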
XTALOPT: An open-source evolutionary algorithm for crystal structure prediction
NASA Astrophysics Data System (ADS)
Lonie, David C.; Zurek, Eva
2011-02-01
The implementation and testing of XTALOPT, an evolutionary algorithm for crystal structure prediction, is outlined. We present our new periodic displacement (ripple) operator which is ideally suited to extended systems. It is demonstrated that hybrid operators, which combine two pure operators, reduce the number of duplicate structures in the search. This allows for better exploration of the potential energy surface of the system in question, while simultaneously zooming in on the most promising regions. A continuous workflow, which makes better use of computational resources as compared to traditional generation based algorithms, is employed. Various parameters in XTALOPT are optimized using a novel benchmarking scheme. XTALOPT is available under the GNU Public License, has been interfaced with various codes commonly used to study extended systems, and has an easy to use, intuitive graphical interface. Program summaryProgram title:XTALOPT Catalogue identifier: AEGX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL v2.1 or later [1] No. of lines in distributed program, including test data, etc.: 36 849 No. of bytes in distributed program, including test data, etc.: 1 149 399 Distribution format: tar.gz Programming language: C++ Computer: PCs, workstations, or clusters Operating system: Linux Classification: 7.7 External routines: QT [2], OpenBabel [3], AVOGADRO [4], SPGLIB [8] and one of: VASP [5], PWSCF [6], GULP [7]. Nature of problem: Predicting the crystal structure of a system from its stoichiometry alone remains a grand challenge in computational materials science, chemistry, and physics. Solution method: Evolutionary algorithms are stochastic search techniques which use concepts from biological evolution in order to locate the global minimum on their potential energy surface. Our evolutionary algorithm, XTALOPT, is freely available to the scientific community for use and collaboration under the GNU Public License. Running time: User dependent. The program runs until stopped by the user.
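The evolutionary-search idea can be sketched with a toy example. The code below is not XTALOPT: the individuals are plain vectors and the "energy" is a cheap analytic function standing in for the external relaxation codes (VASP, PWSCF or GULP); the selection, crossover and mutation operators are deliberately generic.

```python
# Toy evolutionary search (not XTALOPT): elitist selection, arithmetic
# crossover and Gaussian mutation minimising a stand-in "energy" function.
import random

def energy(x):                      # stand-in for a DFT/force-field energy
    return sum((xi - 1.0) ** 2 for xi in x)

def mutate(x, sigma=0.2):
    return [xi + random.gauss(0.0, sigma) for xi in x]

def crossover(a, b):
    w = random.random()
    return [w * ai + (1 - w) * bi for ai, bi in zip(a, b)]

pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
for generation in range(100):
    pop.sort(key=energy)
    parents = pop[:10]                           # keep the fittest half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    pop = parents + children
print(min(energy(x) for x in pop))               # approaches 0
```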
GLISSANDO: GLauber Initial-State Simulation AND mOre…
NASA Astrophysics Data System (ADS)
Broniowski, Wojciech; Rybczyński, Maciej; Bożek, Piotr
2009-01-01
We present a Monte Carlo generator for a variety of Glauber-like models (the wounded-nucleon model, binary collisions model, mixed model, model with hot spots). These models describe the early stages of relativistic heavy-ion collisions, in particular the spatial distribution of the transverse energy deposition which ultimately leads to production of particles from the interaction region. The original geometric distribution of sources in the transverse plane can be superimposed with a statistical distribution simulating the dispersion in the generated transverse energy in each individual collision. The program generates inter alia the fixed-axes (standard) and variable-axes (participant) two-dimensional profiles of the density of sources in the transverse plane and their azimuthal Fourier components. These profiles can be used in further analysis of physical phenomena, such as the jet quenching, event-by-event hydrodynamics, or analysis of the elliptic flow and its fluctuations. Characteristics of the event (multiplicities, eccentricities, Fourier coefficients, etc.) are stored in a ROOT file and can be analyzed off-line. In particular, event-by-event studies can be carried out in a simple way. A number of ROOT scripts is provided for that purpose. Supplied variants of the code can also be used for the proton-nucleus and deuteron-nucleus collisions. Program summaryProgram title: GLISSANDO Catalogue identifier: AEBS_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEBS_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 4452 No. of bytes in distributed program, including test data, etc.: 34 766 Distribution format: tar.gz Programming language: C++ Computer: any computer with a C++ compiler and the ROOT environment [R. Brun, et al., Root Users Guide 5.16, CERN, 2007, http://root.cern.ch[1
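The wounded-nucleon picture at the heart of such generators can be condensed into a short sketch. This is not GLISSANDO: the Woods-Saxon parameters are typical Pb values, the inelastic cross section is an illustrative number, and nucleon-nucleon correlations, statistical overlays of the deposited energy, and the profile/Fourier analysis are all omitted.

```python
# Bare-bones Glauber Monte Carlo in the wounded-nucleon picture (a sketch,
# not GLISSANDO): nucleon positions are sampled from a Woods-Saxon profile,
# and a nucleon is "wounded" if any nucleon of the other nucleus lies within
# d = sqrt(sigma_inel / pi) in the transverse plane.
import numpy as np

A, R, a = 208, 6.62, 0.546           # Pb: mass number, radius, diffuseness [fm]
sigma_inel = 4.2                      # inelastic NN cross section [fm^2] (~42 mb)

def sample_nucleus(n):
    pos = []
    while len(pos) < n:               # rejection sampling of r^2 * Woods-Saxon
        r = np.random.uniform(0, 3 * R)
        if np.random.rand() < r**2 / (1 + np.exp((r - R) / a)) / (3 * R)**2:
            costh = np.random.uniform(-1, 1)
            phi = np.random.uniform(0, 2 * np.pi)
            sinth = np.sqrt(1 - costh**2)
            pos.append(r * np.array([sinth * np.cos(phi),
                                     sinth * np.sin(phi), costh]))
    return np.array(pos)

def wounded(b):
    na = sample_nucleus(A) + np.array([b / 2, 0, 0])   # shift by impact parameter
    nb = sample_nucleus(A) - np.array([b / 2, 0, 0])
    d2 = sigma_inel / np.pi
    dist2 = ((na[:, None, :2] - nb[None, :, :2]) ** 2).sum(axis=2)
    hit = dist2 < d2
    return hit.any(axis=1).sum() + hit.any(axis=0).sum()

print(wounded(b=7.0))                 # number of wounded nucleons in one event
```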
Moment distributions of clusters and molecules in the adiabatic rotor model
NASA Astrophysics Data System (ADS)
Ballentine, G. E.; Bertsch, G. F.; Onishi, N.; Yabana, K.
2008-01-01
We present a Fortran program to compute the distribution of dipole moments of free particles for use in analyzing molecular beams experiments that measure moments by deflection in an inhomogeneous field. The theory is the same for magnetic and electric dipole moments, and is based on a thermal ensemble of classical particles that are free to rotate and that have moment vectors aligned along a principal axis of rotation. The theory has two parameters, the ratio of the magnetic (or electric) dipole energy to the thermal energy, and the ratio of moments of inertia of the rotor. Program summaryProgram title:AdiabaticRotor Catalogue identifier:ADZO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZO_v1_0.html Program obtainable from:CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions:Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.:479 No. of bytes in distributed program, including test data, etc.:4853 Distribution format:tar.gz Programming language:Fortran 90 Computer:Pentium-IV, Macintosh Power PC G4 Operating system:Linux, Mac OS X RAM:600 Kbytes Word size:64 bits Classification:2.3 Nature of problem:The system considered is a thermal ensemble of rotors having a magnetic or electric moment aligned along one of the principal axes. The ensemble is placed in an external field which is turned on adiabatically. The problem is to find the distribution of moments in the presence of the external field. Solution method:There are three adiabatic invariants. The only nontrivial one is the action associated with the polar angle of the rotor axis with respect to external field. It is found by Newton's method. Running time:3 min on a 3 GHz Pentium IV processor.
Strongdeco: Expansion of analytical, strongly correlated quantum states into a many-body basis
NASA Astrophysics Data System (ADS)
Juliá-Díaz, Bruno; Graß, Tobias
2012-03-01
We provide a Mathematica code for decomposing strongly correlated quantum states described by a first-quantized, analytical wave function into many-body Fock states. Within them, the single-particle occupations refer to the subset of Fock-Darwin functions with no nodes. Such states, commonly appearing in two-dimensional systems subjected to gauge fields, were first discussed in the context of quantum Hall physics and are nowadays very relevant in the field of ultracold quantum gases. As important examples, we explicitly apply our decomposition scheme to the prominent Laughlin and Pfaffian states. This allows for easily calculating the overlap between arbitrary states with these highly correlated test states, and thus provides a useful tool to classify correlated quantum systems. Furthermore, we can directly read off the angular momentum distribution of a state from its decomposition. Finally we make use of our code to calculate the normalization factors for Laughlin's famous quasi-particle/quasi-hole excitations, from which we gain insight into the intriguing fractional behavior of these excitations. Program summaryProgram title: Strongdeco Catalogue identifier: AELA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELA_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5475 No. of bytes in distributed program, including test data, etc.: 31 071 Distribution format: tar.gz Programming language: Mathematica Computer: Any computer on which Mathematica can be installed Operating system: Linux, Windows, Mac Classification: 2.9 Nature of problem: Analysis of strongly correlated quantum states. Solution method: The program makes use of the tools developed in Mathematica to deal with multivariate polynomials to decompose analytical strongly correlated states of bosons and fermions into a standard many-body basis. Operations with polynomials, determinants and permanents are the basic tools. Running time: The distributed notebook takes a couple of minutes to run.
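The basic polynomial manipulation behind such a decomposition can be illustrated for the smallest case, the two-boson Laughlin-like polynomial. The sketch below (SymPy, not the Strongdeco notebook) merely expands the polynomial and groups the monomial coefficients; single-particle normalizations, Gaussian factors and Fock-state symmetrization weights are deliberately omitted.

```python
# Toy sketch of the polynomial-expansion step (not Strongdeco): expand the
# two-boson Laughlin-like polynomial (z1 - z2)**2 and collect the coefficient
# of each monomial z1**a * z2**b. Normalizations are omitted.
import sympy as sp

z1, z2 = sp.symbols("z1 z2")
psi = sp.expand((z1 - z2) ** 2)          # z1**2 - 2*z1*z2 + z2**2

coeffs = {}
for term in psi.as_ordered_terms():
    c, monom = term.as_coeff_Mul()
    a = sp.degree(monom, z1)
    b = sp.degree(monom, z2)
    coeffs[(a, b)] = c

print(coeffs)                             # {(2, 0): 1, (1, 1): -2, (0, 2): 1}
```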
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-01
... Programs (NCPDP) Prescriber/ Pharmacist Interface SCRIPT standard, Implementation Guide, Version 10... Prescriber/Pharmacist Interface SCRIPT standard, Version 8, Release 1 and its equivalent NCPDP Prescriber/Pharmacist Interface SCRIPT Implementation Guide, Version 8, Release 1 (hereinafter referred to as the...
NASA Astrophysics Data System (ADS)
Müller, Thomas
2011-06-01
The new version of the Motion4D-library now also includes the integration of a Sachs basis and the Jacobi equation to determine gravitational lensing of pointlike sources for arbitrary spacetimes. New version program summary Program title: Motion4D-library Catalogue identifier: AEEX_v3_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEX_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 219 441 No. of bytes in distributed program, including test data, etc.: 6 968 223 Distribution format: tar.gz Programming language: C++ Computer: All platforms with a C++ compiler Operating system: Linux, Windows RAM: 61 Mbytes Classification: 1.5 External routines: GNU Scientific Library (GSL) (http://www.gnu.org/software/gsl/) Catalogue identifier of previous version: AEEX_v2_0 Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 703 Does the new version supersede the previous version?: Yes Nature of problem: Solve the geodesic equation, and parallel and Fermi-Walker transport, in four-dimensional Lorentzian spacetimes. Determine gravitational lensing by integration of the Jacobi equation and parallel transport of the Sachs basis. Solution method: Integration of ordinary differential equations. Reasons for new version: The main novelty of the current version is the extension to integrate the Jacobi equation and the parallel transport of the Sachs basis along null geodesics. In combination, the change of the cross section of a light bundle, and thus the gravitational lensing effect of a spacetime, can be determined. Furthermore, we have implemented several new metrics. Summary of revisions: The main novelty of the current version is the integration of the Jacobi equation and the parallel transport of the Sachs basis along null geodesics. The corresponding set of equations reads (1) $\frac{d^2 x^\mu}{d\lambda^2} = -\Gamma^\mu_{\rho\sigma}\frac{dx^\rho}{d\lambda}\frac{dx^\sigma}{d\lambda}$, (2) $\frac{ds^\mu_{1,2}}{d\lambda} = -\Gamma^\mu_{\rho\sigma}\frac{dx^\rho}{d\lambda}\,s^\sigma_{1,2}$, (3) $\frac{d^2 Y^\mu_{1,2}}{d\lambda^2} = -2\Gamma^\mu_{\rho\sigma}\frac{dx^\rho}{d\lambda}\frac{dY^\sigma_{1,2}}{d\lambda} - \Gamma^\mu_{\rho\sigma,\nu}\frac{dx^\rho}{d\lambda}\frac{dx^\sigma}{d\lambda}Y^\nu_{1,2}$, where (1) is the geodesic equation, (2) represents the parallel transport of the two Sachs basis vectors $s_{1,2}$, and (3) is the Jacobi equation for the two Jacobi fields $Y_{1,2}$. The initial directions of the Sachs basis vectors $s_{1,2} = s^\mu_{1,2}\partial_\mu$ are defined perpendicular to the initial direction of the light ray (see Fig. 1 of the original article). A congruence of null geodesics with central null geodesic $\gamma$, which starts at the observer O with an infinitesimal circular cross section, is defined by the above-mentioned two Jacobi fields with initial conditions $Y^\mu_{1,2}|_{\lambda=0}=0$ and $(dY^\mu_{1,2}/d\lambda)|_{\lambda=0}=s^\mu_{1,2}$. The cross section of this congruence along $\gamma$ is described by the Jacobian $J_{ij}(\lambda) = g_{\mu\nu} Y^\mu_i s^\nu_j$. However, to determine the gravitational lensing of a pointlike source S that is connected to the observer via $\gamma$, we need the reverse Jacobian $J_S$. Fortunately, the reverse Jacobian is just the negative transpose of the original Jacobian $J_O$, $J := J_S = -(J_O)^T$. The Jacobian $J$ transforms the circular shape of the congruence into an ellipse whose shape parameters ($M_{1,2}$: major/minor axes, $\psi$: angle of the major axis, $\epsilon$: ellipticity) read $M_{1,2} = \sqrt{2\alpha\sin\zeta_{1,2}\cos\zeta_{1,2} - \beta\sin^2\zeta_{1,2} + J_{11}^2 + J_{21}^2}$, $\psi = \arctan2(J_{21}\cos\zeta_1 + J_{22}\sin\zeta_1,\; J_{11}\cos\zeta_1 + J_{12}\sin\zeta_1)$, $\epsilon = \|M_1 - M_2\| / \|M_1 + M_2\|$, with $\zeta_1 = \tfrac{1}{2}\arctan\tfrac{2\alpha}{\beta}$, $\zeta_2 = \zeta_1 + \tfrac{\pi}{2}$, and the parameters $\alpha = J_{11}J_{12} + J_{21}J_{22}$, $\beta = J_{11}^2 - J_{12}^2 + J_{21}^2 - J_{22}^2$. The magnification factor is given by $\mu = \lambda^2/(M_1 M_2)$. These shape parameters can be easily visualized in the new version of the GeodesicViewer, see Ref. [1].
A detailed discussion of gravitational lensing can be found, for example, in Schneider et al. [2]. In the following, a list of newly implemented metrics is given: BertottiKasner: see Rindler [3]. BesselGravWaveCart: gravitational Bessel wave from Kramer [4]. DeSitterUniv, DeSitterUnivConf: de Sitter universe in Cartesian and conformal coordinates. Ernst: black hole in a magnetic universe by Ernst [5]. ExtremeReissnerNordstromDihole: see Chandrasekhar [6]. HalilsoyWave: see Ref. [7]. JaNeWi: Janis-Newman-Winicour metric, see Ref. [8]. MinkowskiConformal: Minkowski metric in conformally rescaled coordinates. PTD_AI, PTD_AII, PTD_AIII, PTD_BI, PTD_BII, PTD_BIII, PTD_C: Petrov type D Levi-Civita spacetimes, see Ref. [7]. PainleveGullstrand: Schwarzschild metric in Painlevé-Gullstrand coordinates, see Ref. [9]. PlaneGravWave: plane gravitational wave, see Ref. [10]. SchwarzschildIsotropic: Schwarzschild metric in isotropic coordinates, see Ref. [11]. SchwarzschildTortoise: Schwarzschild metric in tortoise coordinates, see Ref. [11]. Sultana-Dyer: a black hole in the Einstein-de Sitter universe by Sultana and Dyer [12]. TaubNUT: see Ref. [13]. The Christoffel symbols and the natural local tetrads of these new metrics are given in the Catalogue of Spacetimes, Ref. [14]. To study the behavior of geodesics, it is often useful to determine an effective potential as in classical mechanics. For several metrics, we followed the Euler-Lagrange approach as described by Rindler [10] and implemented an effective potential for a specific situation. As an example, consider the Lagrangian $\mathcal{L} = -\alpha\dot{t}^2 + \alpha^{-1}\dot{r}^2 + r^2\dot{\varphi}^2$ for timelike geodesics in the $\vartheta = \pi/2$ hypersurface of the Schwarzschild spacetime, with $\alpha = 1 - 2m/r$. The Euler-Lagrange equations lead to the energy balance equation $\dot{r}^2 + V(r) = k^2$ with the effective potential $V(r) = (r-2m)(r^2+h^2)/r^3$ and the constants of motion $k = \alpha\dot{t}$ and $h = r^2\dot{\varphi}$. The constants of motion for a timelike geodesic that starts at $(r = 10m, \varphi = 0)$ with initial direction $\xi = \pi/4$ with respect to the black hole direction and with initial velocity $\beta = 0.7$ read $k \approx 1.252$ and $h \approx 6.931$. Then, from the energy balance equation, we immediately obtain the radius of closest approach $r \approx 5.927$. Besides a standard Runge-Kutta fourth-order integrator and the integrators of the GNU Scientific Library (GSL), we also implemented a standard Bulirsch-Stoer integrator. Running time: The test runs provided with the distribution require only a few seconds to run. References: T. Müller, New version announcement of the GeodesicViewer, http://cpc.cs.qub.ac.uk/summaries/AEFP_v2_0.html. P. Schneider, J. Ehlers, E.E. Falco, Gravitational Lenses, Springer, 1992. W. Rindler, Phys. Lett. A 245 (1998) 363. D. Kramer, Ann. Phys. 9 (2000) 331. F.J. Ernst, J. Math. Phys. 17 (1976) 54. S. Chandrasekhar, Proc. R. Soc. Lond. A 421 (1989) 227. H. Stephani, D. Kramer, M. MacCallum, C. Hoenselaers, E. Herlt, Exact Solutions of the Einstein Field Equations, Cambridge University Press, 2009. A.I. Janis, E.T. Newman, J. Winicour, Phys. Rev. Lett. 20 (1968) 878. K. Martel, E. Poisson, Am. J. Phys. 69 (2001) 476. W. Rindler, Relativity: Special, General, and Cosmological, Oxford University Press, Oxford, 2007. C.W. Misner, K.S. Thorne, J.A. Wheeler, Gravitation, W.H. Freeman, 1973. J. Sultana, C.C. Dyer, Gen. Relativ. Gravit. 37 (2005) 1349. D. Bini, C. Cherubini, R.T. Jantzen, Class. Quantum Grav. 19 (2002) 5481. T. Müller, F. Grave, arXiv:0904.4184 [gr-qc].
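The quoted Schwarzschild numbers (k ≈ 1.252, h ≈ 6.931, closest approach r ≈ 5.927) can be reproduced with a few lines of Python from the formulas above, assuming the initial speed β is the one measured by a local static observer; this is a minimal numerical check in units G = c = m = 1, not part of the Motion4D library.

```python
# Numerical check of the Schwarzschild example above (units G = c = m = 1,
# initial speed beta measured by a local static observer at r0).
import numpy as np
from scipy.optimize import brentq

r0, xi, beta = 10.0, np.pi / 4, 0.7
alpha = 1.0 - 2.0 / r0
gamma = 1.0 / np.sqrt(1.0 - beta**2)

k = gamma * np.sqrt(alpha)               # k = alpha * dt/dtau   -> 1.252
h = r0 * gamma * beta * np.sin(xi)       # h = r^2 * dphi/dtau   -> 6.931

V = lambda r: (r - 2.0) * (r**2 + h**2) / r**3
r_min = brentq(lambda r: k**2 - V(r), 3.0, r0)   # radius of closest approach
print(round(k, 3), round(h, 3), round(r_min, 3))  # 1.252 6.931 5.927
```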
NASA Technical Reports Server (NTRS)
Lamar, J. E.; Herbert, H. E.
1982-01-01
The latest production version, MARK IV, of the NASA-Langley vortex lattice computer program is summarized. All viable subcritical aerodynamic features of previous versions were retained. This version extends the previously documented program capabilities to four planforms, 400 panels, and enables the user to obtain vortex-flow aerodynamics on cambered planforms, flowfield properties off the configuration in attached flow, and planform longitudinal load distributions.
Scilab software package for the study of dynamical systems
NASA Astrophysics Data System (ADS)
Bordeianu, C. C.; Beşliu, C.; Jipa, Al.; Felea, D.; Grossu, I. V.
2008-05-01
This work presents a new software package for the study of chaotic flows and maps. The codes were written using Scilab, a software package for numerical computations providing a powerful open computing environment for engineering and scientific applications. It was found that Scilab provides various functions for ordinary differential equation solving, Fast Fourier Transform, autocorrelation, and excellent 2D and 3D graphical capabilities. The chaotic behaviors of the nonlinear dynamics systems were analyzed using phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents and Kolmogorov-Sinai entropy. Various well known examples are implemented, with the capability of the users inserting their own ODE. Program summaryProgram title: Chaos Catalogue identifier: AEAP_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEAP_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 885 No. of bytes in distributed program, including test data, etc.: 5925 Distribution format: tar.gz Programming language: Scilab 3.1.1 Computer: PC-compatible running Scilab on MS Windows or Linux Operating system: Windows XP, Linux RAM: below 100 Megabytes Classification: 6.2 Nature of problem: Any physical model containing linear or nonlinear ordinary differential equations (ODE). Solution method: Numerical solving of ordinary differential equations. The chaotic behavior of the nonlinear dynamical system is analyzed using Poincaré sections, phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents and Kolmogorov-Sinai entropies. Restrictions: The package routines are normally able to handle ODE systems of high orders (up to order twelve and possibly higher), depending on the nature of the problem. Running time: 10 to 20 seconds for problems that do not involve Lyapunov exponents calculation; 60 to 1000 seconds for problems that involve high orders ODE and Lyapunov exponents calculation.
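One of the diagnostics listed above, the largest Lyapunov exponent, is easy to illustrate for a one-dimensional map. The sketch below is generic Python, not the Scilab package, and uses the logistic map purely as an example.

```python
# Largest Lyapunov exponent of the logistic map x -> r x (1 - x), a standard
# chaos diagnostic (generic sketch, not the Scilab code): lambda = <ln|f'(x)|>.
# For r = 4 the estimate converges to ln 2 ~ 0.693 (chaotic regime).
import math

r, x, n_transient, n_iter = 4.0, 0.3, 1000, 100000
for _ in range(n_transient):          # discard the transient
    x = r * x * (1 - x)

s = 0.0
for _ in range(n_iter):
    x = r * x * (1 - x)
    s += math.log(abs(r * (1 - 2 * x)))   # ln|f'(x)| accumulated along the orbit
print(s / n_iter)
```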
Construction of SO(5)⊃SO(3) spherical harmonics and Clebsch-Gordan coefficients
NASA Astrophysics Data System (ADS)
Caprio, M. A.; Rowe, D. J.; Welsh, T. A.
2009-07-01
The SO(5)⊃SO(3) spherical harmonics form a natural basis for expansion of nuclear collective model angular wave functions. They underlie the recently-proposed algebraic method for diagonalization of the nuclear collective model Hamiltonian in an SU(1,1)×SO(5) basis. We present a computer code for explicit construction of the SO(5)⊃SO(3) spherical harmonics and use them to compute the Clebsch-Gordan coefficients needed for collective model calculations in an SO(3)-coupled basis. With these Clebsch-Gordan coefficients it becomes possible to compute the matrix elements of collective model observables by purely algebraic methods. Program summaryProgram title: GammaHarmonic Catalogue identifier: AECY_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AECY_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 346 421 No. of bytes in distributed program, including test data, etc.: 16 037 234 Distribution format: tar.gz Programming language: Mathematica 6 Computer: Any which supports Mathematica Operating system: Any which supports Mathematica; tested under Microsoft Windows XP and Linux Classification: 4.2 Nature of problem: Explicit construction of SO(5) ⊃ SO(3) spherical harmonics on S. Evaluation of SO(3)-reduced matrix elements and SO(5) ⊃ SO(3) Clebsch-Gordan coefficients (isoscalar factors). Solution method: Construction of SO(5) ⊃ SO(3) spherical harmonics by orthonormalization, obtained from a generating set of functions, according to the method of Rowe, Turner, and Repka [1]. Matrix elements and Clebsch-Gordan coefficients follow by construction and integration of SO(3) scalar products. Running time: Depends strongly on the maximum SO(5) and SO(3) representation labels involved. A few minutes for the calculation in the Mathematica notebook. References: [1] D.J. Rowe, P.S. Turner, J. Repka, J. Math. Phys. 45 (2004) 2761.
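The orthonormalization step of the solution method can be illustrated generically: given the overlap (Gram) matrix of a linearly independent generating set, a Cholesky factorization (equivalent to Gram-Schmidt) yields the coefficients of an orthonormal set. The sketch below uses an invented 3x3 overlap matrix; the actual code builds the overlaps from SO(5) ⊃ SO(3) generating functions by SO(3)-scalar integration.

```python
# Generic sketch of orthonormalization from an overlap matrix (not the
# GammaHarmonic code): S[i, j] = <f_i | f_j> for a generating set {f_i};
# the columns of C give orthonormal combinations g_j = sum_i C[i, j] f_i.
import numpy as np

S = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.4],
              [0.2, 0.4, 1.0]])          # illustrative overlaps

L = np.linalg.cholesky(S)                # S = L L^T  (Gram-Schmidt in matrix form)
C = np.linalg.inv(L).T                   # then C^T S C = identity
print(np.allclose(C.T @ S @ C, np.eye(3)))   # True
```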
NASA Astrophysics Data System (ADS)
Brzuszek, Marcin; Daniluk, Andrzej
2006-11-01
Writing a concurrent program can be more difficult than writing a sequential one. The programmer needs to think about synchronisation, race conditions and shared variables. Transactions help reduce the inconvenience of using threads. A transaction is an abstraction which allows programmers to group a sequence of actions on the program into a logical, higher-level computation unit. This paper presents multithreaded versions of the GROWTH program, which allow the calculation of the layer coverages during the growth of thin epitaxial films and of the corresponding RHEED intensities according to the kinematical approximation. The presented programs also contain graphical user interfaces, which enable program data to be displayed at run-time. New version program summary Titles of programs: GROWTHGr, GROWTH06 Catalogue identifier: ADVL_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v2_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Catalogue identifier of previous version: ADVL Does the new version supersede the original program: No Computer for which the new version is designed and others on which it has been tested: Pentium-based PC Operating systems or monitors under which the new version has been tested: Windows 9x, XP, NT Programming language used: Object Pascal Memory required to execute with typical data: More than 1 MB Number of bits in a word: 64 bits Number of processors used: 1 No. of lines in distributed program, including test data, etc.: 20 931 Number of bytes in distributed program, including test data, etc.: 1 311 268 Distribution format: tar.gz Nature of physical problem: The programs compute the RHEED intensities during the growth of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The computations are based on the use of kinematical diffraction theory [P.I. Cohen, G.S. Petrich, P.R. Pukite, G.J. Whaley, A.S. Arrott, Surf. Sci. 216 (1989) 222. [1
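The concurrency pattern described above (a computation thread updating shared data that a display thread reads, with the updates grouped into transaction-like units) can be sketched generically. The example below is illustrative Python, not the Object Pascal programs; the layer-coverage "simulation" is a placeholder.

```python
# Generic sketch of the worker/display threading pattern described above
# (illustrative Python, not GROWTHGr/GROWTH06): a worker thread advances a
# placeholder growth simulation while the display loop reads the shared
# coverages; a lock groups each update into an atomic "transaction".
import threading, time, random

lock = threading.Lock()
coverages = [0.0] * 5                 # layer coverages, shared state
done = threading.Event()

def simulate():
    for step in range(50):
        with lock:                    # update the shared state atomically
            layer = random.randrange(len(coverages))
            coverages[layer] = min(1.0, coverages[layer] + 0.05)
        time.sleep(0.01)
    done.set()

def display():
    while not done.is_set():
        with lock:
            snapshot = list(coverages)
        print("coverages:", ["%.2f" % c for c in snapshot])
        time.sleep(0.05)

threading.Thread(target=simulate).start()
display()
```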
Detection of faults and software reliability analysis
NASA Technical Reports Server (NTRS)
Knight, John C.
1987-01-01
Multi-version or N-version programming is proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. These versions are executed in parallel in the application environment; each receives identical inputs and each produces its version of the required outputs. The outputs are collected by a voter and, in principle, they should all be the same. In practice there may be some disagreement. If this occurs, the results of the majority are taken to be the correct output, and that is the output used by the system. A total of 27 programs were produced. Each of these programs was then subjected to one million randomly-generated test cases. The experiment yielded a number of programs containing faults that are useful for general studies of software reliability as well as studies of N-version programming. Fault tolerance through data diversity and analytic models of comparison testing are discussed.
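The voting step described above is simple to sketch. The code below is a generic majority voter, not the experiment's harness; the three "versions" are placeholder functions, one of them deliberately faulty.

```python
# Minimal majority voter for N-version programming (a sketch; the experiment
# above used 27 independently written versions and a full test harness).
from collections import Counter

def vote(outputs):
    """Return the majority output, or None if there is no strict majority."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) / 2 else None

# Three hypothetical versions of the same routine; the last one is faulty.
versions = [lambda x: x * x, lambda x: x * x, lambda x: x * x + 1]
result = vote([v(7) for v in versions])
print(result)    # 49 -- the faulty version is outvoted
```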
Trace contaminant control simulation computer program, version 8.1
NASA Technical Reports Server (NTRS)
Perry, J. L.
1994-01-01
The Trace Contaminant Control Simulation computer program is a tool for assessing the performance of various process technologies for removing trace chemical contamination from a spacecraft cabin atmosphere. Included in the simulation are chemical and physical adsorption by activated charcoal, chemical adsorption by lithium hydroxide, absorption by humidity condensate, and low- and high-temperature catalytic oxidation. Means are provided for simulating regenerable as well as nonregenerable systems. The program provides an overall mass balance of chemical contaminants in a spacecraft cabin given specified generation rates. Removal rates are based on device flow rates specified by the user and calculated removal efficiencies based on cabin concentration and removal technology experimental data. Versions 1.0 through 8.0 are documented in NASA TM-108409. TM-108409 also contains a source file listing for version 8.0. Changes to version 8.0 are documented in this technical memorandum and a source file listing for the modified version, version 8.1, is provided. Detailed descriptions for the computer program subprograms are extracted from TM-108409 and modified as necessary to reflect version 8.1. Version 8.1 supersedes version 8.0. Information on a separate user's guide is available from the author.
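The overall mass balance the program solves can be illustrated, in drastically simplified form, for a single contaminant in a well-mixed cabin with one removal device. The parameter values below are invented, and the real program tracks many contaminants and several removal technologies with experimentally based efficiencies.

```python
# Highly simplified single-contaminant cabin mass balance (illustrative
# numbers, not flight data):  V dC/dt = G - eta * Q * C
import numpy as np
from scipy.integrate import solve_ivp

V   = 100.0      # cabin free volume, m^3
G   = 2.0e-3     # contaminant generation rate, mg/h
Q   = 15.0       # removal-device volumetric flow rate, m^3/h
eta = 0.85       # single-pass removal efficiency

rhs = lambda t, C: (G - eta * Q * C) / V
sol = solve_ivp(rhs, (0.0, 200.0), [0.0], max_step=1.0)

print("steady-state concentration ~", G / (eta * Q), "mg/m^3")
print("concentration after 200 h  ~", sol.y[0, -1], "mg/m^3")
```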
USSAERO computer program development, versions B and C
NASA Technical Reports Server (NTRS)
Woodward, F. A.
1980-01-01
Versions B and C of the unified subsonic and supersonic aerodynamic analysis program, USSAERO, are described. Version B incorporates a new symmetrical singularity method to provide improved surface pressure distributions on wings in subsonic flow. Version C extends the range of application of the program to include the analysis of multiple engine nacelles or finned external stores. In addition, nonlinear compressibility effects in high subsonic and supersonic flows are approximated using a correction based on the local Mach number at panel control points. Several examples are presented comparing the results of these programs with other panel methods and experimental data.
COSMIC monthly progress report
NASA Technical Reports Server (NTRS)
1993-01-01
Activities of the Computer Software Management and Information Center (COSMIC) are summarized for the month of August, 1993. Tables showing the current inventory of programs available from COSMIC are presented and program processing and evaluation activities are discussed. Ten articles were prepared for publication in the NASA Tech Brief Journal. These articles (included in this report) describe the following software items: (1) MOM3D - A Method of Moments Code for Electromagnetic Scattering (UNIX Version); (2) EM-Animate - Computer Program for Displaying and Animating the Steady-State Time-Harmonic Electromagnetic Near Field and Surface-Current Solutions; (3) MOM3D - A Method of Moments Code for Electromagnetic Scattering (IBM PC Version); (4) M414 - MIL-STD-414 Variable Sampling Procedures Computer Program; (5) MEDOF - Minimum Euclidean Distance Optimal Filter; (6) CLIPS 6.0 - C Language Integrated Production System, Version 6.0 (Macintosh Version); (7) CLIPS 6.0 - C Language Integrated Production System, Version 6.0 (IBM PC Version); (8) CLIPS 6.0 - C Language Integrated Production System, Version 6.0 (UNIX Version); (9) CLIPS 6.0 - C Language Integrated Production System, Version 6.0 (DEC VAX VMS Version); and (10) TFSSRA - Thick Frequency Selective Surface with Rectangular Apertures. Activities in the areas of marketing, customer service, benefits identification, maintenance and support, and dissemination are also described along with a budget summary.
NASA Astrophysics Data System (ADS)
van Dyk, Danny; Geveler, Markus; Mallach, Sven; Ribbrock, Dirk; Göddeke, Dominik; Gutwenger, Carsten
2009-12-01
We present HONEI, an open-source collection of libraries offering a hardware oriented approach to numerical calculations. HONEI abstracts the hardware, and applications written on top of HONEI can be executed on a wide range of computer architectures such as CPUs, GPUs and the Cell processor. We demonstrate the flexibility and performance of our approach with two test applications, a Finite Element multigrid solver for the Poisson problem and a robust and fast simulation of shallow water waves. By linking against HONEI's libraries, we achieve a two-fold speedup over straight forward C++ code using HONEI's SSE backend, and additional 3-4 and 4-16 times faster execution on the Cell and a GPU. A second important aspect of our approach is that the full performance capabilities of the hardware under consideration can be exploited by adding optimised application-specific operations to the HONEI libraries. HONEI provides all necessary infrastructure for development and evaluation of such kernels, significantly simplifying their development. Program summaryProgram title: HONEI Catalogue identifier: AEDW_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPLv2 No. of lines in distributed program, including test data, etc.: 216 180 No. of bytes in distributed program, including test data, etc.: 1 270 140 Distribution format: tar.gz Programming language: C++ Computer: x86, x86_64, NVIDIA CUDA GPUs, Cell blades and PlayStation 3 Operating system: Linux RAM: at least 500 MB free Classification: 4.8, 4.3, 6.1 External routines: SSE: none; [1] for GPU, [2] for Cell backend Nature of problem: Computational science in general and numerical simulation in particular have reached a turning point. The revolution developers are facing is not primarily driven by a change in (problem-specific) methodology, but rather by the fundamental paradigm shift of the underlying hardware towards heterogeneity and parallelism. This is particularly relevant for data-intensive problems stemming from discretisations with local support, such as finite differences, volumes and elements. Solution method: To address these issues, we present a hardware aware collection of libraries combining the advantages of modern software techniques and hardware oriented programming. Applications built on top of these libraries can be configured trivially to execute on CPUs, GPUs or the Cell processor. In order to evaluate the performance and accuracy of our approach, we provide two domain specific applications; a multigrid solver for the Poisson problem and a fully explicit solver for 2D shallow water equations. Restrictions: HONEI is actively being developed, and its feature list is continuously expanded. Not all combinations of operations and architectures might be supported in earlier versions of the code. Obtaining snapshots from http://www.honei.org is recommended. Unusual features: The considered applications as well as all library operations can be run on NVIDIA GPUs and the Cell BE. Running time: Depending on the application, and the input sizes. The Poisson solver executes in few seconds, while the SWE solver requires up to 5 minutes for large spatial discretisations or small timesteps. References:http://www.nvidia.com/cuda. http://www.ibm.com/developerworks/power/cell.
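The backend abstraction described above can be sketched with a toy dispatch table: the same operation is registered once per backend and selected at run time. The names and the Python form are purely illustrative; HONEI itself is a C++ library whose GPU and Cell backends call CUDA and the Cell SDK.

```python
# Toy sketch of tag-based backend dispatch in the spirit of HONEI's design
# (illustrative names, not HONEI's actual API).
import numpy as np

BACKENDS = {}

def register(tag):
    def wrap(fn):
        BACKENDS[tag] = fn
        return fn
    return wrap

@register("cpu")
def axpy_cpu(a, x, y):
    return a * x + y                 # plain NumPy on the host CPU

@register("gpu")
def axpy_gpu(a, x, y):
    # placeholder: a real GPU backend would hand the arrays to CUDA here
    return a * x + y

def axpy(a, x, y, tag="cpu"):        # application code picks a backend by tag
    return BACKENDS[tag](a, x, y)

x, y = np.arange(4.0), np.ones(4)
print(axpy(2.0, x, y, tag="cpu"))
```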
Kork, John O.
1983-01-01
Version 1.00 of the Asynchronous Communications Support supplied with the IBM Personal Computer must be modified to be used for communications with Multics. Version 2.00 can be used as supplied, but error checking and screen printing capabilities can be added by using modifications very similar to those required for Version 1.00. This paper describes and lists required programs on Multics and appropriate modifications to both Versions 1.00 and 2.00 of the programs supplied by IBM.
ERIC Educational Resources Information Center
Sander, Elisabeth; Heiß, Andrea
2014-01-01
Three different versions of a learning program on trigonometry were compared, a program controlled, non-interactive version (CG), an interactive, conflict inducing version (EG 1), and an interactive one which was supposed to reduce the occurrence of a cognitive conflict regarding the central problem solution (EG 2). Pupils (N = 101) of a…
NASA Technical Reports Server (NTRS)
Winter, O. A.
1980-01-01
The B01 version of the Unified Subsonic Supersonic Aerodynamic Analysis program is the result of numerous modifications and additions made to the B00 version. These modifications and additions affect the program input, its computational options, the code readability, and the overlay structure. The following are described: (1) the revised input; (2) the modified plotting overlay programs and their associated subroutines; (3) the auxiliary files used by the program and the revised output data; and (4) the program overlay structure.
NASA Astrophysics Data System (ADS)
Dobaczewski, J.; Satuła, W.; Carlsson, B. G.; Engel, J.; Olbratowski, P.; Powałowski, P.; Sadziak, M.; Sarich, J.; Schunck, N.; Staszczak, A.; Stoitsov, M.; Zalewski, M.; Zduńczuk, H.
2009-11-01
We describe the new version (v2.40h) of the code HFODD which solves the nuclear Skyrme-Hartree-Fock or Skyrme-Hartree-Fock-Bogolyubov problem by using the Cartesian deformed harmonic-oscillator basis. In the new version, we have implemented: (i) projection on good angular momentum (for the Hartree-Fock states), (ii) calculation of the GCM kernels, (iii) calculation of matrix elements of the Yukawa interaction, (iv) the BCS solutions for state-dependent pairing gaps, (v) the HFB solutions for broken simplex symmetry, (vi) calculation of Bohr deformation parameters, (vii) constraints on the Schiff moments and scalar multipole moments, (viii) the DT2h transformations and rotations of wave functions, (ix) quasiparticle blocking for the HFB solutions in odd and odd-odd nuclei, (x) the Broyden method to accelerate the convergence, (xi) the Lipkin-Nogami method to treat pairing correlations, (xii) the exact Coulomb exchange term, (xiii) several utility options, and we have corrected three insignificant errors. New version program summaryProgram title: HFODD (v2.40h) Catalogue identifier: ADFL_v2_2 Program summary URL:
Barlow, Paul M.; Moench, Allen F.
2011-01-01
The computer program WTAQ simulates axisymmetric flow to a well pumping from a confined or unconfined (water-table) aquifer. WTAQ calculates dimensionless or dimensional drawdowns that can be used with measured drawdown data from aquifer tests to estimate aquifer hydraulic properties. Version 2 of the program, which is described in this report, provides an analytical representation of drainage from the unsaturated zone to water-table aquifers that is an alternative to the one available in the initial versions of the code. The revised drainage model explicitly accounts for hydraulic characteristics of the unsaturated zone, specifically, the moisture retention and relative hydraulic conductivity of the soil. The revised program also retains the original conceptualizations of drainage from the unsaturated zone that were available with version 1 of the program, to provide alternative approaches for simulating the drainage process. Version 2 of the program includes all other simulation capabilities of the first versions, including partial penetration of the pumped well and of observation wells and piezometers, well-bore storage and skin effects at the pumped well, and delayed drawdown response of observation wells and piezometers.
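For orientation, the simplest configuration WTAQ covers, a fully penetrating well in a confined aquifer with no well-bore storage, reduces to the classical Theis solution, which can be evaluated directly; the parameter values below are invented, and the unconfined/delayed-drainage cases require the full program.

```python
# Theis drawdown for a fully penetrating well in a confined aquifer -- the
# simplest case covered by WTAQ (illustrative sketch, not the WTAQ code):
#   s = Q/(4*pi*T) * W(u),   u = r^2 * S / (4*T*t),   W(u) = exp1(u)
import numpy as np
from scipy.special import exp1

Q = 500.0      # pumping rate, m^3/day
T = 250.0      # transmissivity, m^2/day
S = 2.0e-4     # storativity, dimensionless
r = 30.0       # distance from the pumped well, m

t = np.array([0.01, 0.1, 1.0, 10.0])          # times since pumping began, days
u = r**2 * S / (4.0 * T * t)
s = Q / (4.0 * np.pi * T) * exp1(u)           # drawdown, m
print(np.round(s, 3))
```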
NASA Astrophysics Data System (ADS)
Bytev, Vladimir V.; Kniehl, Bernd A.
2016-09-01
We present a further extension of the HYPERDIRE project, which is devoted to the creation of a set of Mathematica-based program packages for manipulations with Horn-type hypergeometric functions on the basis of differential equations. Specifically, we present the implementation of the differential reduction for the Lauricella function FC of three variables. Catalogue identifier: AEPP_v4_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEPP_v4_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 243461 No. of bytes in distributed program, including test data, etc.: 61610782 Distribution format: tar.gz Programming language: Mathematica. Computer: All computers running Mathematica. Operating system: Operating systems running Mathematica. Classification: 4.4. Does the new version supersede the previous version?: No, it significantly extends the previous version. Nature of problem: Reduction of hypergeometric function FC of three variables to a set of basis functions. Solution method: Differential reduction. Reasons for new version: The extension package allows the user to handle the Lauricella function FC of three variables. Summary of revisions: The previous version goes unchanged. Running time: Depends on the complexity of the problem.
Coding coarse grained polymer model for LAMMPS and its application to polymer crystallization
NASA Astrophysics Data System (ADS)
Luo, Chuanfu; Sommer, Jens-Uwe
2009-08-01
We present a patch code for LAMMPS to implement a coarse grained (CG) model of poly(vinyl alcohol) (PVA). LAMMPS is a powerful molecular dynamics (MD) simulator developed at Sandia National Laboratories. Our patch code implements tabulated angular potential and Lennard-Jones-9-6 (LJ96) style interaction for PVA. Benefited from the excellent parallel efficiency of LAMMPS, our patch code is suitable for large-scale simulations. This CG-PVA code is used to study polymer crystallization, which is a long-standing unsolved problem in polymer physics. By using parallel computing, cooling and heating processes for long chains are simulated. The results show that chain-folded structures resembling the lamellae of polymer crystals are formed during the cooling process. The evolution of the static structure factor during the crystallization transition indicates that long-range density order appears before local crystalline packing. This is consistent with some experimental observations by small/wide angle X-ray scattering (SAXS/WAXS). During the heating process, it is found that the crystalline regions are still growing until they are fully melted, which can be confirmed by the evolution both of the static structure factor and average stem length formed by the chains. This two-stage behavior indicates that melting of polymer crystals is far from thermodynamic equilibrium. Our results concur with various experiments. It is the first time that such growth/reorganization behavior is clearly observed by MD simulations. Our code can be easily used to model other type of polymers by providing a file containing the tabulated angle potential data and a set of appropriate parameters. Program summaryProgram title: lammps-cgpva Catalogue identifier: AEDE_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDE_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU's GPL No. of lines in distributed program, including test data, etc.: 940 798 No. of bytes in distributed program, including test data, etc.: 12 536 245 Distribution format: tar.gz Programming language: C++/MPI Computer: Tested on Intel-x86 and AMD64 architectures. Should run on any architecture providing a C++ compiler Operating system: Tested under Linux. Any other OS with C++ compiler and MPI library should suffice Has the code been vectorized or parallelized?: Yes RAM: Depends on system size and how many CPUs are used Classification: 7.7 External routines: LAMMPS ( http://lammps.sandia.gov/), FFTW ( http://www.fftw.org/) Nature of problem: Implementing special tabular angle potentials and Lennard-Jones-9-6 style interactions of a coarse grained polymer model for LAMMPS code. Solution method: Cubic spline interpolation of input tabulated angle potential data. Restrictions: The code is based on a former version of LAMMPS. Unusual features.: Any special angular potential can be used if it can be tabulated. Running time: Seconds to weeks, depending on system size, speed of CPU and how many CPUs are used. The test run provided with the package takes about 5 minutes on 4 AMD's opteron (2.6 GHz) CPUs. References:D. Reith, H. Meyer, F. Müller-Plathe, Macromolecules 34 (2001) 2335-2345. H. Meyer, F. Müller-Plathe, J. Chem. Phys. 115 (2001) 7807. H. Meyer, F. Müller-Plathe, Macromolecules 35 (2002) 1241-1252.
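The tabulated-potential idea named in the solution method can be sketched directly: a cubic spline through tabulated V(theta) supplies both the energy and the angular force. The values below are an invented toy table, not the actual CG-PVA potential, and the sketch is Python rather than the LAMMPS C++ patch.

```python
# Sketch of the tabulated angle potential handling (not the LAMMPS patch):
# a cubic spline through tabulated V(theta) gives the energy and the force
# F = -dV/dtheta at any bond angle.
import numpy as np
from scipy.interpolate import CubicSpline

theta_tab = np.linspace(np.pi / 3, np.pi, 25)        # tabulated angles [rad]
V_tab = 5.0 * (1.0 + np.cos(3.0 * theta_tab))        # toy potential values

V = CubicSpline(theta_tab, V_tab)
dV = V.derivative()

theta = 2.0                                          # some bond angle inside the table
print("V =", float(V(theta)), "  F = -dV/dtheta =", float(-dV(theta)))
```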
An open-source library for the numerical modeling of mass-transfer in solid oxide fuel cells
NASA Astrophysics Data System (ADS)
Novaresio, Valerio; García-Camprubí, María; Izquierdo, Salvador; Asinari, Pietro; Fueyo, Norberto
2012-01-01
The generation of direct current electricity using solid oxide fuel cells (SOFCs) involves several interplaying transport phenomena. Their simulation is crucial for the design and optimization of reliable and competitive equipment, and for the eventual market deployment of this technology. An open-source library for the computational modeling of mass-transport phenomena in SOFCs is presented in this article. It includes several multicomponent mass-transport models ( i.e. Fickian, Stefan-Maxwell and Dusty Gas Model), which can be applied both within porous media and in porosity-free domains, and several diffusivity models for gases. The library has been developed for its use with OpenFOAM ®, a widespread open-source code for fluid and continuum mechanics. The library can be used to model any fluid flow configuration involving multicomponent transport phenomena and it is validated in this paper against the analytical solution of one-dimensional test cases. In addition, it is applied for the simulation of a real SOFC and further validated using experimental data. Program summaryProgram title: multiSpeciesTransportModels Catalogue identifier: AEKB_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License No. of lines in distributed program, including test data, etc.: 18 140 No. of bytes in distributed program, including test data, etc.: 64 285 Distribution format: tar.gz Programming language:: C++ Computer: Any x86 (the instructions reported in the paper consider only the 64 bit case for the sake of simplicity) Operating system: Generic Linux (the instructions reported in the paper consider only the open-source Ubuntu distribution for the sake of simplicity) Classification: 12 External routines: OpenFOAM® (version 1.6-ext) ( http://www.extend-project.de) Nature of problem: This software provides a library of models for the simulation of the steady state mass and momentum transport in a multi-species gas mixture, possibly in a porous medium. The software is particularly designed to be used as the mass-transport library for the modeling of solid oxide fuel cells (SOFC). When supplemented with other sub-models, such as thermal and charge-transport ones, it allows the prediction of the cell polarization curve and hence the cell performance. Solution method: Standard finite volume method (FVM) is used for solving all the conservation equations. The pressure-velocity coupling is solved using the SIMPLE algorithm (possibly adding a porous drag term if required). The mass transport can be calculated using different alternative models, namely Fick, Maxwell-Stefan or dusty gas model. The code adopts a segregated method to solve the resulting linear system of equations. The different regions of the SOFC, namely gas channels, electrodes and electrolyte, are solved independently, and coupled through boundary conditions. Restrictions: When extremely large species fluxes are considered, current implementation of the Neumann and Robin boundary conditions do not avoid negative values of molar and/or mass fractions, which finally end up with numerical instability. However this never happened in the documented runs. Eventually these boundary conditions could be reformulated to become more robust. Running time: From seconds to hours depending on the mesh size and number of species. 
For example, on a 64 bit machine with Intel Core Duo T8300 and 3 GBytes of RAM, the provided test run requires less than 1 second.
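The simplest member of the model family, steady one-dimensional Fickian diffusion of a binary mixture between two fixed-composition boundaries, can be sketched with a small central-difference system. The sketch is illustrative Python with invented parameters; the library itself implements this model, Stefan-Maxwell and the dusty gas model as C++ classes inside OpenFOAM.

```python
# Steady 1D Fickian diffusion of a binary mixture between two fixed-composition
# boundaries (illustrative sketch in Python; the library itself is C++/OpenFOAM).
import numpy as np

L, N, D = 1.0e-3, 50, 2.0e-5        # domain length [m], cells, diffusivity [m^2/s]
x = np.linspace(0.0, L, N)
yA_left, yA_right = 0.8, 0.2        # mole fraction of species A at the boundaries

# d/dx (D dy/dx) = 0  ->  tridiagonal linear system A y = b
A = np.zeros((N, N)); b = np.zeros(N)
A[0, 0] = A[-1, -1] = 1.0           # Dirichlet boundary conditions
b[0], b[-1] = yA_left, yA_right
for i in range(1, N - 1):
    A[i, i - 1] = A[i, i + 1] = 1.0
    A[i, i] = -2.0

yA = np.linalg.solve(A, b)          # linear profile for constant D
flux = -D * np.gradient(yA, x)      # diffusive flux per unit molar concentration
print(yA[:5], flux[0])
```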
Monte Carlo event generators in atomic collisions: A new tool to tackle the few-body dynamics
NASA Astrophysics Data System (ADS)
Ciappina, M. F.; Kirchner, T.; Schulz, M.
2010-04-01
We present a set of routines to produce theoretical event files, for both single and double ionization of atoms by ion impact, based on a Monte Carlo event generator (MCEG) scheme. Such event files are the theoretical counterpart of the data obtained from a kinematically complete experiment; i.e. they contain the momentum components of all collision fragments for a large number of ionization events. Among the advantages of working with theoretical event files is the possibility to incorporate the conditions present in a real experiment, such as the uncertainties in the measured quantities. Additionally, by manipulating them it is possible to generate any type of cross sections, specially those that are usually too complicated to compute with conventional methods due to a lack of symmetry. Consequently, the numerical effort of such calculations is dramatically reduced. We show examples for both single and double ionization, with special emphasis on a new data analysis tool, called four-body Dalitz plots, developed very recently. Program summaryProgram title: MCEG Catalogue identifier: AEFV_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFV_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2695 No. of bytes in distributed program, including test data, etc.: 18 501 Distribution format: tar.gz Programming language: FORTRAN 77 with parallelization directives using scripting Computer: Single machines using Linux and Linux servers/clusters (with cores with any clock speed, cache memory and bits in a word) Operating system: Linux (any version and flavor) and FORTRAN 77 compilers Has the code been vectorised or parallelized?: Yes RAM: 64-128 kBytes (the codes are very cpu intensive) Classification: 2.6 Nature of problem: The code deals with single and double ionization of atoms by ion impact. Conventional theoretical approaches aim at a direct calculation of the corresponding cross sections. This has the important shortcoming that it is difficult to account for the experimental conditions when comparing results to measured data. In contrast, the present code generates theoretical event files of the same type as are obtained in a real experiment. From these event files any type of cross sections can be easily extracted. The theoretical schemes are based on distorted wave formalisms for both processes of interest. Solution method: The codes employ a Monte Carlo Event Generator based on theoretical formalisms to generate event files for both single and double ionization. One of the main advantages of having access to theoretical event files is the possibility of adding the conditions present in real experiments (parameter uncertainties, environmental conditions, etc.) and to incorporate additional physics in the resulting event files (e.g. elastic scattering or other interactions absent in the underlying calculations). Additional comments: The computational time can be dramatically reduced if a large number of processors is used. Since the codes has no communication between processes it is possible to achieve an efficiency of a 100% (this number certainly will be penalized by the queuing waiting time). Running time: Times vary according to the process, single or double ionization, to be simulated, the number of processors and the type of theoretical model. 
The typical running time is between several hours and up to a few weeks.
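The event-generator idea itself is compact: draw final-state configurations from a theoretical distribution by acceptance-rejection and write them to an event file, from which any histogram or cross section can later be built. The sketch below uses an invented one-dimensional angular distribution; the real code samples the full few-body kinematics from distorted-wave cross sections.

```python
# Generic Monte Carlo event-generator sketch (not the MCEG FORTRAN code):
# acceptance-rejection sampling of "events" from a theoretical distribution,
# stored as an event file for later, arbitrary binning.
import numpy as np

def dsigma(theta):                       # illustrative angular distribution
    return 1.0 + 0.8 * np.cos(theta) ** 2

rng = np.random.default_rng(1)
fmax = 1.8                               # upper bound of dsigma on [0, pi]
events = []
while len(events) < 10000:
    theta = rng.uniform(0.0, np.pi)
    if rng.uniform(0.0, fmax) < dsigma(theta):
        events.append(theta)             # one accepted event

np.savetxt("events.dat", events)         # the "theoretical event file"
counts, edges = np.histogram(events, bins=36, range=(0.0, np.pi))
print(counts[:6])                        # any cross section follows by binning
```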
TaylUR 3, a multivariate arbitrary-order automatic differentiation package for Fortran 95
NASA Astrophysics Data System (ADS)
von Hippel, G. M.
2010-03-01
This new version of TaylUR is based on a completely new core, which now is able to compute the numerical values of all of a complex-valued function's partial derivatives up to an arbitrary order, including mixed partial derivatives. New version program summary Program title: TaylUR Catalogue identifier: ADXR_v3_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXR_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPLv2 No. of lines in distributed program, including test data, etc.: 6750 No. of bytes in distributed program, including test data, etc.: 19 162 Distribution format: tar.gz Programming language: Fortran 95 Computer: Any computer with a conforming Fortran 95 compiler Operating system: Any system with a conforming Fortran 95 compiler Classification: 4.12, 4.14 Catalogue identifier of previous version: ADXR_v2_0 Journal reference of previous version: Comput. Phys. Comm. 176 (2007) 710 Does the new version supersede the previous version?: Yes Nature of problem: Problems that require potentially high orders of partial derivatives with respect to several variables or derivatives of complex-valued functions, such as e.g. momentum or mass expansions of Feynman diagrams in perturbative QFT, and which previous versions of TaylUR [1,2] cannot handle due to their lack of support for mixed partial derivatives. Solution method: Arithmetic operators and Fortran intrinsics are overloaded to act correctly on objects of a defined type taylor, which encodes a function along with its first few partial derivatives with respect to the user-defined independent variables. Derivatives of products and composite functions are computed using multivariate forms [3] of Leibniz's rule, $D^\nu(fg) = \sum_{\mu\le\nu} \frac{\nu!}{\mu!\,(\nu-\mu)!}\, D^\mu f\, D^{\nu-\mu} g$, where $\nu=(\nu_1,\ldots,\nu_d)$, $|\nu|=\sum_{j=1}^d \nu_j$, $\nu!=\prod_{j=1}^d \nu_j!$, $D^\nu f=\partial^{|\nu|} f/(\partial x_1^{\nu_1}\cdots\partial x_d^{\nu_d})$, and $\mu<\nu$ iff either $|\mu|<|\nu|$ or $|\mu|=|\nu|$, $\mu_1=\nu_1,\ldots,\mu_k=\nu_k$, $\mu_{k+1}<\nu_{k+1}$ for some $k\in\{0,\ldots,d-1\}$, and of Faà di Bruno's formula, $D^\nu(f\circ g) = \sum_{p=1}^{|\nu|} (f^{(p)}\circ g) \sum_{s=1}^{|\nu|} \sum_{(k_1,\ldots,k_s;\lambda_1,\ldots,\lambda_s)} \frac{\nu!}{\prod_{j=1}^s k_j!\,(\lambda_j!)^{k_j}} \prod_{j=1}^s (D^{\lambda_j} g)^{k_j}$, where the inner sum is over all $(k_1,\ldots,k_s;\lambda_1,\ldots,\lambda_s)$ with integer $k_i>0$, multiindices $0<\lambda_1<\cdots<\lambda_s$, $\sum_{i=1}^s k_i=p$ and $\sum_{i=1}^s k_i\lambda_i=\nu$. An indexed storage system is used to store the higher-order derivative tensors in a one-dimensional array. The relevant indices $(k_1,\ldots,k_s;\lambda_1,\ldots,\lambda_s)$ and the weights occurring in the sums in Leibniz's and Faà di Bruno's formulas are precomputed at startup and stored in static arrays for later use. Reasons for new version: The earlier version lacked support for mixed partial derivatives, but a number of projects of interest required them. Summary of revisions: The internal representation of a taylor object has changed to a one-dimensional array which contains the partial derivatives in ascending order, and in lexicographic order of the corresponding multiindex within the same order. The necessary mappings between multiindices and indices into the taylor objects' internal array are computed at startup. To support the change to a genuinely multivariate taylor type, the DERIVATIVE function is now implemented via an interface that accepts both the older format derivative(f,mu,n) $= \partial^n f/\partial x_\mu^n$ and also a new format derivative(f,mu(:)) $= D^\mu f$ that allows access to mixed partial derivatives. Another related extension to the functionality of the module is the HESSIAN function that returns the Hessian matrix of second derivatives of its argument. Since the calculation of all mixed partial derivatives can be very costly, and in many cases only some subset is actually needed, a masking facility has been added.
Calling the subroutine DEACTIVATE_DERIVATIVE with a multiindex as an argument will deactivate the calculation of the partial derivative belonging to that multiindex, and of all partial derivatives it can feed into. Similarly, calling the subroutine ACTIVATE_DERIVATIVE will activate the calculation of the partial derivative belonging to its argument, and of all partial derivatives that can feed into it. Moreover, it is possible to turn off the computation of mixed derivatives altogether by setting Diagonal_taylors to .TRUE.. It should be noted that any change of Diagonal_taylors or Taylor_order invalidates all existing taylor objects. To aid the better integration of TaylUR into the HPSrc library [4], routines SET_DERIVATIVE and SET_ALL_DERIVATIVES are provided as a means of manually constructing a taylor object with given derivatives. Restrictions: Memory and CPU time constraints may restrict the number of variables and Taylor expansion order that can be achieved. Loss of numerical accuracy due to cancellation may become an issue at very high orders. Unusual features: These are the same as in previous versions, but are enumerated again here for clarity. The complex conjugation operation assumes all independent variables to be real. The functions REAL and AIMAG do not convert to real type, but return a result of type taylor (with the real/imaginary part of each derivative taken) instead. The user-defined functions VALUE, REALVALUE and IMAGVALUE, which return the value of a taylor object as a complex number, and the real and imaginary part of this value, respectively, as a real number are also provided. Fortran 95 intrinsics that are defined only for arguments of real type ( ACOS, AINT, ANINT, ASIN, ATAN, ATAN2, CEILING, DIM, FLOOR, INT, LOG10, MAX, MAXLOC, MAXVAL, MIN, MINLOC, MINVAL, MOD, MODULO, NINT, SIGN) will silently take the real part of taylor-valued arguments unless the module variable Real_args_warn is set to .TRUE., in which case they will return a quiet NaN value (if supported by the compiler) when called with a taylor argument whose imaginary part exceeds the module variable Real_args_tol. In those cases where the derivative of a function becomes undefined at certain points (as for ABS, AINT, ANINT, MAX, MIN, MOD, and MODULO), while the value is well defined, the derivative fields will be filled with quiet NaN values (if supported by the compiler). Additional comments: This version of TaylUR is released under the second version of the GNU General Public License (GPLv2). Therefore anyone is free to use or modify the code for their own calculations. As part of the licensing, it is requested that any publications including results from the use of TaylUR or any modification derived from it cite Refs. [1,2] as well as this paper. Finally, users are also requested to communicate to the author details of such publications, as well as of any bugs found or of required or useful modifications made or desired by them. Running time: The running time of TaylUR operations grows rapidly with both the number of variables and the Taylor expansion order. Judicious use of the masking facility to drop unneeded higher derivatives can lead to significant accelerations, as can activation of the Diagonal_taylors variable whenever mixed partial derivatives are not needed. Acknowledgments: The author thanks Alistair Hart for helpful comments and suggestions. This work is supported by the Deutsche Forschungsgemeinschaft in the SFB/TR 09. References:G.M. 
von Hippel, TaylUR, an arbitrary-order diagonal automatic differentiation package for Fortran 95, Comput. Phys. Comm. 174 (2006) 569. [2] G.M. von Hippel, New version announcement for TaylUR, an arbitrary-order diagonal automatic differentiation package for Fortran 95, Comput. Phys. Comm. 176 (2007) 710. [3] G.M. Constantine, T.H. Savits, A multivariate Faa di Bruno formula with applications, Trans. Amer. Math. Soc. 348 (2) (1996) 503. [4] A. Hart, G.M. von Hippel, R.R. Horgan, E.H. Müller, Automated generation of lattice QCD Feynman rules, Comput. Phys. Comm. 180 (2009) 2698, doi:10.1016/j.cpc.2009.04.021, arXiv:0904.0375.
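The multivariate Leibniz rule quoted above is the core of how the product of two taylor objects is propagated. The following Python sketch of truncated multivariate Taylor arithmetic is illustrative only: TaylUR itself is Fortran 95 and stores derivatives in an indexed one-dimensional array, whereas the dictionary keyed by multi-index tuples used here is an assumption made purely for clarity.

```python
from itertools import product as cartesian
from math import factorial

def multi_factorial(nu):
    """nu! = prod_j nu_j! for a multi-index nu."""
    out = 1
    for n in nu:
        out *= factorial(n)
    return out

def leibniz_product(f, g, order, nvars):
    """Truncated Taylor product via the multivariate Leibniz rule.

    f, g : dicts mapping multi-index tuples (nu_1,...,nu_d) to the
           derivative values D^nu f at the expansion point.
    Returns the dict of D^nu (f g) for all |nu| <= order.
    """
    # enumerate all multi-indices with |nu| <= order
    indices = [nu for nu in cartesian(range(order + 1), repeat=nvars)
               if sum(nu) <= order]
    result = {}
    for nu in indices:
        total = 0.0
        # sum over all mu <= nu componentwise
        for mu in cartesian(*(range(n + 1) for n in nu)):
            diff = tuple(n - m for n, m in zip(nu, mu))
            weight = multi_factorial(nu) / (multi_factorial(mu) * multi_factorial(diff))
            total += weight * f.get(mu, 0.0) * g.get(diff, 0.0)
        result[nu] = total
    return result

# Example in two variables to second order:
f = {(0, 0): 2.0, (1, 0): 1.0}   # f(x, y) = 2 + x around (0, 0)
g = {(0, 0): 3.0, (0, 1): 1.0}   # g(x, y) = 3 + y
print(leibniz_product(f, g, order=2, nvars=2))   # mixed derivative D^(1,1) = 1
```

TaylUR's one-dimensional indexed storage plays the role of the dictionary here, with the multi-index mappings and Leibniz/Faà di Bruno weights precomputed once at startup.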
NASA Astrophysics Data System (ADS)
Caffo, Michele; Czyż, Henryk; Gunia, Michał; Remiddi, Ettore
2009-03-01
We present the program BOKASUN for fast and precise evaluation of the Master Integrals of the two-loop self-mass sunrise diagram for arbitrary values of the internal masses and the external four-momentum. We use a combination of two methods: a Bernoulli accelerated series expansion and a Runge-Kutta numerical solution of a system of linear differential equations. Program summaryProgram title: BOKASUN Catalogue identifier: AECG_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AECG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 9404 No. of bytes in distributed program, including test data, etc.: 104 123 Distribution format: tar.gz Programming language: FORTRAN77 Computer: Any computer with a Fortran compiler accepting FORTRAN77 standard. Tested on various PC's with LINUX Operating system: LINUX RAM: 120 kbytes Classification: 4.4 Nature of problem: Any integral arising in the evaluation of the two-loop sunrise Feynman diagram can be expressed in terms of a given set of Master Integrals, which should be calculated numerically. The program provides a fast and precise evaluation method of the Master Integrals for arbitrary (but not vanishing) masses and arbitrary value of the external momentum. Solution method: The integrals depend on three internal masses and the external momentum squared p. The method is a combination of an accelerated expansion in 1/p in its (pretty large!) region of fast convergence and of a Runge-Kutta numerical solution of a system of linear differential equations. Running time: To obtain 4 Master Integrals on PC with 2 GHz processor it takes 3 μs for series expansion with pre-calculated coefficients, 80 μs for series expansion without pre-calculated coefficients, from a few seconds up to a few minutes for Runge-Kutta method (depending on the required accuracy and the values of the physical parameters).
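For the Runge-Kutta part of the method, the propagation of a system of linear differential equations in the kinematic variable can be sketched generically. In the Python fragment below the 2x2 coefficient matrix is a toy stand-in chosen for illustration; it is not the actual system satisfied by the sunrise master integrals.

```python
import numpy as np

def rk4_linear(A, y0, x0, x1, steps):
    """Classical 4th-order Runge-Kutta for y'(x) = A(x) @ y(x)."""
    y = np.array(y0, dtype=complex)
    x, h = x0, (x1 - x0) / steps
    for _ in range(steps):
        k1 = A(x) @ y
        k2 = A(x + h / 2) @ (y + h / 2 * k1)
        k3 = A(x + h / 2) @ (y + h / 2 * k2)
        k4 = A(x + h) @ (y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return y

# Toy coefficient matrix (placeholder for the actual sunrise system).
A = lambda x: np.array([[0.0, 1.0], [-1.0 / (1.0 + x * x), -0.1]])
print(rk4_linear(A, y0=[1.0, 0.0], x0=0.0, x1=5.0, steps=2000))
```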
NASA Astrophysics Data System (ADS)
Daniluk, Andrzej
2011-06-01
A computational model is a computer program that attempts to simulate an abstract model of a particular system. Computational models involve enormous amounts of calculation and often require supercomputer speed. As personal computers become more and more powerful, more laboratory experiments can be converted into computer models that can be interactively examined by scientists and students without the risk and cost of the actual experiments. The future of programming is concurrent programming. The threaded programming model provides application programmers with a useful abstraction of concurrent execution of multiple tasks. The objective of this release is to address the design of an architecture for a scientific application that may execute as multiple threads, as well as implementations of the related shared data structures. New version program summaryProgram title: GrowthCP Catalogue identifier: ADVL_v4_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADVL_v4_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 32 269 No. of bytes in distributed program, including test data, etc.: 8 234 229 Distribution format: tar.gz Programming language: Free Object Pascal Computer: multi-core x64-based PC Operating system: Windows XP, Vista, 7 Has the code been vectorised or parallelized?: No RAM: More than 1 GB. The program requires a 32-bit or 64-bit processor to run the generated code. Memory is addressed using 32-bit (on 32-bit processors) or 64-bit (on 64-bit processors with 64-bit addressing) pointers. The amount of addressed memory is limited only by the available amount of virtual memory. Supplementary material: The figures mentioned in the "Summary of revisions" section can be obtained here. Classification: 4.3, 7.2, 6.2, 8, 14 External routines: Lazarus [1] Catalogue identifier of previous version: ADVL_v3_0 Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 709 Does the new version supersede the previous version?: Yes Nature of problem: Reflection high-energy electron diffraction (RHEED) is an important in-situ analysis technique that is capable of giving quantitative information about the growth process of thin layers and its control. It can be used to calibrate the growth rate, analyze surface morphology, calibrate surface temperature, monitor the arrangement of the surface atoms, and provide information about growth kinetics. Such control allows the development of structures where the electrons can be confined in space, giving quantum wells or even quantum dots. In order to determine the positions of atoms in the first few layers, the RHEED intensity must be measured as a function of the scattering angles and then compared with dynamic calculations. The objective of this release is to address the design of an architecture for an application that simulates rocking-curve RHEED intensities during the hetero-epitaxial growth of thin films. Solution method: GrowthCP is a complex numerical model that uses multiple threads to simulate the epitaxial growth of thin layers. This model consists of two transactional parts. The first part is a mathematical model based on the Runge-Kutta method with adaptive step-size control. The second part is a first-principles one-dimensional RHEED computational model.
This model is based on solving a one-dimensional Schrödinger equation. Several problems can arise when applications contain a mixture of data access code, numerical code, and presentation code. Such applications are difficult to maintain, because interdependencies between all the components cause strong ripple effects whenever a change is made anywhere. Adding new data views often requires reimplementing numerical code, which then requires maintenance in multiple places. In order to solve problems of this type, the computational and threading layers of the project have been implemented in the form of one design pattern as part of a Model-View-Controller architecture. Reasons for new version: Responding to users' feedback, the Growth09 project has been upgraded to a standard that allows sample computations of the RHEED intensities for a disordered surface to be carried out for a wide range of single- and epitaxial hetero-structures. The design pattern on which the project is based has also been improved. It is shown that this model can be effectively used for multithreaded growth simulations of thin epitaxial layers and the corresponding RHEED intensities for a wide range of single- and hetero-structures. Also in response to users' feedback, the present release has been implemented using a well-documented free compiler [1] that requires neither special configuration nor the installation of additional libraries. Summary of revisions: The logical structure of the Growth09 program has been modified according to the scheme shown in Fig. 1. The class diagram in Fig. 1 is a static view of the main platform-specific elements of the GrowthCP architecture. Fig. 2 provides a dynamic view in the form of a simplified sequence diagram of the creation and destruction of the process. The program requires the user to provide the appropriate parameters in the form of a knowledge base for the crystal structures under investigation. These parameters are loaded from the parameters.ini files at run-time. Instructions on preparing the .ini files can be found in the new distribution. The program enables different growth models and one-dimensional dynamical RHEED calculations to be carried out for the fcc lattice with a three-atom basis, the fcc lattice with a two-atom basis, the fcc lattice with a single-atom basis, and Zinc-Blende, Sodium Chloride, and Wurtzite crystalline structures and hetero-structures; in addition, the Fourier component of the scattering potential in the TRHEEDCalculations.crystPotUgXXX() procedure can be modified and implemented according to users' specific application requirements. The Fourier component of the scattering potential of the whole crystalline hetero-structure can be determined as a sum of contributions coming from all thin slices of the individual atomic layers. To carry out one-dimensional calculations of the scattering potentials, the program uses properly constructed self-consistent procedures. Each component of the system shown in Figs. 1 and 2 is fully extendable and can easily be adapted to new or changing requirements. Two essential logical elements of the system, the TGrowthTransaction and TRHEEDCalculations classes, were designed and implemented so that they pass information between themselves without the need for data-exchange files. As a consequence, each of them can be independently modified and/or extended.
Implementing other types of differential equations, or different algorithms for solving them, in the TGrowthTransaction class does not require another implementation of the TRHEEDCalculations class. Similarly, implementing other forms of the scattering potential, or different algorithms for the RHEED calculation, has no influence on the construction of the TGrowthTransaction class. Unusual features: The program is distributed in the form of the main project GrowthCP.lpr, with associated files, and should be compiled using the Lazarus IDE. The program should be compiled with English/USA regional and language options. Running time: The typical running time is machine and user-parameters dependent. References: [1] http://sourceforge.net/projects/lazarus/files/.
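The mathematical core named under "Solution method", a Runge-Kutta integrator with adaptive step-size control, can be illustrated independently of the Object Pascal implementation. The Python sketch below uses a step-doubling error estimate; the tolerance and the toy rate equations are assumptions made for illustration and do not correspond to the actual GrowthCP growth model.

```python
import numpy as np

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def adaptive_rk4(f, t, y, t_end, h=1e-2, tol=1e-8):
    """Step-doubling error control: compare one step of size h with
    two steps of size h/2 and adjust h from the estimated error."""
    y = np.asarray(y, dtype=float)
    while t < t_end:
        h = min(h, t_end - t)
        y_big = rk4_step(f, t, y, h)
        y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        err = np.max(np.abs(y_half - y_big))
        if err <= tol:                    # accept the step
            t, y = t + h, y_half
        # grow or shrink h using the ~h^5 scaling of the RK4 local error
        h *= 0.9 * (tol / max(err, 1e-300)) ** 0.2
    return y

# Toy two-layer coverage model (illustrative only).
rates = lambda t, y: np.array([1.0 - y[0], y[0] - y[1]])
print(adaptive_rk4(rates, 0.0, [0.0, 0.0], 5.0))
```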
VSHEC—A program for the automatic spectrum calibration
NASA Astrophysics Data System (ADS)
Zlokazov, V. B.; Utyonkov, V. K.; Tsyganov, Yu. S.
2013-02-01
Calibration is the transformation of the output channels of a measuring device into physical values (energies, times, angles, etc.). If dealt with manually, it is a labor- and time-consuming procedure even if only a few detectors are used. However, the situation changes appreciably if a calibration of multi-detector systems is required, where the number of registering devices extends to hundreds (Tsyganov et al. (2004) [1]). The calibration is complicated by the fact that the needed pivotal channel numbers must be determined from peak-like distributions. But a peak distribution is an informal pattern, so a pattern-recognition procedure should be employed to dispense with operator intervention. Automatic calibration is the determination of the calibration curve parameters on the basis of a list of reference quantities and data which are partially characterized by these quantities (energies, angles, etc.). The program allows the physicist to perform the calibration of the spectrometric detectors for both cases: that of one tract and that of many. Program summaryProgram title: VSHEC Catalogue identifier: AENN_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENN_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 6403 No. of bytes in distributed program, including test data, etc.: 325847 Distribution format: tar.gz Programming language: DELPHI-5 and higher. Computer: Any IBM PC compatible. Operating system: Windows XX. Classification: 2.3, 4.9. Nature of problem: Automatic conversion of detector channels into their energy equivalents. Solution method: Automatic decomposition of a spectrum into geometric figures such as peaks and an envelope of peaks from below, estimation of peak centers and search for the maximum peak center subsequence which matches the reference energies in the statistically most plausible way. Running time: On a Celeron (R) (CPU 2.66 GHz) it is the time needed for the dialog via the visual interface. Pure computation—less than 1 s for the test run.
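Once peak centers have been matched to the reference energies, the final step of such a calibration reduces to a least-squares fit of the calibration curve. The Python fragment below illustrates only that final step, with hypothetical matched pairs; VSHEC's peak decomposition and subsequence-matching logic are not reproduced here.

```python
import numpy as np

# Matched pairs: estimated peak centers (channels) and reference energies (keV).
channels = np.array([512.3, 1023.8, 2047.1, 3071.6])       # hypothetical values
energies = np.array([1000.0, 2000.0, 4000.0, 6000.0])      # hypothetical references

# Linear calibration E = a * channel + b, fitted by least squares.
a, b = np.polyfit(channels, energies, deg=1)
residuals = energies - (a * channels + b)

print(f"E(ch) = {a:.4f} * ch + {b:.2f}")
print("rms residual [keV]:", np.sqrt(np.mean(residuals ** 2)))

# Convert a whole spectrum's channel axis into energies.
spectrum_channels = np.arange(4096)
energy_axis = a * spectrum_channels + b
```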
Program CALIB [for computing noise levels for helicopter version of S-191 filter wheel spectrometer]
NASA Technical Reports Server (NTRS)
Mendlowitz, M. A.
1973-01-01
The program CALIB, which was written to compute noise levels and average signal levels of aperture radiance for the helicopter version of the S-191 filter wheel spectrometer, is described. The program functions and input description are included, along with a compiled program listing.
STAR adaptation of QR algorithm [program for solving over-determined systems of linear equations]
NASA Technical Reports Server (NTRS)
Shah, S. N.
1981-01-01
The QR algorithm used on a serial computer and executed on the Control Data Corporation 6000 computer was adapted to execute efficiently on the Control Data STAR-100 computer. How the scalar program was adapted for the STAR-100 and why these adaptations yielded an efficient STAR program is described. Program listings of the old scalar version and the vectorized SL/1 version are presented in the appendices. Execution times for the two versions, applied to the same system of linear equations, are compared.
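The numerical task underneath, solving an over-determined system of linear equations in the least-squares sense via a QR factorization, can be sketched in a few lines of Python. This is the generic algorithm only, not the CDC 6000 scalar code or the vectorized SL/1 code documented in the report.

```python
import numpy as np

def qr_least_squares(A, b):
    """Solve min ||A x - b||_2 for an m x n system (m >= n)
    using the thin QR factorization A = Q R."""
    Q, R = np.linalg.qr(A)            # Q: m x n, R: n x n upper triangular
    # R x = Q^T b is then solved for x.
    return np.linalg.solve(R, Q.T @ b)

# Over-determined example: fit y = c0 + c1 * t to noisy data.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
y = 2.0 + 3.0 * t + 0.01 * rng.standard_normal(t.size)
A = np.column_stack([np.ones_like(t), t])
print(qr_least_squares(A, y))         # close to [2.0, 3.0]
```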
PROLOG to the Future: A Glimpse of Things to Come in Artificial Intelligence.
ERIC Educational Resources Information Center
Herther, Nancy K.
1986-01-01
Briefly introduces the programming languages of artificial intelligence and presents information on some of the new versions of these languages available for microcomputers. A tutorial for PROLOG-86, a new microcomputer version of PROLOG, is given. Information on other microcomputer versions of these programs and a bibliography are included.…
MEKS: A program for computation of inclusive jet cross sections at hadron colliders
NASA Astrophysics Data System (ADS)
Gao, Jun; Liang, Zhihua; Soper, Davison E.; Lai, Hung-Liang; Nadolsky, Pavel M.; Yuan, C.-P.
2013-06-01
EKS is a numerical program that predicts differential cross sections for production of single-inclusive hadronic jets and jet pairs at next-to-leading order (NLO) accuracy in a perturbative QCD calculation. We describe MEKS 1.0, an upgraded EKS program with increased numerical precision, suitable for comparisons to the latest experimental data from the Large Hadron Collider and Tevatron. The program integrates the regularized parton-level matrix elements over the kinematical phase space for production of two and three partons using the VEGAS algorithm. It stores the generated weighted events in finely binned two-dimensional histograms for fast offline analysis. A user interface allows one to customize computation of inclusive jet observables. Results of a benchmark comparison of the MEKS program and the commonly used FastNLO program are also documented. Program summaryProgram title: MEKS 1.0 Catalogue identifier: AEOX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 9234 No. of bytes in distributed program, including test data, etc.: 51997 Distribution format: tar.gz Programming language: Fortran (main program), C (CUBA library and analysis program). Computer: All. Operating system: Any UNIX-like system. RAM: ~300 MB Classification: 11.1. External routines: LHAPDF (https://lhapdf.hepforge.org/) Nature of problem: Computation of differential cross sections for inclusive production of single hadronic jets and jet pairs at next-to-leading order accuracy in perturbative quantum chromodynamics. Solution method: Upon subtraction of infrared singularities, the hard-scattering matrix elements are integrated over available phase space using an optimized VEGAS algorithm. Weighted events are generated and filled into a finely binned two-dimensional histogram, from which the final cross sections with typical experimental binning and cuts are computed by an independent analysis program. Monte Carlo sampling of event weights is tuned automatically to get better efficiency. Running time: Depends on details of the calculation and sought numerical accuracy. See benchmark performance in Section 4. The tests provided take approximately 27 min for the jetbin run and a few seconds for jetana.
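The event-storage strategy described above, filling weighted events into a finely binned two-dimensional histogram that is later rebinned offline, can be illustrated with a short Python sketch. The toy event sample, binning and normalization below are assumptions chosen for illustration only; MEKS itself is a Fortran program and fills the histogram with the actual NLO weights produced by VEGAS.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "events": jet transverse momentum pT, rapidity y, and a weight,
# standing in for the weighted phase-space points of the integration.
n_events = 100_000
pt = rng.uniform(50.0, 500.0, n_events)
y = rng.uniform(-2.5, 2.5, n_events)
w = rng.normal(1.0, 0.3, n_events)

# Fine binning is stored once ...
fine_pt_edges = np.linspace(50.0, 500.0, 451)     # 1 GeV bins
fine_y_edges = np.linspace(-2.5, 2.5, 51)         # 0.1 rapidity bins
hist, _, _ = np.histogram2d(pt, y, bins=[fine_pt_edges, fine_y_edges], weights=w)

# ... then rebinned offline to an experimental binning without rerunning.
coarse = hist.reshape(45, 10, 10, 5).sum(axis=(1, 3))   # 10 GeV x 0.5 rapidity bins
norm = 1.0 / n_events                                    # placeholder normalization
print((coarse * norm).shape, (coarse * norm)[0, 0])
```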
Chaste: A test-driven approach to software development for biological modelling
NASA Astrophysics Data System (ADS)
Pitt-Francis, Joe; Pathmanathan, Pras; Bernabeu, Miguel O.; Bordas, Rafel; Cooper, Jonathan; Fletcher, Alexander G.; Mirams, Gary R.; Murray, Philip; Osborne, James M.; Walter, Alex; Chapman, S. Jon; Garny, Alan; van Leeuwen, Ingeborg M. M.; Maini, Philip K.; Rodríguez, Blanca; Waters, Sarah L.; Whiteley, Jonathan P.; Byrne, Helen M.; Gavaghan, David J.
2009-12-01
Chaste ('Cancer, heart and soft-tissue environment') is a software library and a set of test suites for computational simulations in the domain of biology. Current functionality has arisen from modelling in the fields of cancer, cardiac physiology and soft-tissue mechanics. It is released under the LGPL 2.1 licence. Chaste has been developed using agile programming methods. The project began in 2005 when it was reasoned that the modelling of a variety of physiological phenomena required both a generic mathematical modelling framework and a generic computational/simulation framework. The Chaste project evolved from the Integrative Biology (IB) e-Science Project, an inter-institutional project aimed at developing a suitable IT infrastructure to support physiome-level computational modelling, with a primary focus on cardiac and cancer modelling. Program summaryProgram title: Chaste Catalogue identifier: AEFD_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFD_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: LGPL 2.1 No. of lines in distributed program, including test data, etc.: 5 407 321 No. of bytes in distributed program, including test data, etc.: 42 004 554 Distribution format: tar.gz Programming language: C++ Operating system: Unix Has the code been vectorised or parallelized?: Yes. Parallelized using MPI. RAM: <90 Megabytes for two of the scenarios described in Section 6 of the manuscript (Monodomain re-entry on a slab or Cylindrical crypt simulation). Up to 16 Gigabytes (distributed across processors) for full resolution bidomain cardiac simulation. Classification: 3. External routines: Boost, CodeSynthesis XSD, CxxTest, HDF5, METIS, MPI, PETSc, Triangle, Xerces Nature of problem: Chaste may be used for solving coupled ODE and PDE systems arising from modelling biological systems. Use of Chaste in two application areas is described in this paper: cardiac electrophysiology and intestinal crypt dynamics. Solution method: Coupled multi-physics with PDE, ODE and discrete mechanics simulation. Running time: The largest cardiac simulation described in the manuscript takes about 6 hours to run on a single 3 GHz core. See results section (Section 6) of the manuscript for discussion on parallel scaling.
Aircraft noise prediction program propeller analysis system IBM-PC version user's manual version 2.0
NASA Technical Reports Server (NTRS)
Nolan, Sandra K.
1988-01-01
The IBM-PC version of the Aircraft Noise Prediction Program (ANOPP) Propeller Analysis System (PAS) is a set of computational programs for predicting the aerodynamics, performance, and noise of propellers. The ANOPP-PAS is a subset of a larger version of ANOPP which can be executed on CDC or VAX computers. This manual provides a description of the IBM-PC version of the ANOPP-PAS and its prediction capabilities, and instructions on how to use the system on an IBM-XT or IBM-AT personal computer. Sections within the manual document installation, system design, ANOPP-PAS usage, data entry preprocessors, and ANOPP-PAS functional modules and procedures. Appendices to the manual include a glossary of ANOPP terms and information on error diagnostics and recovery techniques.
42 CFR 423.160 - Standards for electronic prescribing.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Prescriber/Pharmacist Interface SCRIPT Standard, Implementation Guide, Version 8, Release 1 (Version 8.1) ...
Ahn, SangNam; Smith, Matthew Lee; Altpeter, Mary; Belza, Basia; Post, Lindsey; Ory, Marcia G.
2015-01-01
Maintaining intervention fidelity should be part of any programmatic quality assurance (QA) plan and is often a licensure requirement. However, fidelity checklists designed by original program developers are often lengthy, which makes compliance difficult once programs become widely disseminated in the field. As a case example, we used Stanford’s original Chronic Disease Self-Management Program (CDSMP) fidelity checklist of 157 items to demonstrate heuristic procedures for generating shorter fidelity checklists. Using an expert consensus approach, we sought feedback from active master trainers registered with the Stanford University Patient Education Research Center about which items were most essential to, and also feasible for, assessing fidelity. We conducted three sequential surveys and one expert group-teleconference call. Three versions of the fidelity checklist were created using different statistical and methodological criteria. In a final group-teleconference call with seven national experts, there was unanimous agreement that all three final versions (e.g., a 34-item version, a 20-item version, and a 12-item version) should be made available because the purpose and resources for administering a checklist might vary from one setting to another. This study highlights the methodology used to generate shorter versions of a fidelity checklist, which has potential to inform future QA efforts for this and other evidence-based programs (EBP) for older adults delivered in community settings. With CDSMP and other EBP, it is important to differentiate between program fidelity as mandated by program developers for licensure, and intervention fidelity tools for providing an “at-a-glance” snapshot of the level of compliance to selected program indicators. PMID:25964941
A users' guide to the trace contaminant control simulation computer program
NASA Technical Reports Server (NTRS)
Perry, J. L.
1994-01-01
The Trace Contaminant Control Simulation computer program is a tool for assessing the performance of various trace contaminant control technologies for removing trace chemical contamination from a spacecraft cabin atmosphere. The results obtained from the program can be useful in assessing different technology combinations, system sizing, system location with respect to other life support systems, and the overall life cycle economics of a trace contaminant control system. The user's manual is extracted in its entirety from NASA TM-108409 to provide a stand-alone reference for using any version of the program. The first publication of the manual as part of TM-108409 also included a detailed listing of version 8.0 of the program. As changes to the code were necessary, it became apparent that the user's manual should be separate from the computer code documentation and be general enough to provide guidance in using any version of the program. Provided in the guide are tips for input file preparation, general program execution, and output file manipulation. Information concerning source code listings of the latest version of the computer program may be obtained by contacting the author.
HYDROLOGIC EVALUATION OF LANDFILL PERFORMANCE (HELP) MODEL - USER'S GUIDE FOR VERSION 3
This report documents the solution methods and process descriptions used in the Version 3 of the HELP model. Program documentation including program options, system and operating requirements, file structures, program structure and variable descriptions are provided in a separat...
NASA Astrophysics Data System (ADS)
Reisel, John R.; Jablonski, Marissa; Hosseini, Hossein; Munson, Ethan
2012-06-01
A summer bridge program for incoming engineering and computer science freshmen has been used at the University of Wisconsin-Milwaukee from 2007 to 2010. The primary purpose of this program has been to improve the mathematics course placement for incoming students who initially place into a course below Calculus I on the math placement examination. The students retake the university's math placement examination after completing the bridge program to determine if they then place into a higher-level mathematics course. If the students improve their math placement, the program is considered successful for that student. The math portion of the bridge program is designed around using the ALEKS software package for targeted, self-guided learning. In the 2007 and 2008 versions of the program, both an on-line version and an on-campus version with additional instruction were offered. In 2009 and 2010, the program was exclusively in an on-campus format, and also featured a required residential component and additional engineering activities for the students. From the results of these four programs, we are able to evaluate the success of the program in its different formats. In addition, data has been collected and analysed regarding the impact of other factors on the program's success. The factors include student preparation before the beginning of the program (as measured by math ACT scores) and the amount of time the student spent working on the material during the program. Better math preparation and the amount of time spent on the program were found to be good indicators of success. Furthermore, the on-campus version of the program is more effective than the on-line version.
Avionic Data Bus Integration Technology
1991-12-01
... address the hardware-software interaction between a digital data bus and an avionic system, Very Large Scale Integration (VLSI) ICs and multiversion ... the SCP. In 1984, the Sperry Corporation developed a fault tolerant system which employed multiversion programming, voting, and monitoring for error ... N-version programming: the independent coding of a number, N, of redundant computer programs that ...
Validation studies of the DOE-2 Building Energy Simulation Program. Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sullivan, R.; Winkelmann, F.
1998-06-01
This report documents many of the validation studies (Table 1) of the DOE-2 building energy analysis simulation program that have taken place since 1981. Results for several versions of the program are presented, with the most recent study conducted in 1996 on version DOE-2.1E and the most distant study conducted in 1981 on version DOE-1.3. This work is part of an effort related to continued development of DOE-2, particularly in its use as a simulation engine for new specialized versions of the program such as the recently released RESFEN 3.1. RESFEN 3.1 is a program specifically dealing with analyzing the energy performance of windows in residential buildings. The intent in providing the results of these validation studies is to give potential users of the program a high degree of confidence in the calculated results. Validation studies in which calculated simulation data is compared to measured data have been conducted throughout the development of the DOE-2 program. Discrepancies discovered during the course of such work have resulted in improvements in the simulation algorithms. Table 2 provides a listing of additions and modifications that have been made to various versions of the program since version DOE-2.1A. One of the most significant recent changes in the program occurred with version DOE-2.1E. An improved algorithm for calculating the outside surface film coefficient was implemented. In addition, integration of the WINDOW 4 program was accomplished, resulting in improved ability in analyzing window energy performance. Validation and verification of a program as sophisticated as DOE-2 must necessarily be limited because of the approximations inherent in the program. For example, the most accurate model of the heat transfer processes in a building would include a three-dimensional analysis. To justify such detailed algorithmic procedures would correspondingly require detailed information describing the building and/or HVAC system and energy plant parameters. Until building simulation programs can get this data directly from CAD programs, such detail would negate the usefulness of the program for the practicing engineers and architects who currently use the program. In addition, the validation studies discussed herein indicate that such detail is really unnecessary. The comparison of calculated and measured quantities has resulted in a satisfactory level of confidence that is sufficient for continued use of the DOE-2 program. However, additional validation is warranted, particularly at the component level, to further improve the program.
Comprehensive Monitoring Program: Air Quality Data Assessment Report for FY90. Volume 2. Version 3.1
1991-09-01
91311R01 (Version 3.10), Volume II. Comprehensive Monitoring Program, Contract Number DAAAI5-87-0095: Final Air Quality Data Assessment Report for FY90, Version 3.1. Distribution is unlimited. The objective of this CMP is to verify and evaluate potential air quality health ...
Excoffier, Laurent; Lischer, Heidi E L
2010-05-01
We present here a new version of the Arlequin program available under three different forms: a Windows graphical version (Winarl35), a console version of Arlequin (arlecore), and a specific console version to compute summary statistics (arlsumstat). The command-line versions run under both Linux and Windows. The main innovations of the new version include enhanced outputs in XML format, the possibility to embed graphics displaying computation results directly into output files, and the implementation of a new method to detect loci under selection from genome scans. Command-line versions are designed to handle large series of files, and arlsumstat can be used to generate summary statistics from simulated data sets within an Approximate Bayesian Computation framework. © 2010 Blackwell Publishing Ltd.
TweezPal - Optical tweezers analysis and calibration software
NASA Astrophysics Data System (ADS)
Osterman, Natan
2010-11-01
Optical tweezers, a powerful tool for optical trapping, micromanipulation and force transduction, have in recent years become a standard technique commonly used in many research laboratories and university courses. Knowledge about the optical force acting on a trapped object can be gained only after a calibration procedure which has to be performed (by an expert) for each type of trapped objects. In this paper we present TweezPal, a user-friendly, standalone Windows software tool for optical tweezers analysis and calibration. Using TweezPal, the procedure can be performed in a matter of minutes even by non-expert users. The calibration is based on the Brownian motion of a particle trapped in a stationary optical trap, which is being monitored using video or photodiode detection. The particle trajectory is imported into the software which instantly calculates position histogram, trapping potential, stiffness and anisotropy. Program summaryProgram title: TweezPal Catalogue identifier: AEGR_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGR_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 44 891 No. of bytes in distributed program, including test data, etc.: 792 653 Distribution format: tar.gz Programming language: Borland Delphi Computer: Any PC running Microsoft Windows Operating system: Windows 95, 98, 2000, XP, Vista, 7 RAM: 12 Mbytes Classification: 3, 4.14, 18, 23 Nature of problem: Quick, robust and user-friendly calibration and analysis of optical tweezers. The optical trap is calibrated from the trajectory of a trapped particle undergoing Brownian motion in a stationary optical trap (input data) using two methods. Solution method: Elimination of the experimental drift in position data. Direct calculation of the trap stiffness from the positional variance. Calculation of 1D optical trapping potential from the positional distribution of data points. Trap stiffness calculation by fitting a parabola to the trapping potential. Presentation of X-Y positional density for close inspection of the 2D trapping potential. Calculation of the trap anisotropy. Running time: Seconds
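The two calibration routes used by the program, trap stiffness from the positional variance (equipartition) and the trapping potential from Boltzmann inversion of the position histogram, are compact enough to sketch directly. The Python fragment below demonstrates both on a synthetic trajectory; it is an illustration of the method, not the Delphi code of TweezPal, and the thermal energy and stiffness values are arbitrary.

```python
import numpy as np

kB_T = 4.11e-21          # thermal energy at ~298 K, in joules

# Synthetic 1D trajectory of a trapped bead (metres); in practice this
# comes from video or photodiode tracking.
rng = np.random.default_rng(2)
k_true = 1.0e-6          # assumed stiffness, N/m
x = rng.normal(0.0, np.sqrt(kB_T / k_true), 200_000)

# (1) Equipartition: k = kB*T / var(x), after removing any offset/drift.
x = x - x.mean()
k_var = kB_T / np.var(x)

# (2) Boltzmann inversion: U(x) = -kB*T * ln p(x) up to a constant, then
#     fit a parabola U = 0.5 * k * x^2 to extract the stiffness.
counts, edges = np.histogram(x, bins=80)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0
U = -kB_T * np.log(counts[mask] / counts.sum())
coeff = np.polyfit(centers[mask], U, deg=2)     # U ~ c2*x^2 + c1*x + c0
k_pot = 2.0 * coeff[0]

print(f"k from variance : {k_var:.3e} N/m")
print(f"k from potential: {k_pot:.3e} N/m")
```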
FeynArts model file for MSSM transition counterterms from DREG to DRED
NASA Astrophysics Data System (ADS)
Stöckinger, Dominik; Varšo, Philipp
2012-02-01
The FeynArts model file MSSMdreg2dred implements MSSM transition counterterms which can convert one-loop Green functions from dimensional regularization to dimensional reduction. They correspond to a slight extension of the well-known Martin/Vaughn counterterms, specialized to the MSSM, and can serve also as supersymmetry-restoring counterterms. The paper provides full analytic results for the counterterms and gives one- and two-loop usage examples. The model file can simplify combining MS¯-parton distribution functions with supersymmetric renormalization or avoiding the renormalization of ɛ-scalars in dimensional reduction. Program summaryProgram title:MSSMdreg2dred.mod Catalogue identifier: AEKR_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKR_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: LGPL-License [1] No. of lines in distributed program, including test data, etc.: 7600 No. of bytes in distributed program, including test data, etc.: 197 629 Distribution format: tar.gz Programming language: Mathematica, FeynArts Computer: Any, capable of running Mathematica and FeynArts Operating system: Any, with running Mathematica, FeynArts installation Classification: 4.4, 5, 11.1 Subprograms used: Cat Id Title Reference ADOW_v1_0 FeynArts CPC 140 (2001) 418 Nature of problem: The computation of one-loop Feynman diagrams in the minimal supersymmetric standard model (MSSM) requires regularization. Two schemes, dimensional regularization and dimensional reduction are both common but have different pros and cons. In order to combine the advantages of both schemes one would like to easily convert existing results from one scheme into the other. Solution method: Finite counterterms are constructed which correspond precisely to the one-loop scheme differences for the MSSM. They are provided as a FeynArts [2] model file. Using this model file together with FeynArts, the (ultra-violet) regularization of any MSSM one-loop Green function is switched automatically from dimensional regularization to dimensional reduction. In particular the counterterms serve as supersymmetry-restoring counterterms for dimensional regularization. Restrictions: The counterterms are restricted to the one-loop level and the MSSM. Running time: A few seconds to generate typical Feynman graphs with FeynArts.
Plato: A localised orbital based density functional theory code
NASA Astrophysics Data System (ADS)
Kenny, S. D.; Horsfield, A. P.
2009-12-01
The Plato package allows both orthogonal and non-orthogonal tight-binding as well as density functional theory (DFT) calculations to be performed within a single framework. The package also provides extensive tools for analysing the results of simulations as well as a number of tools for creating input files. The code is based upon the ideas first discussed in Sankey and Niklewski (1989) [1] with extensions to allow high-quality DFT calculations to be performed. DFT calculations can utilise either the local density approximation or the generalised gradient approximation. Basis sets from minimal basis through to ones containing multiple radial functions per angular momenta and polarisation functions can be used. Illustrations of how the package has been employed are given along with instructions for its utilisation. Program summaryProgram title: Plato Catalogue identifier: AEFC_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 219 974 No. of bytes in distributed program, including test data, etc.: 1 821 493 Distribution format: tar.gz Programming language: C/MPI and PERL Computer: Apple Macintosh, PC, Unix machines Operating system: Unix, Linux and Mac OS X Has the code been vectorised or parallelised?: Yes, up to 256 processors tested RAM: Up to 2 Gbytes per processor Classification: 7.3 External routines: LAPACK, BLAS and optionally ScaLAPACK, BLACS, PBLAS, FFTW Nature of problem: Density functional theory study of electronic structure and total energies of molecules, crystals and surfaces. Solution method: Localised orbital based density functional theory. Restrictions: Tight-binding and density functional theory only, no exact exchange. Unusual features: Both atom centred and uniform meshes available. Can deal with arbitrary angular momenta for orbitals, whilst still retaining Slater-Koster tables for accuracy. Running time: Test cases will run in a few minutes, large calculations may run for several days.
QuTiP 2: A Python framework for the dynamics of open quantum systems
NASA Astrophysics Data System (ADS)
Johansson, J. R.; Nation, P. D.; Nori, Franco
2013-04-01
We present version 2 of QuTiP, the Quantum Toolbox in Python. Compared to the preceding version [J.R. Johansson, P.D. Nation, F. Nori, Comput. Phys. Commun. 183 (2012) 1760.], we have introduced numerous new features, enhanced performance, and made changes in the Application Programming Interface (API) for improved functionality and consistency within the package, as well as increased compatibility with existing conventions used in other scientific software packages for Python. The most significant new features include efficient solvers for arbitrary time-dependent Hamiltonians and collapse operators, support for the Floquet formalism, and new solvers for Bloch-Redfield and Floquet-Markov master equations. Here we introduce these new features, demonstrate their use, and give a summary of the important backward-incompatible API changes introduced in this version. Catalog identifier: AEMB_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMB_v2_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 33625 No. of bytes in distributed program, including test data, etc.: 410064 Distribution format: tar.gz Programming language: Python. Computer: i386, x86-64. Operating system: Linux, Mac OSX. RAM: 2+ Gigabytes Classification: 7. External routines: NumPy, SciPy, Matplotlib, Cython Catalog identifier of previous version: AEMB_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 1760 Does the new version supercede the previous version?: Yes Nature of problem: Dynamics of open quantum systems Solution method: Numerical solutions to Lindblad, Floquet-Markov, and Bloch-Redfield master equations, as well as the Monte Carlo wave function method. Reasons for new version: Compared to the preceding version we have introduced numerous new features, enhanced performance, and made changes in the Application Programming Interface (API) for improved functionality and consistency within the package, as well as increased compatibility with existing conventions used in other scientific software packages for Python. The most significant new features include efficient solvers for arbitrary time-dependent Hamiltonians and collapse operators, support for the Floquet formalism, and new solvers for Bloch-Redfield and Floquet-Markov master equations. Restrictions: Problems must meet the criteria for using the master equation in Lindblad, Floquet-Markov, or Bloch-Redfield form. Running time: A few seconds up to several tens of hours, depending on size of the underlying Hilbert space.
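As a usage illustration, following the conventions of the QuTiP API described above, a Lindblad master-equation run for a damped two-level system looks roughly like the sketch below. The example and its parameter values are ours (a standard textbook case), not taken from the paper.

```python
import numpy as np
from qutip import basis, sigmam, sigmaz, mesolve

# Damped two-level system: H = (omega/2) * sigma_z with decay rate gamma.
omega, gamma = 1.0, 0.1
H = 0.5 * omega * sigmaz()
c_ops = [np.sqrt(gamma) * sigmam()]   # collapse operator for spontaneous decay
psi0 = basis(2, 0)                    # the sigma_z = +1 (excited) state

tlist = np.linspace(0.0, 50.0, 500)
result = mesolve(H, psi0, tlist, c_ops, [sigmaz()])

# result.expect[0] holds <sigma_z>(t), decaying towards the ground state.
print(result.expect[0][:5])
```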
Siegel, Michael; Kurland, Rachel P.; Castrini, Marisa; Morse, Catherine; de Groot, Alexander; Retamozo, Cynthia; Roberts, Sarah P.; Ross, Craig S.; Jernigan, David H.
2015-01-01
Background No previous paper has examined alcohol advertising on the internet versions of television programs popular among underage youth. Objectives To assess the volume of alcohol advertising on web sites of television networks which stream television programs popular among youth. Methods Multiple viewers analyzed the product advertising appearing on 12 television programs that are available in full episode format on the internet. During a baseline period of one week, six coders analyzed all 12 programs. For the nine programs that contained alcohol advertising, three underage coders (ages 10, 13, and 18) analyzed the programs to quantify the extent of that advertising over a four-week period. Results Alcohol advertisements are highly prevalent on these programs, with nine of the 12 shows carrying alcohol ads, and six programs averaging at least one alcohol ad per episode. There was no difference in alcohol ad exposure for underage and legal age viewers. Conclusions There is a substantial potential for youth exposure to alcohol advertising on the internet through internet-based versions of television programs. The Federal Trade Commission should require alcohol companies to report the underage youth and adult audiences for internet versions of television programs on which they advertise. PMID:27212891
Expert System for Automated Design Synthesis
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Barthelemy, Jean-Francois M.
1987-01-01
Expert-system computer program EXADS developed to aid users of Automated Design Synthesis (ADS) general-purpose optimization program. EXADS aids engineer in determining best combination based on knowledge of specific problem and expert knowledge stored in knowledge base. Available in two interactive machine versions. IBM PC version (LAR-13687) written in IQ-LISP. DEC VAX version (LAR-13688) written in Franz-LISP.
A new version of a computer program for dynamical calculations of RHEED intensity oscillations
NASA Astrophysics Data System (ADS)
Daniluk, Andrzej; Skrobas, Kazimierz
2006-01-01
We present a new version of the RHEED program which contains a graphical user interface enabling the use of the program in a graphical environment. The presented program also contains a graphical component which enables displaying program data at run-time through an easy-to-use graphical interface. New version program summaryTitle of program: RHEEDGr Catalogue identifier: ADWV Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADWV Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Catalogue identifier of previous version: ADUY Authors of the original program: A. Daniluk Does the new version supersede the original program: no Computer for which the new version is designed and others on which it has been tested: Pentium-based PC Operating systems or monitors under which the new version has been tested: Windows 9x, XP, NT Programming language used: Borland C++ Builder Memory required to execute with typical data: more than 1 MB Number of bits in a word: 64 bits Number of processors used: 1 Number of lines in distributed program, including test data, etc.: 5797 Number of bytes in distributed program, including test data, etc.: 588 121 Distribution format: tar.gz Nature of physical problem: Reflection high-energy electron diffraction (RHEED) is a very useful technique for studying growth and surface analysis of thin epitaxial structures prepared by molecular beam epitaxy (MBE). The RHEED technique can reveal, almost instantaneously, changes either in the coverage of the sample surface by adsorbates or in the surface structure of a thin film. Method of solution: RHEED intensities are calculated within the framework of the general matrix formulation of Peng and Whelan [1] under the one-beam condition. Reasons for the new version: Responding to user feedback, we designed a graphical package that enables displaying program data at run-time through an easy-to-use graphical interface. Summary of revisions: In the present form the code is an object-oriented extension of the previous version [2]. Fig. 1 shows the static structure of classes and their possible relationships (i.e. inheritance, association, aggregation and dependency) in the code. The code has been modified and optimized to compile under the C++ Builder integrated development environment (IDE). A graphical user interface (GUI) for the program has been created. The application is a standard multiple document interface (MDI) project from Builder's object repository. The MDI application spawns child windows that reside within the client window; the main form contains the child object. We have added an original graphical component [3] which has been tested successfully in the C++ Builder programming environment under the Microsoft Windows platform. Fig. 2 shows the internal structure of the component. This diagram is a graphic presentation of the static view which shows a collection of declarative model elements, such as classes, types, and their relationships. Each of the model elements shown in Fig. 2 is manifested by one header file Graph2D.h and one code file Graph2D.cpp. Fig. 3 sets the stage by showing the package which supplies the C++ Builder elements used in the component. Installation instructions for the TGraph2D.bpk package can be found in the new distribution. The program has been constructed according to the systems development life cycle (SDLC) methodology [4]. Typical running time: The typical running time is machine and user-parameters dependent. 
Unusual features of the program: The program is distributed in the form of a main project RHEEDGr.bpr with associated files, and should be compiled using Borland C++ Builder compilers version 5 or later.
Software fault-tolerance by design diversity DEDIX: A tool for experiments
NASA Technical Reports Server (NTRS)
Avizienis, A.; Gunningberg, P.; Kelly, J. P. J.; Lyu, R. T.; Strigini, L.; Traverse, P. J.; Tso, K. S.; Voges, U.
1986-01-01
The use of multiple versions of a computer program, independently designed from a common specification, to reduce the effects of an error is discussed. If these versions are designed by independent programming teams, it is expected that a fault in one version will not have the same behavior as any fault in the other versions. Since the errors in the output of the versions are different and uncorrelated, it is possible to run the versions concurrently, cross-check their results at prespecified points, and mask errors. A DEsign DIversity eXperiments (DEDIX) testbed was implemented to study the influence of common mode errors which can result in a failure of the entire system. The layered design of DEDIX and its decision algorithm are described.
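The cross-check-and-mask step described above is, at its core, a vote over the outputs of the N versions at each comparison point. The schematic Python function below illustrates such a decision step with an agreement tolerance for numeric outputs; DEDIX's actual decision algorithm is more elaborate, so treat this as a sketch of the idea only.

```python
def vote(outputs, tol=1e-9):
    """Majority vote over the outputs of N independently written versions.

    Numeric outputs within `tol` of each other are treated as agreeing.
    Returns (consensus_value, indices_of_outvoted_versions); raises if
    no majority exists.
    """
    n = len(outputs)
    classes = []                       # list of (representative, member indices)
    for i, out in enumerate(outputs):
        for rep, members in classes:
            same = (abs(out - rep) <= tol if isinstance(out, float) else out == rep)
            if same:
                members.append(i)
                break
        else:
            classes.append((out, [i]))
    rep, members = max(classes, key=lambda c: len(c[1]))
    if len(members) <= n // 2:
        raise RuntimeError("no majority: versions disagree")
    outvoted = [i for i in range(n) if i not in members]
    return rep, outvoted

# Three versions compute the same quantity; version 2 contains a fault.
results = [3.141592653589793, 3.141592653589793, 3.15]
value, suspects = vote(results)
print("masked value:", value, "suspect versions:", suspects)
```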
ERIC Educational Resources Information Center
Phornphisutthimas, Somkiat; Panijpan, Bhinyo; Wood, Edward J.; Booth, Andrew G.
2007-01-01
To support student learning in biochemistry and related courses, a simulation program, the Protein Purification Program, offers an alternative multimedia-based tool. This program has now been translated to produce a Thai version. However, translation from the original into the Thai language is limited by the differences between the language…
TERS v2.0: An improved version of TERS
NASA Astrophysics Data System (ADS)
Nath, S.
2009-11-01
We present a new version of the semimicroscopic Monte Carlo code "TERS". The procedure for calculating multiple small angle Coulomb scattering of the residues in the target has been modified. Target-backing and residue charge-reset foils, which are often used in heavy ion-induced complete fusion reactions, are included in the code. New version program summaryProgram title: TERS v2.0 Catalogue identifier: AEBD_v2_0 Program summary URL:
Las Palmeras Molecular Dynamics: A flexible and modular molecular dynamics code
NASA Astrophysics Data System (ADS)
Davis, Sergio; Loyola, Claudia; González, Felipe; Peralta, Joaquín
2010-12-01
Las Palmeras Molecular Dynamics (LPMD) is a highly modular and extensible molecular dynamics (MD) code using interatomic potential functions. LPMD is able to perform equilibrium MD simulations of bulk crystalline solids, amorphous solids and liquids, as well as non-equilibrium MD (NEMD) simulations such as shock wave propagation, projectile impacts, cluster collisions, shearing, deformation under load, heat conduction, heterogeneous melting, among others, which involve unusual MD features like non-moving atoms and walls, unstoppable atoms with constant-velocity, and external forces like electric fields. LPMD is written in C++ as a compromise between efficiency and clarity of design, and its architecture is based on separate components or plug-ins, implemented as modules which are loaded on demand at runtime. The advantage of this architecture is the ability to completely link together the desired components involved in the simulation in different ways at runtime, using a user-friendly control file language which describes the simulation work-flow. As an added bonus, the plug-in API (Application Programming Interface) makes it possible to use the LPMD components to analyze data coming from other simulation packages, convert between input file formats, apply different transformations to saved MD atomic trajectories, and visualize dynamical processes either in real-time or as a post-processing step. Individual components, such as a new potential function, a new integrator, a new file format, new properties to calculate, new real-time visualizers, and even a new algorithm for handling neighbor lists can be easily coded, compiled and tested within LPMD by virtue of its object-oriented API, without the need to modify the rest of the code. LPMD includes already several pair potential functions such as Lennard-Jones, Morse, Buckingham, MCY and the harmonic potential, as well as embedded-atom model (EAM) functions such as the Sutton-Chen and Gupta potentials. Integrators to choose include Euler (if only for demonstration purposes), Verlet and Velocity Verlet, Leapfrog and Beeman, among others. Electrostatic forces are treated as another potential function, by default using the plug-in implementing the Ewald summation method. Program summaryProgram title: LPMD Catalogue identifier: AEHG_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 509 490 No. of bytes in distributed program, including test data, etc.: 6 814 754 Distribution format: tar.gz Programming language: C++ Computer: 32-bit and 64-bit workstation Operating system: UNIX RAM: Minimum 1024 bytes Classification: 7.7 External routines: zlib, OpenGL Nature of problem: Study of Statistical Mechanics and Thermodynamics of condensed matter systems, as well as kinetics of non-equilibrium processes in the same systems. Solution method: Equilibrium and non-equilibrium molecular dynamics method, Monte Carlo methods. Restrictions: Rigid molecules are not supported. Polarizable atoms and chemical bonds (proteins) either. Unusual features: The program is able to change the temperature of the simulation cell, the pressure, cut regions of the cell, color the atoms by properties, even during the simulation. It is also possible to fix the positions and/or velocity of groups of atoms. 
Visualization of atoms and some physical properties during the simulation. Additional comments: The program does not only perform molecular dynamics and Monte Carlo simulations, it is also able to filter and manipulate atomic configurations, read and write different file formats, convert between them, evaluate different structural and dynamical properties. Running time: 50 seconds on a 1000-step simulation of 4000 argon atoms, running on a single 2.67 GHz Intel processor.
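As an illustration of the kind of building blocks LPMD chains together (a pair potential plus an integrator), the following self-contained Python sketch performs velocity-Verlet steps with a Lennard-Jones potential in reduced units. It is didactic only: it is not LPMD's C++ plug-in code, and it omits cutoffs, neighbour lists, periodic boundaries and thermostats.

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Lennard-Jones forces and potential energy over all pairs (no cutoff)."""
    n = len(pos)
    forces = np.zeros_like(pos)
    energy = 0.0
    for i in range(n - 1):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = rij @ rij
            sr6 = (sigma ** 2 / r2) ** 3
            energy += 4.0 * eps * (sr6 ** 2 - sr6)
            fij = 24.0 * eps * (2.0 * sr6 ** 2 - sr6) / r2 * rij
            forces[i] += fij
            forces[j] -= fij
    return forces, energy

def velocity_verlet(pos, vel, mass, dt, steps):
    forces, energy = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * forces / mass
        pos += dt * vel
        forces, energy = lj_forces(pos)
        vel += 0.5 * dt * forces / mass
    return pos, vel, energy

# Tiny cluster on a cubic lattice (reduced units), zero initial velocities.
grid = np.arange(2)
pos = np.array([[x, y, z] for x in grid for y in grid for z in grid], dtype=float) * 1.2
vel = np.zeros_like(pos)
pos, vel, energy = velocity_verlet(pos, vel, mass=1.0, dt=1e-3, steps=100)
print("potential energy:", energy)
```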
Calculating Trajectories And Orbits
NASA Technical Reports Server (NTRS)
Alderson, Daniel J.; Brady, Franklyn H.; Breckheimer, Peter J.; Campbell, James K.; Christensen, Carl S.; Collier, James B.; Ekelund, John E.; Ellis, Jordan; Goltz, Gene L.; Hintz, Gerarld R.;
1989-01-01
Double-Precision Trajectory Analysis Program, DPTRAJ, and Orbit Determination Program, ODP, developed and improved over years to provide highly reliable and accurate navigation capability for deep-space missions like Voyager. Each collection of programs working together to provide desired computational results. DPTRAJ, ODP, and supporting utility programs capable of handling massive amounts of data and performing various numerical calculations required for solving navigation problems associated with planetary fly-by and lander missions. Used extensively in support of NASA's Voyager project. DPTRAJ-ODP available in two machine versions. UNIVAC version, NPO-15586, written in FORTRAN V, SFTRAN, and ASSEMBLER. VAX/VMS version, NPO-17201, written in FORTRAN V, SFTRAN, PL/1 and ASSEMBLER.
An all-FORTRAN version of NASTRAN for the VAX
NASA Technical Reports Server (NTRS)
Purves, L.
1981-01-01
All FORTRAN version of NASA structural analysis program NASTRAN is implemented on DEC VAX-series computer. Applications of NASTRAN extend to almost every type of linear structure and construction. Two special features are available in VAX version; program is executed from terminal in manner permitting use of VAX interactive debugger, and links are interactively restarted when desired by first making copy of all NASTRAN work files.
NASA Astrophysics Data System (ADS)
Carles, Guillem; Ferran, Carme; Carnicer, Artur; Bosch, Salvador
2012-01-01
A computational imaging system based on wavefront coding is presented. Wavefront coding provides an extension of the depth-of-field at the expense of a slight reduction of image quality. This trade-off depends on the amount of coding used. By using spatial light modulators, a flexible coding is achieved which permits the coding strength to be increased or decreased as needed. In this paper a computational method is proposed for evaluating the output of a wavefront coding imaging system equipped with a spatial light modulator, with the aim of making it possible to implement the most suitable coding strength for a given scene. This is achieved in an unsupervised manner, so the whole system acts as a dynamically self-adaptable imaging system. The program presented here controls the spatial light modulator and the camera, and also processes the images in a synchronised way in order to implement the dynamic system in real time. A prototype of the system was implemented in the laboratory and illustrative examples of the performance are reported in this paper. Program summary Program title: DynWFC (Dynamic WaveFront Coding) Catalogue identifier: AEKC_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 10 483 No. of bytes in distributed program, including test data, etc.: 2 437 713 Distribution format: tar.gz Programming language: Labview 8.5, NI Vision and MinGW C Compiler Computer: Tested on PC Intel ® Pentium ® Operating system: Tested on Windows XP Classification: 18 Nature of problem: The program implements an enhanced wavefront coding imaging system able to adapt the degree of coding to the requirements of a specific scene. The program controls the acquisition by a camera, the display of a spatial light modulator and the image processing operations synchronously. The spatial light modulator is used to implement the phase mask with flexibility, given the trade-off between depth-of-field extension and image quality achieved. The action of the program is to evaluate the depth-of-field requirements of the specific scene and subsequently control the coding established by the spatial light modulator, in real time.
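A hedged sketch of the adaptive loop described above: acquire a frame, estimate how much depth-of-field extension the scene seems to need, and map that estimate to a coding strength for the spatial light modulator. The sharpness metric, the mapping and the callback names are illustrative assumptions; the actual DynWFC program is written in LabVIEW and drives real camera and modulator hardware synchronously.

    import numpy as np

    def defocus_spread(frame, tile=32):
        """Crude scene metric: relative spread of local sharpness across image tiles.
        A large spread suggests objects at very different depths (more coding needed)."""
        tiles = [frame[i:i + tile, j:j + tile].astype(float)
                 for i in range(0, frame.shape[0] - tile + 1, tile)
                 for j in range(0, frame.shape[1] - tile + 1, tile)]
        sharpness = [np.var(np.diff(t, axis=0)) for t in tiles]
        return float(np.std(sharpness) / (np.mean(sharpness) + 1e-9))

    def coding_strength(spread, lo=0.0, hi=1.0):
        """Map the scene metric to a phase-mask strength in [lo, hi]."""
        return float(np.clip(spread, lo, hi))

    def control_loop(acquire_frame, apply_mask, n_iterations=5):
        """acquire_frame() and apply_mask(strength) are hypothetical callbacks standing in
        for the camera read-out and the spatial light modulator update."""
        for _ in range(n_iterations):
            strength = coding_strength(defocus_spread(acquire_frame()))
            apply_mask(strength)

    # Exercise the loop with synthetic frames instead of hardware.
    rng = np.random.default_rng(0)
    control_loop(lambda: rng.random((128, 128)),
                 lambda s: print("coding strength set to %.2f" % s))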
The report is a reference manual for RASSMIT Version 2.1, a computer program that was developed to simulate and aid in the design of sub-slab depressurization systems used for indoor radon mitigation. The program was designed to run on DOS-compatible personal computers to ensure ...
User's guide for Northeast Stand Exam Program (NEST Version 2.1).
Thomas M. Schuler; Brian T. Simpson
1991-01-01
Explains the Northeast Stand Exam (NEST Version 2.1) program. The NEST program was designed for use on the Polycorder 600 Series electronic portable data recorder to record data collected from the standard permanent plot as described by the Stand Culture and Stand Establishment Working Groups of the Northeastern Forest Experiment Station.
ERIC Educational Resources Information Center
Cole, Renee E.; Horacek, Tanya
2009-01-01
Objective: To describe the use of a consolidated version of the PRECEDE-PROCEED participatory program planning model to collaboratively design an intuitive eating program with Fort Drum military spouses tailored to their readiness to reject the dieting mentality and make healthful lifestyle modifications. Design: A consolidated version of…
ERIC Educational Resources Information Center
Abramson, Theodore; Kagen, Edward
This study investigated attribute-by-treatment interactions between prior familiarity and response mode to programmed materials for college-level subjects by manipulating subjects' familiarity. The programs were a revised version of Diagnosis of Myocardial Infarction in standard format and in a reading version. Materials to familiarize subjects…
42 CFR 423.160 - Standards for electronic prescribing.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard, Implementation... National Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard... Prescriber/Pharmacist Interface SCRIPT Standard, Implementation Guide Version 8, Release 1 (Version 8.1...
42 CFR 423.160 - Standards for electronic prescribing.
Code of Federal Regulations, 2013 CFR
2013-10-01
... National Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard... National Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard... Prescriber/Pharmacist Interface SCRIPT Standard, Implementation Guide Version 8, Release 1 (Version 8.1...
42 CFR 423.160 - Standards for electronic prescribing.
Code of Federal Regulations, 2014 CFR
2014-10-01
... National Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard... National Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard... Prescriber/Pharmacist Interface SCRIPT Standard, Implementation Guide Version 8, Release 1 (Version 8.1...
10 CFR 431.203 - Materials incorporated by reference.
Code of Federal Regulations, 2012 CFR
2012-01-01
.... Environmental Protection Agency “ENERGY STAR Program Requirements for Exit Signs,” Version 2.0 issued January 1... Protection Agency “ENERGY STAR Program Requirements for Exit Signs,” Version 2.0, may be obtained from the...
10 CFR 431.203 - Materials incorporated by reference.
Code of Federal Regulations, 2014 CFR
2014-01-01
.... Environmental Protection Agency “ENERGY STAR Program Requirements for Exit Signs,” Version 2.0 issued January 1... Protection Agency “ENERGY STAR Program Requirements for Exit Signs,” Version 2.0, may be obtained from the...
10 CFR 431.203 - Materials incorporated by reference.
Code of Federal Regulations, 2011 CFR
2011-01-01
.... Environmental Protection Agency “ENERGY STAR Program Requirements for Exit Signs,” Version 2.0 issued January 1... Protection Agency “ENERGY STAR Program Requirements for Exit Signs,” Version 2.0, may be obtained from the...
10 CFR 431.203 - Materials incorporated by reference.
Code of Federal Regulations, 2013 CFR
2013-01-01
.... Environmental Protection Agency “ENERGY STAR Program Requirements for Exit Signs,” Version 2.0 issued January 1... Protection Agency “ENERGY STAR Program Requirements for Exit Signs,” Version 2.0, may be obtained from the...
10 CFR 431.203 - Materials incorporated by reference.
Code of Federal Regulations, 2010 CFR
2010-01-01
.... Environmental Protection Agency “ENERGY STAR Program Requirements for Exit Signs,” Version 2.0 issued January 1... Protection Agency “ENERGY STAR Program Requirements for Exit Signs,” Version 2.0, may be obtained from the...
ERIC Educational Resources Information Center
Hasson, H.; Brown, C.; Hasson, D.
2010-01-01
In web-based health promotion programs, large variations in participant engagement are common. The aim was to investigate determinants of high use of a worksite self-help web-based program for stress management. Two versions of the program were offered to randomly selected departments in IT and media companies. A static version of the program…
ERIC Educational Resources Information Center
Hopson, Laura M.; Holleran Steiker, Lori K.
2010-01-01
Although there is a strong evidence base for effective substance abuse prevention programs for youths, there is a need to facilitate the implementation and evaluation of these programs in real-world settings. This study evaluates the effectiveness of adapted versions of an evidence-based prevention program, keepin' it REAL (kiR), with alternative…
CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (IBM PC VERSION)
NASA Technical Reports Server (NTRS)
Riley, G.
1994-01-01
The C Language Integrated Production System, CLIPS, is a shell for developing expert systems. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. The primary design goals for CLIPS are portability, efficiency, and functionality. For these reasons, the program is written in C. CLIPS meets or outperforms most micro- and minicomputer based artificial intelligence tools. CLIPS is a forward chaining rule-based language. The program contains an inference engine and a language syntax that provide a framework for the construction of an expert system. It also includes tools for debugging an application. CLIPS is based on the Rete algorithm, which enables very efficient pattern matching. The collection of conditions and actions to be taken if the conditions are met is constructed into a rule network. As facts are asserted either prior to or during a session, CLIPS pattern-matches the number of fields. Wildcards and variables are supported for both single and multiple fields. CLIPS syntax allows the inclusion of externally defined functions (outside functions which are written in a language other than CLIPS). CLIPS itself can be embedded in a program such that the expert system is available as a simple subroutine call. Advanced features found in CLIPS version 4.3 include an integrated microEMACS editor, the ability to generate C source code from a CLIPS rule base to produce a dedicated executable, binary load and save capabilities for CLIPS rule bases, and the utility program CRSV (Cross-Reference, Style, and Verification) designed to facilitate the development and maintenance of large rule bases. Five machine versions are available. Each machine version includes the source and the executable for that machine. The UNIX version includes the source and binaries for IBM RS/6000, Sun3 series, and Sun4 series computers. The UNIX, DEC VAX, and DEC RISC Workstation versions are line oriented. The PC version and the Macintosh version each contain a windowing variant of CLIPS as well as the standard line oriented version. The mouse/window interface version for the PC works with a Microsoft compatible mouse or without a mouse. This window version uses the proprietary CURSES library for the PC, but a working executable of the window version is provided. The window oriented version for the Macintosh includes a version which uses a full Macintosh-style interface, including an integrated editor. This version allows the user to observe the changing fact base and rule activations in separate windows while a CLIPS program is executing. The IBM PC version is available bundled with CLIPSITS, The CLIPS Intelligent Tutoring System for a special combined price (COS-10025). The goal of CLIPSITS is to provide the student with a tool to practice the syntax and concepts covered in the CLIPS User's Guide. It attempts to provide expert diagnosis and advice during problem solving which is typically not available without an instructor. CLIPSITS is divided into 10 lessons which mirror the first 10 chapters of the CLIPS User's Guide. The program was developed for the IBM PC series with a hard disk. CLIPSITS is also available separately as MSC-21679. The CLIPS program is written in C for interactive execution and has been implemented on an IBM PC computer operating under DOS, a Macintosh and DEC VAX series computers operating under VMS or ULTRIX. 
The line oriented version should run on any computer system which supports a full (Kernighan and Ritchie) C compiler or the ANSI standard C language. CLIPS was developed in 1986 and Version 4.2 was released in July of 1988. Version 4.3 was released in June of 1989.
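A hedged sketch of the facts-plus-rules model this summary describes: a toy forward-chaining engine that re-scans every rule until nothing new can be asserted. Real CLIPS compiles the rules into a Rete network so that matching work is shared between cycles; the fact and rule formats below are illustrative Python, not CLIPS syntax.

    def forward_chain(facts, rules):
        """facts: set of tuples; rules: list of (conditions, conclusion). A rule fires
        when all of its condition tuples are present in the fact base."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conclusion not in facts and all(c in facts for c in conditions):
                    facts.add(conclusion)        # assert the new fact
                    changed = True
        return facts

    rules = [
        ([("gives", "milk")], ("is", "mammal")),
        ([("is", "mammal"), ("eats", "meat")], ("is", "carnivore")),
    ]
    print(forward_chain({("gives", "milk"), ("eats", "meat")}, rules))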
CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (MACINTOSH VERSION)
NASA Technical Reports Server (NTRS)
Culbert, C.
1994-01-01
The C Language Integrated Production System, CLIPS, is a shell for developing expert systems. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. The primary design goals for CLIPS are portability, efficiency, and functionality. For these reasons, the program is written in C. CLIPS meets or outperforms most micro- and minicomputer based artificial intelligence tools. CLIPS is a forward chaining rule-based language. The program contains an inference engine and a language syntax that provide a framework for the construction of an expert system. It also includes tools for debugging an application. CLIPS is based on the Rete algorithm, which enables very efficient pattern matching. The collection of conditions and actions to be taken if the conditions are met is constructed into a rule network. As facts are asserted either prior to or during a session, CLIPS pattern-matches the number of fields. Wildcards and variables are supported for both single and multiple fields. CLIPS syntax allows the inclusion of externally defined functions (outside functions which are written in a language other than CLIPS). CLIPS itself can be embedded in a program such that the expert system is available as a simple subroutine call. Advanced features found in CLIPS version 4.3 include an integrated microEMACS editor, the ability to generate C source code from a CLIPS rule base to produce a dedicated executable, binary load and save capabilities for CLIPS rule bases, and the utility program CRSV (Cross-Reference, Style, and Verification) designed to facilitate the development and maintenance of large rule bases. Five machine versions are available. Each machine version includes the source and the executable for that machine. The UNIX version includes the source and binaries for IBM RS/6000, Sun3 series, and Sun4 series computers. The UNIX, DEC VAX, and DEC RISC Workstation versions are line oriented. The PC version and the Macintosh version each contain a windowing variant of CLIPS as well as the standard line oriented version. The mouse/window interface version for the PC works with a Microsoft compatible mouse or without a mouse. This window version uses the proprietary CURSES library for the PC, but a working executable of the window version is provided. The window oriented version for the Macintosh includes a version which uses a full Macintosh-style interface, including an integrated editor. This version allows the user to observe the changing fact base and rule activations in separate windows while a CLIPS program is executing. The IBM PC version is available bundled with CLIPSITS, The CLIPS Intelligent Tutoring System for a special combined price (COS-10025). The goal of CLIPSITS is to provide the student with a tool to practice the syntax and concepts covered in the CLIPS User's Guide. It attempts to provide expert diagnosis and advice during problem solving which is typically not available without an instructor. CLIPSITS is divided into 10 lessons which mirror the first 10 chapters of the CLIPS User's Guide. The program was developed for the IBM PC series with a hard disk. CLIPSITS is also available separately as MSC-21679. The CLIPS program is written in C for interactive execution and has been implemented on an IBM PC computer operating under DOS, a Macintosh and DEC VAX series computers operating under VMS or ULTRIX. 
The line oriented version should run on any computer system which supports a full (Kernighan and Ritchie) C compiler or the ANSI standard C language. CLIPS was developed in 1986 and Version 4.2 was released in July of 1988. Version 4.3 was released in June of 1989.
CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (IBM PC VERSION WITH CLIPSITS)
NASA Technical Reports Server (NTRS)
Riley, G.
1994-01-01
The C Language Integrated Production System, CLIPS, is a shell for developing expert systems. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. The primary design goals for CLIPS are portability, efficiency, and functionality. For these reasons, the program is written in C. CLIPS meets or outperforms most micro- and minicomputer based artificial intelligence tools. CLIPS is a forward chaining rule-based language. The program contains an inference engine and a language syntax that provide a framework for the construction of an expert system. It also includes tools for debugging an application. CLIPS is based on the Rete algorithm, which enables very efficient pattern matching. The collection of conditions and actions to be taken if the conditions are met is constructed into a rule network. As facts are asserted either prior to or during a session, CLIPS pattern-matches the number of fields. Wildcards and variables are supported for both single and multiple fields. CLIPS syntax allows the inclusion of externally defined functions (outside functions which are written in a language other than CLIPS). CLIPS itself can be embedded in a program such that the expert system is available as a simple subroutine call. Advanced features found in CLIPS version 4.3 include an integrated microEMACS editor, the ability to generate C source code from a CLIPS rule base to produce a dedicated executable, binary load and save capabilities for CLIPS rule bases, and the utility program CRSV (Cross-Reference, Style, and Verification) designed to facilitate the development and maintenance of large rule bases. Five machine versions are available. Each machine version includes the source and the executable for that machine. The UNIX version includes the source and binaries for IBM RS/6000, Sun3 series, and Sun4 series computers. The UNIX, DEC VAX, and DEC RISC Workstation versions are line oriented. The PC version and the Macintosh version each contain a windowing variant of CLIPS as well as the standard line oriented version. The mouse/window interface version for the PC works with a Microsoft compatible mouse or without a mouse. This window version uses the proprietary CURSES library for the PC, but a working executable of the window version is provided. The window oriented version for the Macintosh includes a version which uses a full Macintosh-style interface, including an integrated editor. This version allows the user to observe the changing fact base and rule activations in separate windows while a CLIPS program is executing. The IBM PC version is available bundled with CLIPSITS, The CLIPS Intelligent Tutoring System for a special combined price (COS-10025). The goal of CLIPSITS is to provide the student with a tool to practice the syntax and concepts covered in the CLIPS User's Guide. It attempts to provide expert diagnosis and advice during problem solving which is typically not available without an instructor. CLIPSITS is divided into 10 lessons which mirror the first 10 chapters of the CLIPS User's Guide. The program was developed for the IBM PC series with a hard disk. CLIPSITS is also available separately as MSC-21679. The CLIPS program is written in C for interactive execution and has been implemented on an IBM PC computer operating under DOS, a Macintosh and DEC VAX series computers operating under VMS or ULTRIX. 
The line oriented version should run on any computer system which supports a full (Kernighan and Ritchie) C compiler or the ANSI standard C language. CLIPS was developed in 1986 and Version 4.2 was released in July of 1988. Version 4.3 was released in June of 1989.
Three Dimensional Thermal Pollution Models. Volume 2; Rigid-Lid Models
NASA Technical Reports Server (NTRS)
Lee, S. S.; Sengupta, S.
1978-01-01
Three versions of the rigid-lid program are presented: one for near-field simulation; the second for far-field, unstratified situations; and the third for far-field simulation of stratified basins. The near-field version simulates thermal plume areas, and the far-field versions simulate larger receiving aquatic ecosystems. Since these versions have many subroutines in common, unified testing is provided, with main programs listed for the three possible conditions.
NASA Astrophysics Data System (ADS)
Wainwright, Carroll L.
2012-09-01
I present a numerical package (CosmoTransitions) for analyzing finite-temperature cosmological phase transitions driven by single or multiple scalar fields. The package analyzes the different vacua of a theory to determine their critical temperatures (where the vacuum energy levels are degenerate), their supercooling temperatures, and the bubble wall profiles which separate the phases and describe their tunneling dynamics. I introduce a new method of path deformation to find the profiles of both thin- and thick-walled bubbles. CosmoTransitions is freely available for public use. Program summary Program Title: CosmoTransitions Catalogue identifier: AEML_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEML_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 8775 No. of bytes in distributed program, including test data, etc.: 621096 Distribution format: tar.gz Programming language: Python. Computer: Developed on a 2009 MacBook Pro. No computer-specific optimization was performed. Operating system: Designed and tested on Mac OS X 10.6.8. Compatible with any OS with Python installed. RAM: Approximately 50 MB, mostly for loading plotting packages. Classification: 1.9, 11.1. External routines: SciPy, NumPy, matplotLib Nature of problem: I describe a program to analyze early-Universe finite-temperature phase transitions with multiple scalar fields. The goal is to analyze the phase structure of an input theory, determine the amount of supercooling at each phase transition, and find the bubble-wall profiles of the nucleated bubbles that drive the transitions. Solution method: To find the bubble-wall profile, the program assumes that tunneling happens along a fixed path in field space. This reduces the equations of motion to one dimension, which can then be solved using the overshoot/undershoot method. The path iteratively deforms in the direction opposite the forces perpendicular to the path until the perpendicular forces vanish (or become very small). To find the phase structure, the program finds and integrates the change in a phase's minimum with respect to temperature. Running time: Approximately 1 minute for full analysis of the two-scalar-field test model on a 2.5 GHz CPU.
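A hedged sketch of the overshoot/undershoot step referred to above, reduced to a single scalar field and a toy quartic potential. CosmoTransitions combines this one-dimensional solver with iterative path deformation in multi-field space; none of the names, the potential, or the numerical settings below come from the package itself.

    def dV(phi):
        # Toy potential derivative: false vacuum at phi = 0, deeper true vacuum at phi = 1.
        return 2.0 * phi * (phi - 0.4) * (phi - 1.0)

    def shoot(phi0, r_max=100.0, dr=1e-3, alpha=2.0):
        """Integrate phi'' + (alpha/r) phi' = dV/dphi outward from r = 0 with phi(0) = phi0.
        Returns 'overshoot' if phi crosses the false vacuum, 'undershoot' if it turns back."""
        phi, dphi, r = phi0, 0.0, dr
        while r < r_max:
            ddphi = dV(phi) - (alpha / r) * dphi
            dphi += dr * ddphi
            phi += dr * dphi
            r += dr
            if phi < 0.0:
                return "overshoot"
            if dphi > 0.0 and phi > 0.01:
                return "undershoot"
        return "undershoot"

    # Bisect the release point phi(0) between the two vacua until the profile just
    # barely rolls out to the false vacuum at large radius (the bubble-wall solution).
    lo, hi = 0.5, 1.0            # assumed bracket around the critical release point
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if shoot(mid) == "undershoot":
            lo = mid
        else:
            hi = mid
    print("release point phi(0) is approximately", 0.5 * (lo + hi))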
NASA Astrophysics Data System (ADS)
Diaz-Torres, Alexis
2011-04-01
A self-contained Fortran-90 program based on a three-dimensional classical dynamical reaction model with stochastic breakup is presented, which is a useful tool for quantifying complete fusion, incomplete fusion and breakup in reactions induced by weakly-bound two-body projectiles near the Coulomb barrier. The code calculates (i) integrated complete and incomplete fusion cross sections and their angular momentum distribution, (ii) the excitation energy distribution of the primary incomplete-fusion products, (iii) the asymptotic angular distribution of the incomplete-fusion products and the surviving breakup fragments, and (iv) breakup observables, such as angle, kinetic energy and relative energy distributions. Program summary Program title: PLATYPUS Catalogue identifier: AEIG_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 332 342 No. of bytes in distributed program, including test data, etc.: 344 124 Distribution format: tar.gz Programming language: Fortran-90 Computer: Any Unix/Linux workstation or PC with a Fortran-90 compiler Operating system: Linux or Unix RAM: 10 MB Classification: 16.9, 17.7, 17.8, 17.11 Nature of problem: The program calculates a wide range of observables in reactions induced by weakly-bound two-body nuclei near the Coulomb barrier. These include integrated complete and incomplete fusion cross sections and their spin distribution, as well as breakup observables (e.g. the angle, kinetic energy, and relative energy distributions of the fragments). Solution method: All the observables are calculated using a three-dimensional classical dynamical model combined with the Monte Carlo sampling of probability-density distributions. See Refs. [1,2] for further details. Restrictions: The program is suited for a weakly-bound two-body projectile colliding with a stable target. The initial orientation of the segment joining the two breakup fragments is considered to be isotropic. Additional comments: Several source routines from Numerical Recipes and the Mersenne Twister random number generator package are included to enable independent compilation. Running time: About 75 minutes for the input provided, using a PC with a 1.5 GHz processor.
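A hedged sketch of the Monte Carlo ingredient mentioned above: sample a breakup probability density and tally event outcomes over many trials. The exponential density, the fixed per-fragment capture probability and all parameter values are invented for illustration; PLATYPUS couples such sampling to full three-dimensional classical trajectories rather than to this simplified bookkeeping.

    import math
    import random

    def sample_breakup_radius(lam=2.0, r_min=8.0):
        """Inverse-transform sample of P(R) proportional to exp(-(R - r_min)/lam), R >= r_min."""
        return r_min - lam * math.log(1.0 - random.random())

    def run_events(n_events=100_000, barrier_radius=10.0, capture_prob=0.3):
        """Tally complete fusion, incomplete fusion and no-capture breakup as crude
        stand-ins for the integrated cross sections the real code accumulates."""
        tally = {"complete": 0, "incomplete": 0, "breakup": 0}
        for _ in range(n_events):
            if sample_breakup_radius() > barrier_radius:
                # Breakup far outside the barrier: assume each fragment is captured
                # independently with a fixed probability.
                captured = sum(random.random() < capture_prob for _ in range(2))
            else:
                # Projectile reaches the barrier intact and fuses as a whole.
                captured = 2
            tally[("breakup", "incomplete", "complete")[captured]] += 1
        return tally

    print(run_events())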
NCRETURN Computer Program for Evaluating Investments Revised to Provide Additional Information
Allen L. Lundgren; Dennis L. Schweitzer
1971-01-01
Reports a modified version of NCRETURN, a computer program for evaluating forestry investments. The revised version, RETURN, provides additional information about each investment, including future net worths and benefit-cost ratios, with no added input.
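A hedged sketch of the two added outputs described above, future net worth and a benefit-cost ratio, for a simple stream of forestry cash flows. The 4% rate and the example cash flows are illustrative assumptions, not values from RETURN.

    def future_net_worth(cash_flows, rate, horizon):
        """cash_flows: list of (year, amount); each amount is compounded forward to `horizon`."""
        return sum(amount * (1.0 + rate) ** (horizon - year) for year, amount in cash_flows)

    def benefit_cost_ratio(cash_flows, rate):
        """Present value of revenues divided by present value of costs."""
        pv_benefits = sum(a / (1.0 + rate) ** y for y, a in cash_flows if a > 0)
        pv_costs = sum(-a / (1.0 + rate) ** y for y, a in cash_flows if a < 0)
        return pv_benefits / pv_costs

    flows = [(0, -1000.0), (10, 400.0), (20, 2500.0)]   # planting cost, thinning revenue, harvest
    print(future_net_worth(flows, 0.04, 20), benefit_cost_ratio(flows, 0.04))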
NASA Technical Reports Server (NTRS)
Burgin, G. H.; Fogel, L. J.; Phelps, J. P.
1975-01-01
A technique for computer simulation of air combat is described. Volume 1 describes the computer program and its development in general terms. Two versions of the program exist. Both incorporate a logic for selecting and executing air combat maneuvers with performance models of specific fighter aircraft. In the batch processing version, the flight paths of two aircraft engaged in interactive aerial combat and controlled by the same logic are computed. The real-time version permits human pilots to fly air-to-air combat against the adaptive maneuvering logic (AML) in the Langley Differential Maneuvering Simulator (DMS). Volume 2 consists of a detailed description of the computer programs.
The methods described in the report can be used with the modified N.R.C. version of the U.S.G.S. Solute Transport Model to predict the concentration of chemical parameters in a contaminant plume. The two volume report contains program documentation and user's manual. The program ...
CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Donnell, B.
1994-01-01
CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb" which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs including the ability to generate stand alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous version of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases. 
COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the command line version. For the UNIX version of CLIPS 6.0, the command line interface has been successfully implemented on a Sun4 running SunOS, a DECstation running DEC RISC ULTRIX, an SGI Indigo Elan running IRIX, a DEC Alpha AXP running OSF/1, and an IBM RS/6000 running AIX. Command line interface executables are included for Sun4 computers running SunOS 4.1.1 or later and for the DEC RISC ULTRIX platform. The makefiles may have to be modified slightly to be used on other UNIX platforms. The UNIX, Macintosh, and IBM PC versions of CLIPS 6.0 each have a platform specific interface. Source code, a makefile, and an executable for the Windows 3.1 interface version of CLIPS 6.0 are provided only on the IBM PC distribution diskettes. Source code, a makefile, and an executable for the Macintosh interface version of CLIPS 6.0 are provided only on the Macintosh distribution diskettes. Likewise, for the UNIX version of CLIPS 6.0, only source code and a makefile for an X-Windows interface are provided. The X-Windows interface requires MIT's X Window System, Version 11, Release 4 (X11R4), the Athena Widget Set, and the Xmu library. The source code for the Athena Widget Set is provided on the distribution medium. The X-Windows interface has been successfully implemented on a Sun4 running SunOS 4.1.2 with the MIT distribution of X11R4 (not OpenWindows), an SGI Indigo Elan running IRIX 4.0.5, and a DEC Alpha AXP running OSF/1 1.2. The VAX version of CLIPS 6.0 comes only with the generic command line interface. ASCII makefiles for the command line version of CLIPS are provided on all the distribution media for UNIX, VMS, and DOS. Four executables are provided with the IBM PC version: a windowed interface executable for Windows 3.1 built using Borland C++ v3.1, an editor for use with the windowed interface, a command line version of CLIPS for Windows 3.1, and a 386 command line executable for DOS built using Zortech C++ v3.1. All four executables are capable of utilizing extended memory and require an 80386 CPU or better. Users needing an 8086/8088 or 80286 executable must recompile the CLIPS source code themselves. Users who wish to recompile the DOS executable using Borland C++ or MicroSoft C must use a DOS extender program to produce an executable capable of using extended memory. The version of CLIPS 6.0 for IBM PC compatibles requires DOS v3.3 or later and/or Windows 3.1 or later. It is distributed on a set of three 1.4Mb 3.5 inch diskettes. A hard disk is required. The Macintosh version is distributed in compressed form on two 3.5 inch 1.4Mb Macintosh format diskettes, and requires System 6.0.5, or higher, and 1Mb RAM. The version for DEC VAX/VMS is available in VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard distribution medium) or a TK50 tape cartridge. The UNIX version is distributed in UNIX tar format on a .25 inch streaming magnetic tape cartridge (Sun QIC-24). For the UNIX version, alternate distribution media and formats are available upon request. 
The CLIPS 6.0 documentation includes a User's Guide and a three volume Reference Manual consisting of Basic and Advanced Programming Guides and an Interfaces Guide. An electronic version of the documentation is provided on the distribution medium for each version: in MicroSoft Word format for the Macintosh and PC versions of CLIPS, and in both PostScript format and MicroSoft Word for Macintosh format for the UNIX and DEC VAX versions of CLIPS. CLIPS was developed in 1986 and Version 6.0 was released in 1993.
CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (IBM PC VERSION)
NASA Technical Reports Server (NTRS)
Donnell, B.
1994-01-01
CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb" which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs including the ability to generate stand alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous version of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases. 
COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the command line version. For the UNIX version of CLIPS 6.0, the command line interface has been successfully implemented on a Sun4 running SunOS, a DECstation running DEC RISC ULTRIX, an SGI Indigo Elan running IRIX, a DEC Alpha AXP running OSF/1, and an IBM RS/6000 running AIX. Command line interface executables are included for Sun4 computers running SunOS 4.1.1 or later and for the DEC RISC ULTRIX platform. The makefiles may have to be modified slightly to be used on other UNIX platforms. The UNIX, Macintosh, and IBM PC versions of CLIPS 6.0 each have a platform specific interface. Source code, a makefile, and an executable for the Windows 3.1 interface version of CLIPS 6.0 are provided only on the IBM PC distribution diskettes. Source code, a makefile, and an executable for the Macintosh interface version of CLIPS 6.0 are provided only on the Macintosh distribution diskettes. Likewise, for the UNIX version of CLIPS 6.0, only source code and a makefile for an X-Windows interface are provided. The X-Windows interface requires MIT's X Window System, Version 11, Release 4 (X11R4), the Athena Widget Set, and the Xmu library. The source code for the Athena Widget Set is provided on the distribution medium. The X-Windows interface has been successfully implemented on a Sun4 running SunOS 4.1.2 with the MIT distribution of X11R4 (not OpenWindows), an SGI Indigo Elan running IRIX 4.0.5, and a DEC Alpha AXP running OSF/1 1.2. The VAX version of CLIPS 6.0 comes only with the generic command line interface. ASCII makefiles for the command line version of CLIPS are provided on all the distribution media for UNIX, VMS, and DOS. Four executables are provided with the IBM PC version: a windowed interface executable for Windows 3.1 built using Borland C++ v3.1, an editor for use with the windowed interface, a command line version of CLIPS for Windows 3.1, and a 386 command line executable for DOS built using Zortech C++ v3.1. All four executables are capable of utilizing extended memory and require an 80386 CPU or better. Users needing an 8086/8088 or 80286 executable must recompile the CLIPS source code themselves. Users who wish to recompile the DOS executable using Borland C++ or MicroSoft C must use a DOS extender program to produce an executable capable of using extended memory. The version of CLIPS 6.0 for IBM PC compatibles requires DOS v3.3 or later and/or Windows 3.1 or later. It is distributed on a set of three 1.4Mb 3.5 inch diskettes. A hard disk is required. The Macintosh version is distributed in compressed form on two 3.5 inch 1.4Mb Macintosh format diskettes, and requires System 6.0.5, or higher, and 1Mb RAM. The version for DEC VAX/VMS is available in VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard distribution medium) or a TK50 tape cartridge. The UNIX version is distributed in UNIX tar format on a .25 inch streaming magnetic tape cartridge (Sun QIC-24). For the UNIX version, alternate distribution media and formats are available upon request. 
The CLIPS 6.0 documentation includes a User's Guide and a three volume Reference Manual consisting of Basic and Advanced Programming Guides and an Interfaces Guide. An electronic version of the documentation is provided on the distribution medium for each version: in MicroSoft Word format for the Macintosh and PC versions of CLIPS, and in both PostScript format and MicroSoft Word for Macintosh format for the UNIX and DEC VAX versions of CLIPS. CLIPS was developed in 1986 and Version 6.0 was released in 1993.
CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (MACINTOSH VERSION)
NASA Technical Reports Server (NTRS)
Riley, G.
1994-01-01
CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb" which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs including the ability to generate stand alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous version of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases. 
COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the command line version. For the UNIX version of CLIPS 6.0, the command line interface has been successfully implemented on a Sun4 running SunOS, a DECstation running DEC RISC ULTRIX, an SGI Indigo Elan running IRIX, a DEC Alpha AXP running OSF/1, and an IBM RS/6000 running AIX. Command line interface executables are included for Sun4 computers running SunOS 4.1.1 or later and for the DEC RISC ULTRIX platform. The makefiles may have to be modified slightly to be used on other UNIX platforms. The UNIX, Macintosh, and IBM PC versions of CLIPS 6.0 each have a platform specific interface. Source code, a makefile, and an executable for the Windows 3.1 interface version of CLIPS 6.0 are provided only on the IBM PC distribution diskettes. Source code, a makefile, and an executable for the Macintosh interface version of CLIPS 6.0 are provided only on the Macintosh distribution diskettes. Likewise, for the UNIX version of CLIPS 6.0, only source code and a makefile for an X-Windows interface are provided. The X-Windows interface requires MIT's X Window System, Version 11, Release 4 (X11R4), the Athena Widget Set, and the Xmu library. The source code for the Athena Widget Set is provided on the distribution medium. The X-Windows interface has been successfully implemented on a Sun4 running SunOS 4.1.2 with the MIT distribution of X11R4 (not OpenWindows), an SGI Indigo Elan running IRIX 4.0.5, and a DEC Alpha AXP running OSF/1 1.2. The VAX version of CLIPS 6.0 comes only with the generic command line interface. ASCII makefiles for the command line version of CLIPS are provided on all the distribution media for UNIX, VMS, and DOS. Four executables are provided with the IBM PC version: a windowed interface executable for Windows 3.1 built using Borland C++ v3.1, an editor for use with the windowed interface, a command line version of CLIPS for Windows 3.1, and a 386 command line executable for DOS built using Zortech C++ v3.1. All four executables are capable of utilizing extended memory and require an 80386 CPU or better. Users needing an 8086/8088 or 80286 executable must recompile the CLIPS source code themselves. Users who wish to recompile the DOS executable using Borland C++ or MicroSoft C must use a DOS extender program to produce an executable capable of using extended memory. The version of CLIPS 6.0 for IBM PC compatibles requires DOS v3.3 or later and/or Windows 3.1 or later. It is distributed on a set of three 1.4Mb 3.5 inch diskettes. A hard disk is required. The Macintosh version is distributed in compressed form on two 3.5 inch 1.4Mb Macintosh format diskettes, and requires System 6.0.5, or higher, and 1Mb RAM. The version for DEC VAX/VMS is available in VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard distribution medium) or a TK50 tape cartridge. The UNIX version is distributed in UNIX tar format on a .25 inch streaming magnetic tape cartridge (Sun QIC-24). For the UNIX version, alternate distribution media and formats are available upon request. 
The CLIPS 6.0 documentation includes a User's Guide and a three volume Reference Manual consisting of Basic and Advanced Programming Guides and an Interfaces Guide. An electronic version of the documentation is provided on the distribution medium for each version: in MicroSoft Word format for the Macintosh and PC versions of CLIPS, and in both PostScript format and MicroSoft Word for Macintosh format for the UNIX and DEC VAX versions of CLIPS. CLIPS was developed in 1986 and Version 6.0 was released in 1993.
CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (DEC VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Donnell, B.
1994-01-01
CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb" which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs including the ability to generate stand alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous version of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases. 
COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the command line version. For the UNIX version of CLIPS 6.0, the command line interface has been successfully implemented on a Sun4 running SunOS, a DECstation running DEC RISC ULTRIX, an SGI Indigo Elan running IRIX, a DEC Alpha AXP running OSF/1, and an IBM RS/6000 running AIX. Command line interface executables are included for Sun4 computers running SunOS 4.1.1 or later and for the DEC RISC ULTRIX platform. The makefiles may have to be modified slightly to be used on other UNIX platforms. The UNIX, Macintosh, and IBM PC versions of CLIPS 6.0 each have a platform specific interface. Source code, a makefile, and an executable for the Windows 3.1 interface version of CLIPS 6.0 are provided only on the IBM PC distribution diskettes. Source code, a makefile, and an executable for the Macintosh interface version of CLIPS 6.0 are provided only on the Macintosh distribution diskettes. Likewise, for the UNIX version of CLIPS 6.0, only source code and a makefile for an X-Windows interface are provided. The X-Windows interface requires MIT's X Window System, Version 11, Release 4 (X11R4), the Athena Widget Set, and the Xmu library. The source code for the Athena Widget Set is provided on the distribution medium. The X-Windows interface has been successfully implemented on a Sun4 running SunOS 4.1.2 with the MIT distribution of X11R4 (not OpenWindows), an SGI Indigo Elan running IRIX 4.0.5, and a DEC Alpha AXP running OSF/1 1.2. The VAX version of CLIPS 6.0 comes only with the generic command line interface. ASCII makefiles for the command line version of CLIPS are provided on all the distribution media for UNIX, VMS, and DOS. Four executables are provided with the IBM PC version: a windowed interface executable for Windows 3.1 built using Borland C++ v3.1, an editor for use with the windowed interface, a command line version of CLIPS for Windows 3.1, and a 386 command line executable for DOS built using Zortech C++ v3.1. All four executables are capable of utilizing extended memory and require an 80386 CPU or better. Users needing an 8086/8088 or 80286 executable must recompile the CLIPS source code themselves. Users who wish to recompile the DOS executable using Borland C++ or MicroSoft C must use a DOS extender program to produce an executable capable of using extended memory. The version of CLIPS 6.0 for IBM PC compatibles requires DOS v3.3 or later and/or Windows 3.1 or later. It is distributed on a set of three 1.4Mb 3.5 inch diskettes. A hard disk is required. The Macintosh version is distributed in compressed form on two 3.5 inch 1.4Mb Macintosh format diskettes, and requires System 6.0.5, or higher, and 1Mb RAM. The version for DEC VAX/VMS is available in VAX BACKUP format on a 1600 BPI 9-track magnetic tape (standard distribution medium) or a TK50 tape cartridge. The UNIX version is distributed in UNIX tar format on a .25 inch streaming magnetic tape cartridge (Sun QIC-24). For the UNIX version, alternate distribution media and formats are available upon request. 
The CLIPS 6.0 documentation includes a User's Guide and a three volume Reference Manual consisting of Basic and Advanced Programming Guides and an Interfaces Guide. An electronic version of the documentation is provided on the distribution medium for each version: in MicroSoft Word format for the Macintosh and PC versions of CLIPS, and in both PostScript format and MicroSoft Word for Macintosh format for the UNIX and DEC VAX versions of CLIPS. CLIPS was developed in 1986 and Version 6.0 was released in 1993.
An Improved Version of the NASA-Lockheed Multielement Airfoil Analysis Computer Program
NASA Technical Reports Server (NTRS)
Brune, G. W.; Manke, J. W.
1978-01-01
An improved version of the NASA-Lockheed computer program for the analysis of multielement airfoils is described. The predictions of the program are evaluated by comparison with recent experimental high lift data including lift, pitching moment, profile drag, and detailed distributions of surface pressures and boundary layer parameters. The results of the evaluation show that the contract objectives of improving program reliability and accuracy have been met.
Digital Systems Validation Handbook. Volume 2. Chapter 18. Avionic Data Bus Integration Technology
1993-11-01
interaction between a digital data bus and an avionic system. Very Large Scale Integration (VLSI) ICs and multiversion software, which make up digital... 1984, the Sperry Corporation developed a fault-tolerant system which employed multiversion programming, voting, and monitoring for error detection and... formulate all the significant behavior of a system. MULTIVERSION PROGRAMMING. N-version programming. N-VERSION PROGRAMMING. The independent coding of a
USSAERO version D computer program development using ANSI standard FORTRAN 77 and DI-3000 graphics
NASA Technical Reports Server (NTRS)
Wiese, M. R.
1986-01-01
The D version of the Unified Subsonic Supersonic Aerodynamic Analysis (USSAERO) program is the result of numerous modifications and enhancements to the B01 version. These changes include conversion to ANSI standard FORTRAN 77; use of the DI-3000 graphics package; removal of the overlay structure; a revised input format; the addition of an input data analysis routine; and an increase in the number of aeronautical components allowed.
JEFI: a cash flow analysis program (Version 3.0 for Windows). [Computer program].
Bruce Hansen; Jeff Palmer
1998-01-01
JEFFI/3 is a Windows version of JEFFI/2. The differences between the two versions are the new interface, an investment term of 1 to 30 years (instead of 4 to 30), and a rich set of detailed online help documents. JEFFI/3 still retains a number of unique features of JEFFI/2 related to treatment of the final year cash flows, depreciation, working capital, and derivation...
The mathematical statement for the solving of the problem of N-version software system design
NASA Astrophysics Data System (ADS)
Kovalev, I. V.; Kovalev, D. I.; Zelenkov, P. V.; Voroshilova, A. A.
2015-10-01
N-version programming, as a methodology for designing fault-tolerant software systems, allows such design tasks to be solved successfully. The N-version programming approach is effective because the system is constructed from several parallel executed versions of a software module. Those versions are written to meet the same specification but by different programmers. Developing an optimal structure for an N-version software system is a very complex optimization problem, which makes deterministic optimization methods inappropriate for solving it; exploiting heuristic strategies is therefore more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems of large dimensionality.
COSMIC monthly progress report
NASA Technical Reports Server (NTRS)
1994-01-01
Activities of the Computer Software Management and Information Center (COSMIC) are summarized for the month of May 1994. Tables showing the current inventory of programs available from COSMIC are presented and program processing and evaluation activities are summarized. Nine articles were prepared for publication in the NASA Tech Brief Journal. These articles (included in this report) describe the following software items: (1) WFI - Windowing System for Test and Simulation; (2) HZETRN - A Free Space Radiation Transport and Shielding Program; (3) COMGEN-BEM - Composite Model Generation-Boundary Element Method; (4) IDDS - Interactive Data Display System; (5) CET93/PC - Chemical Equilibrium with Transport Properties, 1993; (6) SDVIC - Sub-pixel Digital Video Image Correlation; (7) TRASYS - Thermal Radiation Analyzer System (HP9000 Series 700/800 Version without NASADIG); (8) NASADIG - NASA Device Independent Graphics Library, Version 6.0 (VAX VMS Version); and (9) NASADIG - NASA Device Independent Graphics Library, Version 6.0 (UNIX Version). Activities in the areas of marketing, customer service, benefits identification, maintenance and support, and dissemination are also described along with a budget summary.
Nuclear Engine System Simulation (NESS). Version 2.0: Program user's guide
NASA Technical Reports Server (NTRS)
Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman
1993-01-01
This Program User's Guide discusses the Nuclear Thermal Propulsion (NTP) engine system design features and capabilities modeled in the Nuclear Engine System Simulation (NESS): Version 2.0 program (referred to as NESS throughout the remainder of this document), as well as its operation. NESS was upgraded to include many new modeling capabilities not available in the original version delivered to NASA LeRC in Dec. 1991. NESS's new features include the following: (1) an improved input format; (2) an advanced solid-core NERVA-type reactor system model (ENABLER 2); (3) a bleed-cycle engine system option; (4) an axial-turbopump design option; (5) an automated pump-out turbopump assembly sizing option; (6) an off-design gas generator engine cycle design option; (7) updated hydrogen properties; (8) an improved output format; and (9) personal computer operation capability. Sample design cases are presented in the user's guide that demonstrate many of the new features associated with this upgraded version of NESS, as well as design modeling features associated with the original version of NESS.
Evaluating Youth Development Programs: Progress and Promise
Brooks-Gunn, Jeanne
2016-01-01
Advances in theories of adolescent development and positive youth development have greatly increased our understanding of how programs and practices with adolescents can impede or enhance their development. In this paper the authors reflect on the progress in research on youth development programs in the last two decades, since possibly the first review of empirical evaluations by Roth, Brooks-Gunn, Murray, and Foster (1998). The authors use the terms Version 1.0, 2.0 and 3.0 to refer to changes in youth development research and programs over time. They argue that advances in theory and descriptive accounts of youth development programs (Version 2.0) need to be coupled with progress in definitions of youth development programs, measurement of inputs and outputs that incorporate an understanding of programs as contexts for development, and stronger design and evaluation of programs (Version 3.0). The authors also advocate for an integration of prevention and promotion research, and for use of the term youth development rather than positive youth development. PMID:28077922
ERIC Educational Resources Information Center
McMillan, Whitney; Stice, Eric; Rohde, Paul
2011-01-01
Objective: As cognitive dissonance is theorized to contribute to the effects of dissonance-based eating disorder prevention programs, we evaluated a high-dissonance version of this program against a low-dissonance version and a wait-list control condition to provide an experimental test of the mechanism of intervention effects. Method: Female…
Mass Spectral Library with Search Program, Data Version: NIST v17
National Institute of Standards and Technology Data Gateway
SRD 1A NIST/EPA/NIH Mass Spectral Library with Search Program, Data Version: NIST v17 (PC database for purchase). Available with the full-featured NIST MS Search Program for Windows and its integrated tools, this library is a fully evaluated collection of electron-ionization mass spectra. (147,198 Compounds with Spectra; 147,194 Chemical Structures; 174,948 Spectra)
NASA Technical Reports Server (NTRS)
Hockney, George; Lee, Seungwon
2008-01-01
A computer program known as PyPele, originally written as a Python-language extension module of a C++ language program, has been rewritten in pure Python language. The original version of PyPele dispatches and coordinates parallel-processing tasks on cluster computers and provides a conceptual framework for spacecraft-mission-design and -analysis software tools to run in an embarrassingly parallel mode. The original version of PyPele uses SSH (Secure Shell, a set of standards and an associated network protocol for establishing a secure channel between a local and a remote computer) to coordinate parallel processing. Instead of SSH, the present Python version of PyPele uses the Message Passing Interface (MPI) [an unofficial de facto standard language-independent application programming interface for message-passing on a parallel computer] while keeping the same user interface. The use of MPI instead of SSH and the preservation of the original PyPele user interface make it possible for parallel application programs written previously for the original version of PyPele to run on MPI-based cluster computers. As a result, engineers using the previously written application programs can take advantage of embarrassing parallelism without the need to rewrite those programs.
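A minimal sketch of the dispatch-and-collect, embarrassingly parallel pattern described above, written with the mpi4py bindings; the task list, the evaluate function, and the round-robin split are illustrative assumptions, not the actual PyPele interface.

    # Illustrative MPI-coordinated embarrassingly parallel run (mpi4py assumed
    # installed); launch with e.g. "mpiexec -n 4 python run.py".
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    def evaluate(case):
        # placeholder for one independent mission-design case evaluation
        return case ** 2

    cases = list(range(100))
    my_results = [evaluate(c) for c in cases[rank::size]]   # round-robin split
    gathered = comm.gather(my_results, root=0)              # collect on rank 0

    if rank == 0:
        results = [r for chunk in gathered for r in chunk]
        print(len(results), "cases evaluated")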
Economic-Analysis Program for a Communication System
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.
1986-01-01
Prices and profits of alternative designs are compared. The objective of the Land Mobile Satellite Service Finance Report (LMSS) program is to provide a means for comparing alternative designs of LMSS systems. The program is a Multiplan worksheet. The labels used in the worksheet were chosen for a satellite-based cellular communication service, but the analysis is not restricted to such cases. LMSS was written for interactive execution with Multiplan (version 1.2) and implemented on an IBM PC series computer operating under DOS (version 2.11).
NASA Technical Reports Server (NTRS)
Mcmanus, John W.; Goodrich, Kenneth H.
1989-01-01
A research program investigating the use of Artificial Intelligence (AI) programming techniques to aid in the development of a Tactical Decision Generator (TDG) for Within-Visual-Range (WVR) air combat engagements is discussed. The application of AI methods for development and implementation of the TDG is presented. The history of the Adaptive Maneuvering Logic (AML) program is traced, and current versions of the AML program are compared and contrasted with the TDG system. The Knowledge-Based Systems (KBS) used by the TDG to aid in the decision-making process are outlined and example rules are presented. The results of tests to evaluate the performance of the TDG against a version of AML and against human pilots in the Langley Differential Maneuvering Simulator (DMS) are presented. To date, these results have shown significant performance gains in one-versus-one air combat engagements.
GEMPAK 5.1 - A GENERAL METEOROLOGICAL PACKAGE (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Desjardins, M. L.
1994-01-01
GEMPAK is a general meteorological software package developed at NASA/Goddard Space Flight Center. It includes programs to analyze and display surface, upper-air, and gridded data, including model output. There are very general programs to list, edit, and plot data on maps, to display profiles and time series, to draw and fill contours, to draw streamlines, to plot symbols for clouds, sky cover, and pressure tendency, and draw cross sections in the case of gridded data and sounding data. In addition, there are Barnes objective analysis programs to grid surface and upper-air data. The programs include the capabilities to derive meteorological parameters from those found in the dataset, to perform vertical interpolations of sounding data to different coordinate systems, and to compute an extensive set of gridded diagnostic quantities by specifying various nested combinations of scalars and vector arithmetic, algebraic, and differential operators. The GEMPAK 5.1 graphics/transformation subsystem, GEMPLT, provides device-independent graphics. GEMPLT also has the capability to display output in a variety of map projections or overlaid on satellite imagery. GEMPAK 5.1 is written in FORTRAN 77 and C-language and has been implemented on VAX computers under VMS and on computers running the UNIX operating system. During installation and normal use, this package occupies approximately 100Mb of hard disk space. The UNIX version of GEMPAK includes drivers for several graphic output systems including MIT's X Window System (X11,R4), Sun GKS, PostScript (color and monochrome), Silicon Graphics, and others. The VMS version of GEMPAK also includes drivers for several graphic output systems including PostScript (color and monochrome). The VMS version is delivered with the object code for the Transportable Applications Environment (TAE) program, version 4.1 which serves as a user interface. A color monitor is recommended for displaying maps on video display devices. Data for rendering regional maps is included with this package. The standard distribution medium for the UNIX version of GEMPAK 5.1 is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the VMS version of GEMPAK 5.1 is a 6250 BPI 9-track magnetic tape in DEC VAX BACKUP format. The VMS version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. This program was developed in 1985. The current version, GEMPAK 5.1, was released in 1992. The package is delivered with source code. An extensive collection of subroutine libraries allows users to format data for use by GEMPAK, to develop new programs, and to enhance existing ones.
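As an illustration of the Barnes objective analysis mentioned above, the sketch below grids scattered surface observations with Gaussian distance weights; it is a single first pass only (operational implementations also apply a correction pass), and the observation values and the kappa parameter are made-up examples, not GEMPAK code.

    import numpy as np

    def barnes_first_pass(obs_xy, obs_val, grid_x, grid_y, kappa):
        # Each grid value is a Gaussian-weighted mean of the observations,
        # with weight exp(-r^2 / kappa) for an observation at distance r.
        gx, gy = np.meshgrid(grid_x, grid_y)
        num = np.zeros_like(gx, dtype=float)
        den = np.zeros_like(gx, dtype=float)
        for (ox, oy), val in zip(obs_xy, obs_val):
            w = np.exp(-((gx - ox) ** 2 + (gy - oy) ** 2) / kappa)
            num += w * val
            den += w
        return num / den

    # hypothetical surface temperature observations at (x, y) locations
    obs = [(0.0, 0.0), (1.0, 2.0), (3.0, 1.0)]
    vals = [15.0, 17.5, 16.0]
    field = barnes_first_pass(obs, vals, np.linspace(0.0, 3.0, 7),
                              np.linspace(0.0, 2.0, 5), kappa=1.5)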
Long wavelength propagation capacity, version 1.1 (computer diskette)
NASA Astrophysics Data System (ADS)
1994-05-01
File Characteristics: software and data file. (72 files); ASCII character set. Physical Description: 2 computer diskettes; 3 1/2 in.; high density; 1.44 MB. System Requirements: PC compatible; Digital Equipment Corp. VMS; PKZIP (included on diskette). This report describes a revision of the Naval Command, Control and Ocean Surveillance Center RDT&E Division's Long Wavelength Propagation Capability (LWPC). The first version of this capability was a collection of separate FORTRAN programs linked together in operation by a command procedure written in an operating system unique to the Digital Equipment Corporation (Ferguson & Snyder, 1989a, b). A FORTRAN computer program named Long Wavelength Propagation Model (LWPM) was developed to replace the VMS control system (Ferguson & Snyder, 1990; Ferguson, 1990). This was designated version 1 (LWPC-1). This program implemented all the features of the original VMS version plus a number of auxiliary programs that provided summaries of the files and graphical displays of the output files. This report describes a revision of the LWPC, designated version 1.1 (LWPC-1.1).
2012-12-19
full scope” life extension program for the B61 bomb, the weapon that is currently deployed in Europe, “to ensure its functionality with the F-35... This life extension program will consolidate four versions of the B61 bomb, including the B61-3 and B61-4 that are currently deployed in Europe, into... one version, the B61-12. Reports indicate that this new version will reuse the nuclear components of the older bombs, but will include enhanced
2014-01-03
NPR also indicated that the United States would conduct a “full scope” life extension program for the B61 bomb, the weapon that is currently deployed... in Europe, “to ensure its functionality with the F-35.” This life extension program will consolidate four versions of the B61 bomb, including the B61-3 and B61-4 that are currently deployed in Europe, into one version, the B61-12. Reports indicate that this new version will reuse the nuclear
Spacecraft Orbit Design and Analysis (SODA), version 1.0 user's guide
NASA Technical Reports Server (NTRS)
Stallcup, Scott S.; Davis, John S.
1989-01-01
The Spacecraft Orbit Design and Analysis (SODA) computer program, Version 1.0 is described. SODA is a spaceflight mission planning system which consists of five program modules integrated around a common database and user interface. SODA runs on a VAX/VMS computer with an EVANS & SUTHERLAND PS300 graphics workstation. BOEING RIM-Version 7 relational database management system performs transparent database services. In the current version three program modules produce an interactive three dimensional (3D) animation of one or more satellites in planetary orbit. Satellite visibility and sensor coverage capabilities are also provided. One module produces an interactive 3D animation of the solar system. Another module calculates cumulative satellite sensor coverage and revisit time for one or more satellites. Currently Earth, Moon, and Mars systems are supported for all modules except the solar system module.
AESS: Accelerated Exact Stochastic Simulation
NASA Astrophysics Data System (ADS)
Jenkins, David D.; Peterson, Gregory D.
2011-12-01
The Stochastic Simulation Algorithm (SSA) developed by Gillespie provides a powerful mechanism for exploring the behavior of chemical systems with small species populations or with important noise contributions. Gene circuit simulations for systems biology commonly employ the SSA method, as do ecological applications. This algorithm tends to be computationally expensive, so researchers seek an efficient implementation of SSA. In this program package, the Accelerated Exact Stochastic Simulation Algorithm (AESS) contains optimized implementations of Gillespie's SSA that improve the performance of individual simulation runs or ensembles of simulations used for sweeping parameters or to provide statistically significant results. Program summaryProgram title: AESS Catalogue identifier: AEJW_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEJW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: University of Tennessee copyright agreement No. of lines in distributed program, including test data, etc.: 10 861 No. of bytes in distributed program, including test data, etc.: 394 631 Distribution format: tar.gz Programming language: C for processors, CUDA for NVIDIA GPUs Computer: Developed and tested on various x86 computers and NVIDIA C1060 Tesla and GTX 480 Fermi GPUs. The system targets x86 workstations, optionally with multicore processors or NVIDIA GPUs as accelerators. Operating system: Tested under Ubuntu Linux OS and CentOS 5.5 Linux OS Classification: 3, 16.12 Nature of problem: Simulation of chemical systems, particularly with low species populations, can be accurately performed using Gillespie's method of stochastic simulation. Numerous variations on the original stochastic simulation algorithm have been developed, including approaches that produce results with statistics that exactly match the chemical master equation (CME) as well as other approaches that approximate the CME. Solution method: The Accelerated Exact Stochastic Simulation (AESS) tool provides implementations of a wide variety of popular variations on the Gillespie method. Users can select the specific algorithm considered most appropriate. Comparisons between the methods and with other available implementations indicate that AESS provides the fastest known implementation of Gillespie's method for a variety of test models. Users may wish to execute ensembles of simulations to sweep parameters or to obtain better statistical results, so AESS supports acceleration of ensembles of simulation using parallel processing with MPI, SSE vector units on x86 processors, and/or using NVIDIA GPUs with CUDA.
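For readers unfamiliar with Gillespie's direct method, the following minimal Python sketch shows the core of the algorithm (exponential waiting times and propensity-weighted reaction selection); it is an illustration of the method AESS accelerates, not code taken from the AESS package, and the example reaction is invented.

    import math, random

    def gillespie_direct(x, stoich, propensity, t_end, seed=1):
        # x: state vector; stoich: state-change vector per reaction;
        # propensity(x): list of reaction propensities a_j for the current state
        rng = random.Random(seed)
        t, traj = 0.0, [(0.0, list(x))]
        while t < t_end:
            a = propensity(x)
            a0 = sum(a)
            if a0 <= 0.0:
                break                                   # no reaction can fire
            t += -math.log(1.0 - rng.random()) / a0     # exponential waiting time
            r, acc, j = rng.random() * a0, 0.0, 0
            for j, aj in enumerate(a):                  # pick reaction j with prob a_j/a0
                acc += aj
                if acc >= r:
                    break
            x = [xi + s for xi, s in zip(x, stoich[j])]
            traj.append((t, list(x)))
        return traj

    # toy example: irreversible isomerization A -> B with rate constant 0.5
    trajectory = gillespie_direct([100, 0], [[-1, +1]],
                                  lambda x: [0.5 * x[0]], t_end=10.0)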
A high-performance Fortran code to calculate spin- and parity-dependent nuclear level densities
NASA Astrophysics Data System (ADS)
Sen'kov, R. A.; Horoi, M.; Zelevinsky, V. G.
2013-01-01
A high-performance Fortran code is developed to calculate the spin- and parity-dependent shell model nuclear level densities. The algorithm is based on the extension of methods of statistical spectroscopy and implies exact calculation of the first and second Hamiltonian moments for different configurations at fixed spin and parity. The proton-neutron formalism is used. We have applied the method for calculating the level densities for a set of nuclei in the sd-, pf-, and pf+g- model spaces. Examples of the calculations for 28Si (in the sd-model space) and 64Ge (in the pf+g-model space) are presented. To illustrate the power of the method we estimate the ground state energy of 64Ge in the larger model space pf+g, which is not accessible to direct shell model diagonalization due to the prohibitively large dimension, by comparing with the nuclear level densities at low excitation energy calculated in the smaller model space pf. Program summaryProgram title: MM Catalogue identifier: AENM_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENM_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 193181 No. of bytes in distributed program, including test data, etc.: 1298585 Distribution format: tar.gz Programming language: Fortran 90, MPI. Computer: Any architecture with a Fortran 90 compiler and MPI. Operating system: Linux. RAM: Proportional to the system size, in our examples, up to 75Mb Classification: 17.15. External routines: MPICH2 (http://www.mcs.anl.gov/research/projects/mpich2/) Nature of problem: Calculation of the spin- and parity-dependent nuclear level density. Solution method: The algorithm implies exact calculation of the first and second Hamiltonian moments for different configurations at fixed spin and parity. The code is parallelized using the Message Passing Interface and a master-slaves dynamical load-balancing approach. Restrictions: The program uses two-body interaction in a restricted single-level basis. For example, GXPF1A in the pf-valence space. Running time: Depends on the system size and the number of processors used (from 1 min to several hours).
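The idea of building a level density from configuration-by-configuration first and second moments can be pictured with a toy sketch: each configuration contributes its dimension spread as a Gaussian whose centroid and width come from the moments. The numbers below are invented, and the actual code computes the moments exactly and uses a more refined distribution than this plain Gaussian.

    import math

    def level_density(E, configs):
        # configs: (dimension, centroid E_kappa, width sigma_kappa) per configuration
        rho = 0.0
        for dim, centroid, sigma in configs:
            rho += dim * math.exp(-0.5 * ((E - centroid) / sigma) ** 2) \
                       / (sigma * math.sqrt(2.0 * math.pi))
        return rho

    # hypothetical configurations for one fixed spin and parity (MeV units)
    configs = [(120, 5.0, 3.0), (4500, 12.0, 4.5), (80000, 20.0, 6.0)]
    print(level_density(10.0, configs))   # states per MeV at E = 10 MeV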
NASA Technical Reports Server (NTRS)
Justus, C. G.; Johnson, Dale
1990-01-01
The Global Reference Atmospheric Model (GRAM) is currently available in the 'GRAM-88' version (Justus, et al., 1986; 1988), which includes relatively minor upgrades and changes from the 'MOD-3' version (Justus, et al., 1980). Currently a project is underway to use large amounts of data, mostly collected under the Middle Atmosphere Program (MAP) to produce a major upgrade of the program planned for release as the GRAM-90 version. The new data and program revisions will particularly affect the 25-90 km height range. Sources of data and preliminary results are described here in the form of cross-sectional plots.
SITE CHARACTERIZATION LIBRARY VERSION 3.0
The Site Characterization Library is a CD that provides a centralized, field-portable source for site characterization information. Version 3 of the Site Characterization Library contains additional electronic documents and computer programs (beyond those in earlier versions) related to th...
Updated System-Availability and Resource-Allocation Program
NASA Technical Reports Server (NTRS)
Viterna, Larry
2004-01-01
A second version of the Availability, Cost and Resource Allocation (ACARA) computer program has become available. The first version was reported in an earlier tech brief. To recapitulate: ACARA analyzes the availability, mean-time-between-failures of components, life-cycle costs, and scheduling of resources of a complex system of equipment. ACARA uses a statistical Monte Carlo method to simulate the failure and repair of components while complying with user-specified constraints on spare parts and resources. ACARA evaluates the performance of the system on the basis of a mathematical model developed from a block-diagram representation. The previous version utilized the MS-DOS operating system and could not be run by use of the most recent versions of the Windows operating system. The current version incorporates the algorithms of the previous version but is compatible with Windows and utilizes menus and a file-management approach typical of Windows-based software.
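A minimal sketch of the kind of Monte Carlo failure-and-repair simulation ACARA performs, reduced to a single repairable component with exponential failure and repair times; the MTBF, MTTR, and mission time are hypothetical, and the actual program additionally handles block diagrams, spares, and resource constraints.

    import random

    def availability(mtbf, mttr, mission_time, n_runs=10000, seed=42):
        # crude Monte Carlo estimate of the fraction of mission time spent up
        rng = random.Random(seed)
        up_total = 0.0
        for _ in range(n_runs):
            t = 0.0
            while t < mission_time:
                ttf = rng.expovariate(1.0 / mtbf)        # time to next failure
                up_total += min(ttf, mission_time - t)
                t += ttf
                if t >= mission_time:
                    break
                t += rng.expovariate(1.0 / mttr)         # repair downtime
        return up_total / (n_runs * mission_time)

    print(availability(mtbf=1000.0, mttr=50.0, mission_time=8760.0))
    # compare with the steady-state value MTBF / (MTBF + MTTR) ~ 0.952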
Application of majority voting and consensus voting algorithms in N-version software
NASA Astrophysics Data System (ADS)
Tsarev, R. Yu; Durmuş, M. S.; Üstoglu, I.; Morozov, V. A.
2018-05-01
N-version programming is one of the most common techniques used to improve the reliability of software by building in fault tolerance and redundancy and by decreasing common-cause failures. N different equivalent software versions are developed by N different and isolated workgroups from the same software specification. The versions solve the same task and return results that have to be compared to determine the correct result. Decisions of the N different versions are evaluated by a voting algorithm, the so-called voter. In this paper, two of the most commonly used software voting algorithms, the majority voting algorithm and the consensus voting algorithm, are studied. The distinctive features of N-version programming with majority voting and N-version programming with consensus voting are described. These two algorithms make a decision about the correct result on the basis of the agreement matrix. However, if the equivalence relation on the agreement matrix is not satisfied, it is impossible to make a decision. It is shown that the agreement matrix can be transformed into an appropriate form by using Boolean compositions when the equivalence relation is satisfied.
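A compact sketch of the two voters discussed above, assuming each version's output can be compared with a simple agreement test; the agreement-matrix transformation via Boolean compositions described in the paper is not reproduced here.

    def majority_vote(outputs, agree=lambda a, b: a == b):
        # accept a result only if more than N/2 versions agree on it
        n = len(outputs)
        for candidate in outputs:
            if sum(1 for out in outputs if agree(out, candidate)) > n // 2:
                return candidate
        return None                     # no absolute majority

    def consensus_vote(outputs, agree=lambda a, b: a == b):
        # accept the result backed by the largest agreement group (plurality)
        groups = []
        for out in outputs:
            for g in groups:
                if agree(out, g[0]):
                    g.append(out)
                    break
            else:
                groups.append([out])
        groups.sort(key=len, reverse=True)
        if len(groups) == 1 or len(groups[0]) > len(groups[1]):
            return groups[0][0]
        return None                     # tie between agreement groups

    print(majority_vote([3.14, 3.14, 2.71]))   # -> 3.14
    print(consensus_vote([1, 1, 2, 3]))        # -> 1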
NASA Technical Reports Server (NTRS)
Anderson, W. F.; Conway, J. R.; Keller, L. C.
1972-01-01
The characteristics of the application program were developed to verify and demonstrate the SEL 840MP Multi-Processing Control System - Version I (MPCS/1). The application program emphasizes the display support and task control capabilities. The application program is further intended to be used as an aid to familiarization with MPCS/1. It complements the information provided in the MPCS/1 Users Guide, Volumes I and II.
NASA Technical Reports Server (NTRS)
Olmedo, L.
1980-01-01
The changes, modifications, and inclusions made to the current version of the MINIVER program are discussed. Extensive modifications were made to various subroutines, and a new plot package was added. This plot package is the Johnson Space Center DISSPLA Graphics System, currently driven under an 1110 EXEC 8 configuration. User instructions on executing the MINIVER program are provided and the plot package is described.
Code of Federal Regulations, 2013 CFR
2013-01-01
... “ENERGY STAR Program Requirements for [Compact Fluorescent Lamps] CFLs,” Version dated August 9, 2001... DOE's “ENERGY STAR Program Requirements for [Compact Fluorescent Lamps] CFLs,” Version dated August 9...
The Role of Graphic and Sanitized Violence in the Enjoyment of Television Dramas
ERIC Educational Resources Information Center
Weaver, Andrew J.; Wilson, Barbara J.
2009-01-01
This experiment explores the relationship between television violence and viewer enjoyment. Over 400 participants were randomly assigned to one of 15 conditions that were created by editing five TV programs into three versions each: A graphically violent version, a sanitized violent version, and a nonviolent version. After viewing, participants…
Effectiveness of Two Versions of a STD/HIV Prevention Program
2002-01-01
NAVAL HEALTH RESEARCH CENTER EFFECTIVENESS OF TWO VERSIONS OF A STD/HIV PREVENTION PROGRAM S. Booth-Kewley, R. A. Shaffer... R. Y. Minagawa, S. K. Brodine Report No. 01-01 Approved for public release; distribution unlimited... of a behavioral intervention called the STD/HIV Intervention Program (SHIP) in a sample of Marines. Marines were exposed to either a 6 hr or a 3 hr
Personalization of structural PDB files.
Woźniak, Tomasz; Adamiak, Ryszard W
2013-01-01
The PDB format is the one most commonly used by programs to define the three-dimensional structure of biomolecules. However, different programs often use different versions of the format, and thus far no comprehensive solution for unifying the PDB formats has been developed. Here we present an open-source, Python-based tool called PDBinout for processing and conversion of various versions of the PDB file format for biostructural applications. Moreover, PDBinout allows users to create their own PDB versions. PDBinout is freely available under the LGPL licence at http://pdbinout.ibch.poznan.pl.
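As a flavor of the fixed-column bookkeeping such a converter has to handle, the sketch below parses the coordinate fields of ATOM/HETATM records; the column positions follow the published wwPDB format description, the input file name is hypothetical, and this is not code from PDBinout itself.

    def parse_atom_record(line):
        # fixed columns of a PDB ATOM/HETATM record (0-based slices)
        return {
            "serial": int(line[6:11]),
            "name": line[12:16].strip(),
            "resName": line[17:20].strip(),
            "chainID": line[21].strip(),
            "resSeq": int(line[22:26]),
            "x": float(line[30:38]),
            "y": float(line[38:46]),
            "z": float(line[46:54]),
        }

    with open("model.pdb") as fh:        # hypothetical input file
        atoms = [parse_atom_record(l) for l in fh
                 if l.startswith(("ATOM", "HETATM"))]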
Regression Verification Using Impact Summaries
NASA Technical Reports Server (NTRS)
Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana
2013-01-01
Regression verification techniques are used to prove equivalence of syntactically similar programs. Checking equivalence of large programs, however, can be computationally expensive. Existing regression verification techniques rely on abstraction and decomposition techniques to reduce the computational effort of checking equivalence of the entire program. These techniques are sound but not complete. In this work, we propose a novel approach to improve scalability of regression verification by classifying the program behaviors generated during symbolic execution as either impacted or unimpacted. Our technique uses a combination of static analysis and symbolic execution to generate summaries of impacted program behaviors. The impact summaries are then checked for equivalence using an off-the-shelf decision procedure. We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution. Our evaluation on a set of sequential C artifacts shows that reducing the size of the summaries can help reduce the cost of software equivalence checking. Various reduction, abstraction, and compositional techniques have been developed to help scale software verification techniques to industrial-sized systems. Although such techniques have greatly increased the size and complexity of systems that can be checked, analysis of large software systems remains costly. Regression analysis techniques, e.g., regression testing [16], regression model checking [22], and regression verification [19], restrict the scope of the analysis by leveraging the differences between program versions. These techniques are based on the idea that if code is checked early in development, then subsequent versions can be checked against a prior (checked) version, leveraging the results of the previous analysis to reduce analysis cost of the current version. Regression verification addresses the problem of proving equivalence of closely related program versions [19]. These techniques compare two programs with a large degree of syntactic similarity to prove that portions of one program version are equivalent to the other. Regression verification can be used for guaranteeing backward compatibility, and for showing behavioral equivalence in programs with syntactic differences, e.g., when a program is refactored to improve its performance, maintainability, or readability. Existing regression verification techniques leverage similarities between program versions by using abstraction and decomposition techniques to improve scalability of the analysis [10, 12, 19]. The abstractions and decomposition in these techniques, e.g., summaries of unchanged code [12] or semantically equivalent methods [19], compute an over-approximation of the program behaviors. The equivalence checking results of these techniques are sound but not complete: they may characterize programs as not functionally equivalent when, in fact, they are equivalent. In this work we describe a novel approach that leverages the impact of the differences between two programs for scaling regression verification. We partition program behaviors of each version into (a) behaviors impacted by the changes and (b) behaviors not impacted (unimpacted) by the changes. Only the impacted program behaviors are used during equivalence checking. We then prove that checking equivalence of the impacted program behaviors is equivalent to checking equivalence of all program behaviors for a given depth bound.
In this work we use symbolic execution to generate the program behaviors and leverage control- and data-dependence information to facilitate the partitioning of program behaviors. The impacted program behaviors are termed impact summaries. The dependence analyses that facilitate the generation of the impact summaries, we believe, could be used in conjunction with other abstraction- and decomposition-based approaches [10, 12] as a complementary reduction technique. An evaluation of our regression verification technique shows that our approach is capable of leveraging similarities between program versions to reduce the size of the queries and the time required to check for logical equivalence. The main contributions of this work are: - A regression verification technique to generate impact summaries that can be checked for functional equivalence using an off-the-shelf decision procedure. - A proof that our approach is sound and complete with respect to the depth bound of symbolic execution. - An implementation of our technique using the LLVM compiler infrastructure, the klee Symbolic Virtual Machine [4], and a variety of Satisfiability Modulo Theory (SMT) solvers, e.g., STP [7] and Z3 [6]. - An empirical evaluation on a set of C artifacts which shows that the use of impact summaries can reduce the cost of regression verification.
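The final equivalence check can be pictured with a small Z3 example: encode the impacted behavior of each version as a path condition plus a symbolic output and ask the solver for an input on which the outputs differ. This is only a schematic illustration using the z3-solver Python bindings rather than the authors' tool chain, and the two "versions" are invented.

    from z3 import Int, And, Solver, sat

    x = Int("x")

    # version 1: impacted path  x > 0  returns x + x
    pc1, out1 = x > 0, x + x
    # version 2 (refactored): impacted path  x > 0  returns 2 * x
    pc2, out2 = x > 0, 2 * x

    s = Solver()
    # look for an input satisfying both impacted path conditions
    # on which the two outputs disagree
    s.add(And(pc1, pc2, out1 != out2))
    if s.check() == sat:
        print("not equivalent, counterexample:", s.model())
    else:
        print("impacted behaviors are equivalent")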
NASA Astrophysics Data System (ADS)
Chang, Chao-Hsi; Wang, Jian-Xiong; Wu, Xing-Gang
2006-11-01
An upgraded version of the package BCVEGPY2.0 [C.-H. Chang, J.-X. Wang, X.-G. Wu, Comput. Phys. Commun. 174 (2006) 241] is presented, which works under the LINUX system and is named BCVEGPY2.1. With this version and, additionally, a GNU C compiler, users may simulate the Bc events in various experimental environments very conveniently. It has been organized with better modularity and code reusability (less cross communication among the various modules) than BCVEGPY2.0. Furthermore, in the upgraded version a special execution scheme is arranged in which the GNU command make compiles the requested code with the help of a master makefile in the main code directory and then builds an executable file with the default name run. Finally, this paper may also be considered as an erratum, i.e., typographical errors in BCVEGPY2.0 and the corresponding corrections are listed. New version program (BCVEGPY2.1) summaryTitle of program: BCVEGPY2.1 Catalogue identifier: ADTJ_v2_1 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADTJ_v2_1 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Reference to original program: BCVEGPY2.0 Reference in CPC: Comput. Phys. Commun. 174 (2006) 241 Does the new version supersede the old program: No Computer: Any LINUX-based PC with FORTRAN 77 or FORTRAN 90 and a GNU C compiler Operating systems: LINUX Programming language used: FORTRAN 77/90 Memory required to execute with typical data: About 2.0 MB No. of lines in distributed program, including test data, etc.: 31 521 No. of bytes in distributed program, including test data, etc.: 1 310 179 Distribution format: tar.gz Nature of physical problem: Hadronic production of the Bc meson itself and its excited states Method of solution: The code can generate weighted and unweighted events, selectable by option. An interface to PYTHIA is provided to meet the needs of jet hadronization in the production. Restrictions on the complexity of the problem: The hadronic production of (cb¯)-quarkonium in S-wave and P-wave states via the mechanism of gluon-gluon fusion is given by the so-called 'complete calculation' approach. Reasons for new version: Responding to feedback from users, we rearrange the program in a convenient way so that it can be easily adapted by users to do simulations according to their own experimental environment (e.g. detector acceptances and experimental cuts). We have put much effort into rearranging the program into several modules with less cross communication among the modules; the main program is slimmed down, and all further actions are decoupled from the main program and can be easily called for various purposes. Typical running time: The typical running time is machine and user-parameter dependent. Typically, for production of the S-wave (cb¯)-quarkonium, when IDWTUP = 1, it takes about 20 hours on a 1.8 GHz Intel P4-processor machine to generate 1000 events; however, when IDWTUP = 3, it takes only about 40 minutes to generate 10^6 events. Production of the P-wave (cb¯)-quarkonium takes almost twice as long as that of its S-wave counterpart. Summary of the changes (improvements): (1) The structure and organization of the program have been changed a lot. The new version package BCVEGPY2.1 has been divided into several modules with less cross communication among the modules (some old version source files are divided into several parts for this purpose).
The main program is slimmed down and all further actions are decoupled from the main program so that they can be easily called for various applications. All of the Fortran code is organized in the main code directory, named bcvegpy2.1, which contains the main program, all of its prerequisite files, and subsidiary 'folders' (subdirectories of the main code directory). The method for setting the parameters is the same as that of the previous versions [C.-H. Chang, C. Driouich, P. Eerola, X.-G. Wu, Comput. Phys. Commun. 159 (2004) 192, hep-ph/0309120. [1
IMS Version 3 Student Data Base Maintenance Program.
ERIC Educational Resources Information Center
Brown, John R.
Computer routines that update the Instructional Management System (IMS) Version 3 student data base which supports the Southwest Regional Laboratory's (SWRL) student monitoring system are described. Written in IBM System 360 FORTRAN IV, the program updates the data base by adding, changing and deleting records, as well as adding and deleting…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-16
...; Comment Request; Election Assistance Commission's Voting System Test Laboratory Program Manual, Version 1... Paperwork Reduction Act of 1995, the U.S. Election Assistance Commission (EAC) invites the general public... information collection, EAC's Voting System Test Laboratory Program Manual, Version 1.0. Comments are invited...
NASA Technical Reports Server (NTRS)
Bjorklund, J. R.
1978-01-01
The cloud-rise preprocessor and multilayer diffusion computer programs were used by NASA in predicting concentrations and dosages downwind from normal and abnormal launches of rocket vehicles. These programs incorporated: (1) the latest data for the heat content and chemistry of rocket exhaust clouds; (2) provision for the automated calculation of surface water pH due to deposition of HCl from precipitation scavenging; (3) provision for automated calculation of concentration and dosage parameters at any level within the vertical grounds for which meteorological inputs have been specified; and (4) provision for execution of multiple cases of meteorological data. Procedures used to automatically calculate wind direction shear in a layer were updated.
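The surface-water pH calculation mentioned in item (2) reduces, in the idealized case of a strong acid deposited into initially neutral, unbuffered water, to pH = -log10([H+]); the sketch below is that textbook relation with made-up deposition numbers, not the program's actual scavenging model.

    import math

    def ph_after_hcl_deposition(moles_hcl, water_volume_liters):
        # HCl dissociates completely, so [H+] ~ deposited moles / volume
        # (valid when this concentration is far above the 1e-7 M background)
        h_conc = moles_hcl / water_volume_liters
        return -math.log10(h_conc)

    print(ph_after_hcl_deposition(moles_hcl=0.01, water_volume_liters=100.0))  # ~4.0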
MCNP Output Data Analysis with ROOT (MODAR)
NASA Astrophysics Data System (ADS)
Carasco, C.
2010-06-01
MCNP Output Data Analysis with ROOT (MODAR) is a tool based on CERN's ROOT software. MODAR has been designed to handle time-energy data issued by MCNP simulations of neutron inspection devices using the associated particle technique. MODAR exploits ROOT's Graphical User Interface and functionalities to visualize and process MCNP simulation results in a fast and user-friendly way. MODAR allows the detection system time resolution (which cannot be modeled with MCNP), the detector energy response function, and counting statistics to be taken into account in a straightforward way. Program summaryProgram title: MODAR Catalogue identifier: AEGA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGA_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 155 373 No. of bytes in distributed program, including test data, etc.: 14 815 461 Distribution format: tar.gz Programming language: C++ Computer: Most Unix workstations and PC Operating system: Most Unix systems, Linux and Windows, provided the ROOT package has been installed. Examples were tested under Suse Linux and Windows XP. RAM: Depends on the size of the MCNP output file. The example presented in the article, which involves three two-dimensional 139×740-bin histograms, allocates about 60 MB. These data are running under ROOT and include consumption by ROOT itself. Classification: 17.6 External routines: ROOT version 5.24.00 ( http://root.cern.ch/drupal/) Nature of problem: The output of an MCNP simulation is an ASCII file. The data processing is usually performed by copying and pasting the relevant parts of the ASCII file into Microsoft Excel. Such an approach is satisfactory when the quantity of data is small but is not efficient when the size of the simulated data is large, for example when time-energy correlations are studied in detail such as in problems involving the associated particle technique. In addition, since the finite time resolution of the simulated detector cannot be modeled with MCNP, systems in which time-energy correlation is crucial cannot be described in a satisfactory way. Finally, realistic particle energy deposit in detectors is calculated with MCNP in a two-step process involving type-5 then type-8 tallies. In the first step, the photon flux energy spectrum associated with a time region is selected and serves as a source energy distribution for the second step. Thus, several files must be manipulated before getting the result, which can be time consuming if one needs to study several time regions or different detector performances. In the same way, modeling counting statistics obtained in a limited acquisition time requires several steps and can also be time consuming. Solution method: In order to overcome the previous limitations, the MODAR C++ code has been written to make use of CERN's ROOT data analysis software. MCNP output data are read from the MCNP output file with dedicated routines. Two-dimensional histograms are filled and can be handled efficiently within the ROOT framework. To keep a user-friendly analysis tool, all processing and data display can be done by means of the ROOT Graphical User Interface. Specific routines have been written to include the detectors' finite time resolution and energy response function as well as counting statistics in a straightforward way.
Additional comments: The possibility of adding tallies has also been incorporated in MODAR in order to describe systems in which the signal from several detectors can be summed. Moreover, MODAR can be adapted to handle other problems involving two-dimensional data. Running time: The CPU time needed to smear a two-dimensional histogram depends on the size of the histogram. In the presented example, the time-energy smearing of one of the 139×740 two-dimensional histograms takes 3 minutes with a DELL computer equipped with INTEL Core 2.
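The time-resolution smearing that MODAR adds on top of MCNP output can be pictured as a Gaussian convolution along the time axis of the time-energy histogram. The sketch below does this with NumPy on a toy histogram of the size quoted above; it is an illustration of the idea, not MODAR's ROOT-based implementation.

    import numpy as np

    def smear_time_axis(hist, t_centers, sigma_t):
        # hist: (time bins x energy bins); build a Gaussian response matrix
        # R[i, j] = probability that a count in true time bin j lands in bin i
        diff = t_centers[:, None] - t_centers[None, :]
        resp = np.exp(-0.5 * (diff / sigma_t) ** 2)
        resp /= resp.sum(axis=0, keepdims=True)
        return resp @ hist               # smears the time axis only

    toy = np.random.poisson(5.0, size=(139, 740)).astype(float)
    smeared = smear_time_axis(toy, np.arange(139, dtype=float), sigma_t=2.0)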
NASA Astrophysics Data System (ADS)
Michel, N.; Stoitsov, M. V.
2008-04-01
The fast computation of the Gauss hypergeometric function 2F1 with all its parameters complex is a difficult task. Although the 2F1 function verifies numerous analytical properties involving power series expansions whose implementation is apparently immediate, their use is thwarted by instabilities induced by cancellations between very large terms. Furthermore, small areas of the complex plane, in the vicinity of z = exp(±iπ/3), are inaccessible using 2F1 power series linear transformations. In order to solve these problems, a generalization of R.C. Forrey's transformation theory has been developed. The latter has been successful in treating the 2F1 function with real parameters. As in real case transformation theory, the large canceling terms occurring in 2F1 analytical formulas are rigorously dealt with, but by way of a new method, directly applicable to the complex plane. Taylor series expansions are employed to enter complex areas outside the domain of validity of power series analytical formulas. The proposed algorithm, however, becomes unstable in general when |a|, |b|, |c| are moderate or large. As a physical application, the calculation of the wave functions of the analytical Pöschl-Teller-Ginocchio potential involving 2F1 evaluations is considered. Program summaryProgram title: hyp_2F1, PTG_wf Catalogue identifier: AEAE_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEAE_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 6839 No. of bytes in distributed program, including test data, etc.: 63 334 Distribution format: tar.gz Programming language: C++, Fortran 90 Computer: Intel i686 Operating system: Linux, Windows Word size: 64 bits Classification: 4.7 Nature of problem: The Gauss hypergeometric function 2F1, with all its parameters complex, is uniquely calculated in the frame of transformation theory with power series summations, thus providing a very fast algorithm. The evaluation of the wave functions of the analytical Pöschl-Teller-Ginocchio potential is treated as a physical application. Solution method: The Gauss hypergeometric function 2F1 verifies linear transformation formulas allowing consideration of arguments of a small modulus which then can be handled by a power series. They, however, give rise to indeterminate or numerically unstable cases, when b-a and c-a-b are equal or close to integers. They are properly dealt with through analytical manipulations of the Lanczos expression providing the Gamma function. The remaining zones of the complex plane uncovered by transformation formulas are dealt with by Taylor expansions of the 2F1 function around complex points where linear transformations can be employed. The Pöschl-Teller-Ginocchio potential wave functions are calculated directly with 2F1 evaluations. Restrictions: The algorithm provides full numerical precision in almost all cases for |a|, |b|, and |c| of the order of one or smaller, but starts to be less precise or unstable when they increase, especially through a, b, and c imaginary parts. While it is possible to run the code for moderate or large |a|, |b|, and |c| and obtain satisfactory results for some specified values, the code is very likely to be unstable in this regime.
Unusual features: Two different codes, one for the hypergeometric function and one for the Pöschl-Teller-Ginocchio potential wave functions, are provided in C++ and Fortran 90 versions. Running time: 20,000 2F1 function evaluations take an average of one second.
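For a quick cross-check of complex-parameter 2F1 values, the arbitrary-precision mpmath library can serve as a reference, and a naive power series shows why transformation theory is needed near the unit circle; the parameter values below are arbitrary examples, not taken from the distributed code.

    import mpmath as mp

    mp.mp.dps = 30                          # working precision (decimal digits)
    a, b, c = 1.5 + 0.5j, 2.0 - 1.0j, 3.2 + 0.1j
    z = 0.7 + 0.6j                          # close to |z| = 1

    print(mp.hyp2f1(a, b, c, z))            # arbitrary-precision reference value

    def hyp2f1_series(a, b, c, z, terms=300):
        # naive Gauss series, valid for |z| < 1 but slow/unstable near |z| = 1
        s, term = mp.mpc(0), mp.mpc(1)
        for n in range(terms):
            s += term
            term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        return s

    print(hyp2f1_series(a, b, c, z))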
ERIC Educational Resources Information Center
Çolak, Aysun; Tomris, Gözde; Diken, Ibrahim H.; Arikan, Arzu; Aksoy, Funda; Çelik, Seçil
2015-01-01
This study aims to describe the views of teachers, parents, and FSS-PSV counselors on the Preschool Version of First Step to Success Early Intervention Program (FSS-PSV) in preventing antisocial behaviors; in addition, the implementation process and contributions from the program will also be outlined. The study was conducted in six different…
Optics Program Modified for Multithreaded Parallel Computing
NASA Technical Reports Server (NTRS)
Lou, John; Bedding, Dave; Basinger, Scott
2006-01-01
A powerful high-performance computer program for simulating and analyzing adaptive and controlled optical systems has been developed by modifying the serial version of the Modeling and Analysis for Controlled Optical Systems (MACOS) program to impart capabilities for multithreaded parallel processing on computing systems ranging from supercomputers down to Symmetric Multiprocessing (SMP) personal computers. The modifications included the incorporation of OpenMP, a portable and widely supported application programming interface that can be used to explicitly add multithreaded parallelism to an application program under a shared-memory programming model. OpenMP was applied to parallelize ray-tracing calculations, one of the major computing components in MACOS. Multithreading is also used in the diffraction propagation of light in MACOS, based on pthreads (POSIX Threads, where "POSIX" signifies a portable operating system for UNIX). In tests of the parallelized version of MACOS, the speedup in ray-tracing calculations was found to be linear, or proportional to the number of processors, while the speedup in diffraction calculations ranged from 50 to 60 percent, depending on the type and number of processors. The parallelized version of MACOS is portable, and, to the user, its interface is basically the same as that of the original serial version of MACOS.
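MACOS itself gets its speedup from OpenMP threads inside compiled code, but the underlying pattern, independent rays traced concurrently, can be sketched in a few lines with Python's standard process pool; the ray-trace body below is a stand-in, not MACOS code.

    from concurrent.futures import ProcessPoolExecutor
    import math

    def trace_ray(angle):
        # stand-in for an expensive, independent ray trace through an optical train
        x = 0.0
        for _ in range(100000):
            x += math.sin(angle) * 1e-5
        return x

    angles = [i * 1e-3 for i in range(256)]

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(trace_ray, angles))   # rays traced in parallel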
FINDS: A fault inferring nonlinear detection system programmers manual, version 3.0
NASA Technical Reports Server (NTRS)
Lancraft, R. E.
1985-01-01
Detailed software documentation of the digital computer program FINDS (Fault Inferring Nonlinear Detection System) Version 3.0 is provided. FINDS is a highly modular and extensible computer program designed to monitor and detect sensor failures, while at the same time providing reliable state estimates. In this version of the program, the FINDS methodology is used to detect, isolate, and compensate for failures in simulated avionics sensors used by the Advanced Transport Operating Systems (ATOPS) Transport System Research Vehicle (TSRV) in a Microwave Landing System (MLS) environment. It is intended that this report serve as a programmer's guide to aid in the maintenance, modification, and revision of the FINDS software.
Plummer, Niel; Jones, Blair F.; Truesdell, Alfred Hemingway
1976-01-01
WATEQF is a FORTRAN IV computer program that models the thermodynamic speciation of inorganic ions and complex species in solution for a given water analysis. The original version (WATEQ) was written in 1973 by A. H. Truesdell and B. F. Jones in Programming Language/One (PL/1). With but a few exceptions, the thermochemical data, speciation, coefficients, and general calculation procedure of WATEQF are identical to those of the PL/1 version. This report notes the differences between WATEQF and WATEQ, demonstrates how to set up the input data to execute WATEQF, provides a test case for comparison, and makes available a listing of WATEQF. (Woodard-USGS)
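The flavor of a speciation calculation can be shown with a toy saturation-index example: activity coefficients from the extended Debye-Hückel equation (25 °C constants) and SI = log10(IAP/Ksp). The molalities and ionic strength below are invented, and WATEQF's actual procedure handles many more complexes and temperature corrections.

    import math

    def debye_huckel_gamma(z, ionic_strength, a0, A=0.5085, B=0.3281):
        # extended Debye-Hueckel activity coefficient at 25 C
        # (a0 = ion-size parameter in angstroms)
        s = math.sqrt(ionic_strength)
        return 10.0 ** (-A * z * z * s / (1.0 + B * a0 * s))

    def saturation_index(iap, ksp):
        # SI < 0: undersaturated, SI > 0: oversaturated
        return math.log10(iap / ksp)

    # toy calcite check with hypothetical molalities and log Ksp = -8.48
    I = 0.01
    a_ca = 1.0e-3 * debye_huckel_gamma(2, I, a0=6.0)
    a_co3 = 2.0e-5 * debye_huckel_gamma(2, I, a0=4.5)
    print(saturation_index(a_ca * a_co3, 10.0 ** (-8.48)))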
The 1991 version of the plume impingement computer program. Volume 2: User's input guide
NASA Technical Reports Server (NTRS)
Bender, Robert L.; Somers, Richard E.; Prendergast, Maurice J.; Clayton, Joseph P.; Smith, Sheldon D.
1991-01-01
The Plume Impingement Program (PLIMP) is a computer code used to predict impact pressures, forces, moments, heating rates, and contamination on surfaces due to direct impingement flowfields. Typically, it has been used to analyze the effects of rocket exhaust plumes on nearby structures from ground level to the vacuum of space. The program normally uses flowfields generated by the MOC, RAMP2, SPF/2, or SFPGEN computer programs. It is capable of analyzing gaseous and gas/particle flows. A number of simple subshapes are available to model the surfaces of any structure. The original PLIMP program has been modified many times over the last 20 years. The theoretical bases for the referenced major changes, and additional undocumented changes and enhancements since 1988, are summarized in volume 1 of this report. This volume is the User's Input Guide and should be substituted for all previous guides when running the latest version of the program. This version can operate on VAX and UNIX machines with NCAR graphics ability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsugane, Keisuke; Boku, Taisuke; Murai, Hitoshi
Recently, the Partitioned Global Address Space (PGAS) parallel programming model has emerged as a usable distributed memory programming model. XcalableMP (XMP) is a PGAS parallel programming language that extends base languages such as C and Fortran with directives in OpenMP-like style. XMP supports a global-view model that allows programmers to define global data and to map them to a set of processors, which execute the distributed global data as a single thread. In XMP, the concept of a coarray is also employed for local-view programming. In this study, we port Gyrokinetic Toroidal Code - Princeton (GTC-P), which is a three-dimensional gyrokinetic PIC code developed at Princeton University to study the microturbulence phenomenon in magnetically confined fusion plasmas, to XMP as an example of hybrid memory model coding with the global-view and local-view programming models. In local-view programming, the coarray notation is simple and intuitive compared with Message Passing Interface (MPI) programming while the performance is comparable to that of the MPI version. Thus, because the global-view programming model is suitable for expressing the data parallelism for a field of grid space data, we implement a hybrid-view version using a global-view programming model to compute the field and a local-view programming model to compute the movement of particles. Finally, the performance is degraded by 20% compared with the original MPI version, but the hybrid-view version facilitates more natural data expression for static grid space data (in the global-view model) and dynamic particle data (in the local-view model), and it also increases the readability of the code for higher productivity.
Comparison of In-Person vs. Digital Climate Education Program
NASA Astrophysics Data System (ADS)
Anderson, R. K.; Flora, J. A.; Saphir, M.
2017-12-01
In 2014, ACE (Alliance for Climate Education) evaluated the impact of its 45-minute live climate edutainment education program on the knowledge, attitudes and behavior of high school students with respect to climate change. The results showed gains in knowledge and increased engagement, as well as increased communication about climate change, with the number of students reporting talking about climate change with friends and family more than doubling. In 2016, ACE launched a digital version of its in-person edutainment education program, a 40-minute video version of the live program. This digital version, Our Climate Our Future (OCOF), has now been used by nearly 4,000 teachers nationwide and viewed by over 150,000 students. We experimentally tested the impact of the digital program (OCOF) compared to the live program and a control group. The experiment was conducted with 709 students in 27 classes at two North Carolina public high schools. Classes were assigned to one of three conditions: digital, live and control. In the digital version, students watched the 40-minute OCOF video featuring the same educator that presented the live program. In the live version, students received an identical 40-minute live presentation by an ACE staff educator. The control group received neither treatment. When compared to controls, both programs were effective in positively increasing climate change knowledge, climate justice knowledge, perceived self-efficacy to make climate-friendly behavior changes, and beliefs about climate change (all statistically significant at or above P<.01). In the areas of hope that people can solve climate change and intent to change behavior, only the live program showed significant increases. In these two areas, it may be that an in-person experience is key to affecting change. In light of these positive results, ACE plans to increase the use of OCOF in schools across the country to assist teachers in their efforts to teach about climate change.
GEMPAK 5.1 - A GENERAL METEOROLOGICAL PACKAGE (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
desJardins, M. L.
1994-01-01
GEMPAK is a general meteorological software package developed at NASA/Goddard Space Flight Center. It includes programs to analyze and display surface, upper-air, and gridded data, including model output. There are very general programs to list, edit, and plot data on maps, to display profiles and time series, to draw and fill contours, to draw streamlines, to plot symbols for clouds, sky cover, and pressure tendency, and to draw cross sections from gridded and sounding data. In addition, there are Barnes objective analysis programs to grid surface and upper-air data. The programs include the capabilities to derive meteorological parameters from those found in the dataset, to perform vertical interpolations of sounding data to different coordinate systems, and to compute an extensive set of gridded diagnostic quantities by specifying various nested combinations of scalar and vector arithmetic, algebraic, and differential operators. The GEMPAK 5.1 graphics/transformation subsystem, GEMPLT, provides device-independent graphics. GEMPLT also has the capability to display output in a variety of map projections or overlaid on satellite imagery. GEMPAK 5.1 is written in FORTRAN 77 and C and has been implemented on VAX computers under VMS and on computers running the UNIX operating system. During installation and normal use, this package occupies approximately 100 MB of hard disk space. The UNIX version of GEMPAK includes drivers for several graphic output systems including MIT's X Window System (X11,R4), Sun GKS, PostScript (color and monochrome), Silicon Graphics, and others. The VMS version of GEMPAK also includes drivers for several graphic output systems including PostScript (color and monochrome). The VMS version is delivered with the object code for the Transportable Applications Environment (TAE) program, version 4.1, which serves as a user interface. A color monitor is recommended for displaying maps on video display devices. Data for rendering regional maps are included with this package. The standard distribution medium for the UNIX version of GEMPAK 5.1 is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The standard distribution medium for the VMS version of GEMPAK 5.1 is a 6250 BPI 9-track magnetic tape in DEC VAX BACKUP format. The VMS version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. This program was developed in 1985. The current version, GEMPAK 5.1, was released in 1992. The package is delivered with source code. An extensive collection of subroutine libraries allows users to format data for use by GEMPAK, to develop new programs, and to enhance existing ones.
GLoBES: General Long Baseline Experiment Simulator
NASA Astrophysics Data System (ADS)
Huber, Patrick; Kopp, Joachim; Lindner, Manfred; Rolinec, Mark; Winter, Walter
2007-09-01
GLoBES (General Long Baseline Experiment Simulator) is a flexible software package to simulate neutrino oscillation long baseline and reactor experiments. On the one hand, it contains a comprehensive abstract experiment definition language (AEDL), which allows the user to describe most classes of long baseline experiments at an abstract level. On the other hand, it provides a C-library to process the experiment information in order to obtain oscillation probabilities, rate vectors, and Δχ²-values. Currently, GLoBES is available for GNU/Linux. Since the source code is included, a port to other operating systems is in principle possible. GLoBES is an open source code that has previously been described in Computer Physics Communications 167 (2005) 195 and in Ref. [7]. The source code and a comprehensive User Manual for GLoBES v3.0.8 are now available from the CPC Program Library as described in the Program Summary below. The home of GLoBES is http://www.mpi-hd.mpg.de/~globes/. Program summaryProgram title: GLoBES version 3.0.8 Catalogue identifier: ADZI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 145 295 No. of bytes in distributed program, including test data, etc.: 1 811 892 Distribution format: tar.gz Programming language: C Computer: GLoBES builds and installs on 32bit and 64bit Linux systems Operating system: 32bit or 64bit Linux RAM: Typically a few MBs Classification: 11.1, 11.7, 11.10 External routines: GSL—The GNU Scientific Library, www.gnu.org/software/gsl/ Nature of problem: Neutrino oscillations are now established as the leading flavor transition mechanism for neutrinos. In a long history of many experiments, see, e.g., [1], two oscillation frequencies have been identified: the fast atmospheric and the slow solar oscillations, which are driven by the respective mass squared differences. In addition, there could be interference effects between these two oscillations, provided that the coupling given by the small mixing angle θ13 is large enough. Such interference effects include, for example, leptonic CP violation. In order to test the unknown oscillation parameters, i.e., the mixing angle θ13, the leptonic CP phase, and the neutrino mass hierarchy, new long-baseline and reactor experiments are proposed. These experiments send an artificial neutrino beam to a detector, or detect the neutrinos produced by a nuclear fission reactor. However, the presence of multiple solutions, which are intrinsic to neutrino oscillation probabilities [2-5], affects these measurements. Thus optimization strategies are required which maximally exploit the complementarity between experiments. Therefore, a modern, complete experiment simulation and analysis tool needs not only a highly accurate beam and detector simulation, but also powerful means to analyze correlations and degeneracies, especially for the combination of several experiments. The GLoBES software package is such a tool [6,7]. Solution method: GLoBES is a flexible software tool to simulate and analyze neutrino oscillation long-baseline and reactor experiments using a complete three-flavor description.
On the one hand, it contains a comprehensive abstract experiment definition language (AEDL), which makes it possible to describe most classes of long baseline and reactor experiments at an abstract level. On the other hand, it provides a C-library to process the experiment information in order to obtain oscillation probabilities, rate vectors, and Δχ²-values. In addition, it provides a binary program to test experiment definitions very quickly, before they are used by the application software. Restrictions: Currently restricted to discrete sets of sources and detectors. For example, the simulation of an atmospheric neutrino flux is not supported. Unusual features: Clear separation between experiment description and the simulation software. Additional comments: To find information on the latest version of the software and user manual, please check the author's web site, http://www.mpi-hd.mpg.de/~globes Running time: The examples included in the distribution take only a few minutes to complete. More sophisticated problems can take up to several days. References [1] V. Barger, D. Marfatia, K. Whisnant, Int. J. Mod. Phys. E 12 (2003) 569, hep-ph/0308123, and references therein. [2] G.L. Fogli, E. Lisi, Phys. Rev. D 54 (1996) 3667, hep-ph/9604415. [3] J. Burguet-Castell, M.B. Gavela, J.J. Gomez-Cadenas, P. Hernandez, O. Mena, Nucl. Phys. B 608 (2001) 301, hep-ph/0103258. [4] H. Minakata, H. Nunokawa, JHEP 0110 (2001) 001, hep-ph/0108085. [5] V. Barger, D. Marfatia, K. Whisnant, Phys. Rev. D 65 (2002) 073023, hep-ph/0112119. [6] P. Huber, M. Lindner, W. Winter, Comput. Phys. Commun. 167 (2005) 195. [7] P. Huber, J. Kopp, M. Lindner, M. Rolinec, W. Winter, Comput. Phys. Commun. 177 (2007) 432.
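As an illustration of the kind of quantity such a simulation library returns, the sketch below evaluates the standard two-flavor vacuum oscillation probability in Python. It is an independent example rather than part of GLoBES or its C API, and the baseline, energy, and mixing parameters are placeholder values.

```python
import numpy as np

def p_osc_two_flavor(L_km, E_GeV, sin2_2theta, dm2_eV2):
    """Two-flavor vacuum oscillation probability:
    P = sin^2(2*theta) * sin^2(1.267 * dm^2 [eV^2] * L [km] / E [GeV])."""
    return sin2_2theta * np.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

# Example: a 1300 km baseline scanned over neutrino energy (placeholder parameters)
E = np.linspace(0.5, 5.0, 10)   # GeV
print(p_osc_two_flavor(1300.0, E, 0.085, 2.5e-3))
```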
BUCKY instruction manual, version 3.3
NASA Technical Reports Server (NTRS)
Smith, James P.
1994-01-01
The computer program BUCKY is a p-version finite element package for the solution of structural problems. The current version of BUCKY solves 2-D plane stress, 3-D plane stress plasticity, 3-D axisymmetric, Mindlin and Kirchhoff plate bending, and buckling problems. The p-version of the finite element method is a highly accurate variant of the traditional finite element method. Example cases are presented to show the accuracy and application of BUCKY.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-16
... System Test Laboratories Program Manual, Version 2.0 AGENCY: United States Election Assistance Commission (EAC). ACTION: Notice; publication of Voting System Test Laboratories Program Manual, Version 2.0, for 60 day public comment period on EAC Web site. SUMMARY: The U.S. Election Assistance Commission (EAC...
Dr TIM: Ray-tracer TIM, with additional specialist scientific capabilities
NASA Astrophysics Data System (ADS)
Oxburgh, Stephen; Tyc, Tomáš; Courtial, Johannes
2014-03-01
We describe several extensions to TIM, a raytracing program for ray-optics research. These include relativistic raytracing; simulation of the external appearance of Eaton lenses, Luneburg lenses and generalised focusing gradient-index (GGRIN) lenses, which are types of perfect imaging devices; raytracing through interfaces between spaces with different optical metrics; and refraction with generalised confocal lenslet arrays, which are particularly versatile METATOYs. Catalogue identifier: AEKY_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKY_v2_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License No. of lines in distributed program, including test data, etc.: 106905 No. of bytes in distributed program, including test data, etc.: 6327715 Distribution format: tar.gz Programming language: Java. Computer: Any computer capable of running the Java Virtual Machine (JVM) 1.6. Operating system: Any, developed under Mac OS X Version 10.6 and 10.8.3. RAM: Typically 130 MB (interactive version running under Mac OS X Version 10.8.3) Classification: 14, 18. Catalogue identifier of previous version: AEKY_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183(2012)711 External routines: JAMA [1] (source code included) Does the new version supersede the previous version?: Yes Nature of problem: Visualisation of scenes that include scene objects that create wave-optically forbidden light-ray fields. Solution method: Ray tracing. Reasons for new version: Significant extension of the capabilities (see Summary of revisions), as demanded by our research. Summary of revisions: Added capabilities include the simulation of different types of camera moving at relativistic speeds relative to the scene; visualisation of the external appearance of generalised focusing gradient-index (GGRIN) lenses, including Maxwell fisheye, Eaton and Luneburg lenses; calculation of refraction at the interface between spaces with different optical metrics; and handling of generalised confocal lenslet arrays (gCLAs), a new type of METATOY. Unusual features: Specifically designed to visualise wave-optically forbidden light-ray fields; can visualise ray trajectories and geometric optics transformations; can simulate photos taken with different types of camera moving at relativistic speeds, interfaces between spaces with different optical metrics, the view through METATOYs and generalised focusing gradient-index lenses; can create anaglyphs (for viewing with coloured “3D glasses”), HDMI-1.4a standard 3D images, and random-dot autostereograms of the scene; integrable into web pages. Running time: Problem-dependent; typically seconds for a simple scene. References: [1] JAMA: A Java Matrix Package, http://math.nist.gov/javanumerics/jama/
[The Confusion Assessment Method: Transcultural adaptation of a French version].
Antoine, V; Belmin, J; Blain, H; Bonin-Guillaume, S; Goldsmith, L; Guerin, O; Kergoat, M-J; Landais, P; Mahmoudi, R; Morais, J A; Rataboul, P; Saber, A; Sirvain, S; Wolfklein, G; de Wazieres, B
2018-05-01
The Confusion Assessment Method (CAM) is a validated key tool in clinical practice and research programs to diagnose delirium and assess its severity. There is no validated French version of the CAM training manual and coding guide (Inouye SK). The aim of this study was to establish a consensual French version of the CAM and its manual. Cross-cultural adaptation was performed to achieve equivalence between the original version and a French-adapted version of the CAM manual. A rigorous process was conducted, including control of the cultural adequacy of the tool's components, double forward and back translations, reconciliation, expert committee review (including bilingual translators of different nationalities, a linguist, highly qualified clinicians, and methodologists), and pretesting. A consensual French version of the CAM was achieved. Implementation of the French version of the CAM in daily clinical practice will enable optimal diagnosis of delirium and enhance communication between health professionals in French-speaking countries. Validity and psychometric properties are being tested in a French multicenter cohort, opening up new perspectives for improved quality of care and research programs in French-speaking countries. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
ERIC Educational Resources Information Center
Foubert, John D.; Newberry, Johnathan T.
2006-01-01
Fraternity men (N = 261) at a small to midsized public university saw one of two versions of a rape prevention program or were in a control group. Program participants reported significant increases in empathy toward rape survivors and significant declines in rape myth acceptance, likelihood of raping, and likelihood of committing sexual assault.…
ERIC Educational Resources Information Center
ROSEN, ELLEN F.; STOLUROW, LAWRENCE M.
In order to find a good predictor of empirical difficulty, an operational definition of step size, ten programmer-judges rated change in complexity in two versions of a mathematics program, and these ratings were then compared with measures of empirical difficulty obtained from student response data. The two versions, a 54-frame booklet and a 35…
Epi Info - present and future.
Su, Y; Yoon, S S
2003-01-01
Epi Info is a suite of public domain computer programs for public health professionals developed by the Centers for Disease Control and Prevention (CDC). Epi Info is used for rapid questionnaire design, data entry and validation, data analysis including mapping and graphing, and creation of reports. Epi Info was originally created in 1985 using Turbo Pascal. In 1998, the last version of Epi Info for DOS, version 6, was released. Epi Info for DOS is currently supported by CDC but is no longer updated. The current version, Epi Info 2002, is Windows-based software developed using Microsoft Visual Basic. Approximately 300,000 downloads of Epi Info software occurred in 2002 from approximately 130 countries. These numbers make Epi Info probably one of the most widely distributed and used public domain programs in the world. The DOS version of Epi Info was translated into 13 languages, and efforts are underway to translate the Windows version into other major languages. Versions already exist for Spanish, French, Portuguese, Chinese, Japanese, and Arabic.
The IAEA neutron coincidence counting (INCC) and the DEMING least-squares fitting programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krick, M.S.; Harker, W.C.; Rinard, P.M.
1998-12-01
Two computer programs are described: (1) the INCC (IAEA or International Neutron Coincidence Counting) program and (2) the DEMING curve-fitting program. The INCC program is an IAEA version of the Los Alamos NCC (Neutron Coincidence Counting) code. The DEMING program is an upgrade of earlier Windows® and DOS codes with the same name. The versions described are INCC 3.00 and DEMING 1.11. The INCC and DEMING codes provide inspectors with the software support needed to perform calibration and verification measurements with all of the neutron coincidence counting systems used in IAEA inspections for the nondestructive assay of plutonium and uranium.
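The DEMING algorithm itself is not described in the record above; as a generic illustration of the calibration task such a fitting program supports, the Python sketch below fits a straight-line calibration curve by weighted least squares. All data values, names, and uncertainties are invented for the example.

```python
import numpy as np

# Hypothetical calibration data: known sample masses vs. measured count rates
mass = np.array([10.0, 50.0, 100.0, 250.0, 500.0])   # grams (made-up)
rate = np.array([4.1, 20.3, 41.2, 101.8, 205.5])      # counts/s (made-up)
sigma = np.array([0.2, 0.5, 0.8, 1.5, 3.0])           # 1-sigma rate uncertainties

# Weighted linear least squares: rate = a + b * mass, weights = 1 / sigma^2
W = np.diag(1.0 / sigma**2)
X = np.column_stack([np.ones_like(mass), mass])
normal = X.T @ W @ X
beta = np.linalg.solve(normal, X.T @ W @ rate)
cov = np.linalg.inv(normal)                           # parameter covariance matrix

print("intercept, slope:", beta)
print("1-sigma errors  :", np.sqrt(np.diag(cov)))
```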
AMPS/PC - AUTOMATIC MANUFACTURING PROGRAMMING SYSTEM
NASA Technical Reports Server (NTRS)
Schroer, B. J.
1994-01-01
The AMPS/PC system is a simulation tool designed to aid the user in defining the specifications of a manufacturing environment and then automatically writing code for the target simulation language, GPSS/PC. The domain of problems that AMPS/PC can simulate is manufacturing assembly lines with subassembly lines and manufacturing cells. The user defines the problem domain by responding to questions from the interface program. Based on the responses, the interface program creates an internal problem specification file. This file includes the manufacturing process network flow and the attributes for all stations, cells, and stock points. AMPS then uses the problem specification file as input for the automatic code generator program to produce a simulation program in the target language GPSS. The output of the generator program is the source code of the corresponding GPSS/PC simulation program. The system runs entirely on an IBM PC running PC DOS Version 2.0 or higher and is written in Turbo Pascal Version 4, requiring 640K of memory and one 360K disk drive. To execute the GPSS program, the PC must have the GPSS/PC System Version 2.0 from Minuteman Software resident. The AMPS/PC program was developed in 1988.
GDF v2.0, an enhanced version of GDF
NASA Astrophysics Data System (ADS)
Tsoulos, Ioannis G.; Gavrilis, Dimitris; Dermatas, Evangelos
2007-12-01
An improved version of the function estimation program GDF is presented. The main enhancements of the new version are: multi-output function estimation, the capability to define custom functions in the grammar, and selection of the error function. The new version has been evaluated on a series of classification and regression datasets that are widely used for the evaluation of such methods. It is compared to two known neural networks and outperforms them in 5 (out of 10) datasets. Program summaryTitle of program: GDF v2.0 Catalogue identifier: ADXC_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADXC_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 98 147 No. of bytes in distributed program, including test data, etc.: 2 040 684 Distribution format: tar.gz Programming language: GNU C++ Computer: The program is designed to be portable to all systems running the GNU C++ compiler Operating system: Linux, Solaris, FreeBSD RAM: 200000 bytes Classification: 4.9 Does the new version supersede the previous version?: Yes Nature of problem: The technique of function estimation tries to discover, from a series of input data, a functional form that best describes them. This can be performed with the use of parametric models whose parameters can adapt according to the input data. Solution method: Functional forms are created by genetic programming as approximations for the symbolic regression problem. Reasons for new version: The GDF package was extended in order to be more flexible and user-customizable than the old package. The user can extend the package by defining his own error functions, and he can extend the grammar of the package by adding new functions to the function repertoire. Also, the new version can perform function estimation of multi-output functions, and it can be used for classification problems. Summary of revisions: The following features have been added to the package GDF: Multi-output function approximation; the package can now approximate any function f: R^n → R^m, which also gives it the capability of performing classification and not only regression. User-defined functions can be added to the repertoire of the grammar, extending the regression capabilities of the package; this feature is limited to 3 functions, but this number can easily be increased. Capability of selecting the error function; apart from the mean square error, the package now offers the user other error functions, such as the mean absolute square error and the maximum square error, and user-defined error functions can be added to the set of error functions. More verbose output; the main program displays more information to the user, as well as the default values for the parameters. Also, the package gives the user the capability to define an output file, where the output of the gdf program for the testing set will be stored after the termination of the process. Additional comments: A technical report describing the revisions, experiments and test runs is packaged with the source code. Running time: Depending on the train data.
NASA Technical Reports Server (NTRS)
Justus, C. G.; Alyea, F. N.; Cunnold, D. M.; Jeffries, W. R., III; Johnson, D. L.
1991-01-01
A new (1990) version of the NASA/MSFC Global Reference Atmospheric Model (GRAM-90) was completed, and the program and key data base listings are presented. GRAM-90 incorporates extensive new data, mostly collected under the Middle Atmosphere Program, to produce a completely revised middle atmosphere model (20 to 120 km). At altitudes greater than 120 km, GRAM-90 uses the NASA Marshall Engineering Thermosphere model. Complete listings of all programs and major data bases are presented. A test case is also included.
MSFC crack growth analysis computer program, version 2 (users manual)
NASA Technical Reports Server (NTRS)
Creager, M.
1976-01-01
An updated version of the George C. Marshall Space Flight Center Crack Growth Analysis Program is described. The updated computer program has significantly expanded capabilities over the original one. This increased capability includes an extensive expansion of the library of stress intensity factors, plotting capability, increased design iteration capability, and the capability of performing proof test logic analysis. The technical approaches used within the computer program are presented, and the input and output formats and options are described. Details of the stress intensity equations, example data, and example problems are presented.
A new version of code Java for 3D simulation of the CCA model
NASA Astrophysics Data System (ADS)
Zhang, Kebo; Xiong, Hailing; Li, Chao
2016-07-01
In this paper we present a new version of the program for the CCA model. To benefit from the advantages of the latest technologies, we migrated the running environment from JDK 1.6 to JDK 1.7, and the old program was reorganized into a new framework, which improves its extensibility.
ERIC Educational Resources Information Center
Kim, Scott Sungki
2013-01-01
The present research study investigated the effects of 8 versions of a computer-based vocabulary learning program on receptive and productive knowledge levels of college students. The participants were 106 male and 103 female Korean EFL students from Kyungsung University and Kwandong University in Korea. Students who participated in versions of…
MODIFIED N.R.C. VERSION OF THE U.S.G.S. SOLUTE TRANSPORT MODEL. VOLUME 1. MODIFICATIONS
The methods described in the report can be used with the modified N.R.C. version of the U.S.G.S. Solute Transport Model to predict the concentration of chemical parameters in a contaminant plume. The two volume report contains program documentation and user's manual. The program ...
Master Teachers as Professional Developers: Managing Conflicting Versions of Professionalism
ERIC Educational Resources Information Center
Montecinos, Carmen; Pino, Mauricio; Campos-Martinez, Javier; Domínguez, Rosario; Carreño, Claudia
2014-01-01
As education's main workforce, teachers have been the target of policies designed to shape and affirm new versions of professionalism. This paper examines this issue as it is exemplified by the Teachers of Teachers Network (TTN), a program developed by Chile's Ministry of Education. As a program designed to identify and reward high quality…
Markov chains for testing redundant software
NASA Technical Reports Server (NTRS)
White, Allan L.; Sjogren, Jon A.
1988-01-01
A preliminary design for a validation experiment has been developed that addresses several problems unique to assuring the extremely high quality of multiple-version programs in process-control software. The procedure uses Markov chains to model the error states of the multiple version programs. The programs are observed during simulated process-control testing, and estimates are obtained for the transition probabilities between the states of the Markov chain. The experimental Markov chain model is then expanded into a reliability model that takes into account the inertia of the system being controlled. The reliability of the multiple version software is computed from this reliability model at a given confidence level using confidence intervals obtained for the transition probabilities during the experiment. An example demonstrating the method is provided.
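The central statistical step described above, estimating the transition probabilities of a Markov chain from observed error-state sequences, can be sketched in a few lines of Python. The state labels and the observation sequence below are invented purely for illustration.

```python
import numpy as np

# Hypothetical observed error states of a multiple-version program during testing
states = ["ok", "ok", "1-err", "ok", "ok", "2-err", "ok", "1-err", "1-err", "ok"]
labels = sorted(set(states))
index = {s: i for i, s in enumerate(labels)}

# Count observed transitions, then normalize each row to estimate probabilities
counts = np.zeros((len(labels), len(labels)))
for a, b in zip(states[:-1], states[1:]):
    counts[index[a], index[b]] += 1
row_totals = counts.sum(axis=1, keepdims=True)
P_hat = np.divide(counts, row_totals, out=np.zeros_like(counts), where=row_totals > 0)

print(labels)
print(P_hat)   # row i holds the estimated transition probabilities out of state i
```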
Star adaptation for two algorithms used on serial computers
NASA Technical Reports Server (NTRS)
Howser, L. M.; Lambiotte, J. J., Jr.
1974-01-01
Two representative algorithms used on a serial computer and presently executed on the Control Data Corporation 6000 computer were adapted to execute efficiently on the Control Data STAR-100 computer. Gaussian elimination for the solution of simultaneous linear equations and the Gauss-Legendre quadrature formula for the approximation of an integral are the two algorithms discussed. A description is given of how the programs were adapted for STAR and why these adaptations were necessary to obtain an efficient STAR program. Some points to consider when adapting an algorithm for STAR are discussed. Program listings of the 6000 version coded in 6000 FORTRAN, the adapted STAR version coded in 6000 FORTRAN, and the STAR version coded in STAR FORTRAN are presented in the appendices.
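For readers unfamiliar with the second algorithm, a compact modern equivalent of Gauss-Legendre quadrature is shown below; this is a generic Python/NumPy sketch, not the STAR or CDC 6000 FORTRAN code described in the report.

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """Approximate the integral of f on [a, b] with an n-point Gauss-Legendre rule."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes and weights on [-1, 1]
    t = 0.5 * (b - a) * x + 0.5 * (b + a)       # map nodes to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

# Example: the integral of sin(x) over [0, pi] is exactly 2
print(gauss_legendre(np.sin, 0.0, np.pi, 8))    # approximately 2.0
```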
New version of PLNoise: a package for exact numerical simulation of power-law noises
NASA Astrophysics Data System (ADS)
Milotti, Edoardo
2007-08-01
In a recent paper I have introduced a package for the exact simulation of power-law noises and other colored noises [E. Milotti, Comput. Phys. Comm. 175 (2006) 212]: in particular, the algorithm generates 1/f^α noises with 0<α⩽2. Here I extend the algorithm to generate 1/f^α noises with 2<α⩽4 (black noises). The method is exact in the sense that it produces a sampled process with a theoretically guaranteed range-limited power-law spectrum for any arbitrary sequence of sampling intervals, i.e. the sampling times may be unevenly spaced. Program summaryTitle of program: PLNoise Catalogue identifier: ADXV_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADXV_v2_0.html Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Programming language used: ANSI C Computer: Any computer with an ANSI C compiler: the package has been tested with gcc version 3.2.3 on Red Hat Linux 3.2.3-52 and gcc version 4.0.0 and 4.0.1 on Apple Mac OS X-10.4 Operating system: All operating systems capable of running an ANSI C compiler RAM: The code of the test program is very compact (about 60 Kbytes), but the program works with list management and allocates memory dynamically; in a typical run with average list length 2·10, the RAM taken by the list is 200 Kbytes External routines: The package needs external routines to generate uniform and exponential deviates. The implementation described here uses the random number generation library ranlib freely available from Netlib [B.W. Brown, J. Lovato, K. Russell: ranlib, available from Netlib, http://www.netlib.org/random/index.html, select the C version ranlib.c], but it has also been successfully tested with the random number routines in Numerical Recipes [W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, second ed., Cambridge Univ. Press, Cambridge, 1992, pp. 274-290]. Notice that ranlib requires a pair of routines from the linear algebra package LINPACK, and that the distribution of ranlib includes the C source of these routines, in case LINPACK is not installed on the target machine. No. of lines in distributed program, including test data, etc.: 2975 No. of bytes in distributed program, including test data, etc.: 194 588 Distribution format: tar.gz Catalogue identifier of previous version: ADXV_v1_0 Journal reference of previous version: Comput. Phys. Comm. 175 (2006) 212 Does the new version supersede the previous version?: Yes Nature of problem: Exact generation of different types of colored noise. Solution method: Random superposition of relaxation processes [E. Milotti, Phys. Rev. E 72 (2005) 056701], possibly followed by an integration step to produce noise with spectral index >2. Reasons for the new version: Extension to 1/f^α noises with spectral index 2<α⩽4: the new version generates both noises with spectral index 0<α⩽2 and noises with spectral index 2<α⩽4. Summary of revisions: Although the overall structure remains the same, one routine has been added and several changes have been made throughout the code to include the new integration step. Unusual features: The algorithm is theoretically guaranteed to be exact, and unlike all other existing generators it can generate samples with uneven spacing. Additional comments: The program requires an initialization step; for some parameter sets this may become rather heavy.
Running time: Running time varies widely with different input parameters, however in a test run like the one in Section 3 in the long write-up, the generation routine took on average about 75 μs for each sample.
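A drastically simplified, evenly sampled Python sketch of the underlying idea (superposing first-order relaxation processes with logarithmically spaced rates so that their summed Lorentzian spectra approximate a 1/f spectrum) is given below. It is not Milotti's exact, unevenly sampled algorithm, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pink_like_noise(n, dt, lam_min=1e-2, lam_max=1e2, n_proc=30):
    """Approximate 1/f noise on an even grid by summing unit-variance AR(1)
    (discretized Ornstein-Uhlenbeck) components with log-spaced relaxation
    rates; equal-power components with log-spaced rates give a roughly 1/f
    spectrum between lam_min and lam_max."""
    lams = np.logspace(np.log10(lam_min), np.log10(lam_max), n_proc)
    phi = np.exp(-lams * dt)               # AR(1) coefficients
    x = np.zeros(n_proc)
    out = np.empty(n)
    for i in range(n):
        # each component keeps unit variance: x <- phi*x + sqrt(1-phi^2)*N(0,1)
        x = phi * x + np.sqrt(1.0 - phi**2) * rng.standard_normal(n_proc)
        out[i] = x.sum()
    return out

y = pink_like_noise(n=4096, dt=0.01)
print(y[:5])
```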
Eye movements as a function of response contingencies measured by blackout technique
Doran, Judith; Holland, James G.
1971-01-01
A program may have a low error rate but, at the same time, require little of the student and teach him little. A measure to supplement error rate in evaluating a program has recently been developed. This measure, called the blackout ratio, is the percentage of material that may be deleted without increasing the error rate. In high blackout-ratio programs, obtaining a correct answer is contingent upon only a small portion of the item. The present study determined if such low response-contingent material is read less thoroughly than programmed material that is heavily response-contingent. Eye movements were compared for two versions of the same program that differed only in the choice of the omitted words. The alteration of the required responses resulted in a version with a higher blackout ratio than the original version, which had a low blackout ratio. Eighteen undergraduates received half their material from the high and half their material from the low blackout-ratio version. The order was counterbalanced. Location and duration of all eye fixations in each item were recorded by a Mackworth Eye Marker Camera. On high blackout-ratio material, subjects used fewer fixations, shorter fixation time, and shorter scanning time. High blackout-ratio material failed to evoke the students' attention. PMID:16795275
NASA Technical Reports Server (NTRS)
Saltsman, James F.
1992-01-01
This manual presents computer programs for characterizing and predicting fatigue and creep-fatigue resistance of metallic materials in the high-temperature, long-life regime for isothermal and nonisothermal fatigue. The programs use the total strain version of Strainrange Partitioning (TS-SRP). An extensive database has also been developed in a parallel effort. This database is probably the largest source of high-temperature, creep-fatigue test data available in the public domain and can be used with other life prediction methods as well. This users manual, software, and database are all in the public domain and are available through COSMIC (382 East Broad Street, Athens, GA 30602; (404) 542-3265, FAX (404) 542-4807). Two disks accompany this manual. The first disk contains the source code, executable files, and sample output from these programs. The second disk contains the creep-fatigue data in a format compatible with these programs.
MPI implementation of PHOENICS: A general purpose computational fluid dynamics code
NASA Astrophysics Data System (ADS)
Simunovic, S.; Zacharia, T.; Baltas, N.; Spalding, D. B.
1995-03-01
PHOENICS is a suite of computational analysis programs that are used for the simulation of fluid flow, heat transfer, and dynamical reaction processes. The parallel version of the solver EARTH for the Computational Fluid Dynamics (CFD) program PHOENICS has been implemented using the Message Passing Interface (MPI) standard. Implementation of the MPI version of PHOENICS makes this computational tool portable to a wide range of parallel machines and enables the use of high performance computing for large scale computational simulations. MPI libraries are available on several parallel architectures, making the program usable across different architectures as well as on heterogeneous computer networks. The Intel Paragon NX and MPI versions of the program have been developed and tested on the massively parallel supercomputers Intel Paragon XP/S 5, XP/S 35, and Kendall Square Research, and on the multiprocessor SGI Onyx computer at Oak Ridge National Laboratory. The preliminary testing results of the developed program have shown scalable performance for reasonably sized computational domains.
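The parallelization pattern described here, a distributed-memory field solver built on message passing, can be illustrated with a minimal mpi4py sketch of a 1-D domain decomposition with ghost-cell exchange. This is an independent toy example under the assumption that mpi4py is available, not PHOENICS/EARTH code.

```python
# Run with, e.g.:  mpiexec -n 4 python halo_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nloc = 8                                   # interior cells per rank (illustrative)
u = np.full(nloc + 2, float(rank))         # local array with one ghost cell per side

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange ghost cells with neighbours (rightmost interior cell goes right, etc.)
comm.Sendrecv(sendbuf=u[nloc:nloc + 1], dest=right, recvbuf=u[0:1], source=left)
comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[nloc + 1:], source=right)

# One Jacobi-style smoothing sweep on the interior using the fresh ghost values
u[1:nloc + 1] = 0.5 * (u[0:nloc] + u[2:nloc + 2])
print(rank, u)
```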
NASA Technical Reports Server (NTRS)
Bradley, P. F.; Throckmorton, D. A.
1981-01-01
A study was completed to determine the sensitivity of computed convective heating rates to uncertainties in the thermal protection system thermal model. The parameters considered were: density, thermal conductivity, and specific heat of both the reusable surface insulation and its coating; coating thickness and emittance; and temperature measurement uncertainty. The assessment used a modified version of the computer program that calculates heating rates from temperature time histories. The original version of the program solves the direct one-dimensional heating problem, and this modified version is set up to solve the inverse problem. The modified program was used in thermocouple data reduction for shuttle flight data. Both nominal and altered thermal models were used to determine the necessity for accurate knowledge of the thermal protection system's material thermal properties. For many thermal properties, the sensitivity (the inaccuracy created in the calculated convective heating rate by an altered property) was very low.
COMPPAP - COMPOSITE PLATE BUCKLING ANALYSIS PROGRAM (IBM PC VERSION)
NASA Technical Reports Server (NTRS)
Smith, J. P.
1994-01-01
The Composite Plate Buckling Analysis Program (COMPPAP) was written to help engineers determine buckling loads of orthotropic (or isotropic) irregularly shaped plates without requiring hand calculations from design curves or extensive finite element modeling. COMPPAP is a one-element finite element program that utilizes high-order displacement functions. The high order of the displacement functions enables the user to produce results more accurate than traditional h-finite elements. The program uses these high-order displacement functions to perform a plane stress analysis of a general plate, followed by a buckling calculation based on the stresses found in the plane stress solution. The current version assumes a flat plate (constant thickness) subject to a constant edge load (normal or shear) on one or more edges. COMPPAP uses the power method to find the eigenvalues of the buckling problem. The power method provides an efficient solution when only one eigenvalue is desired. Once the eigenvalue is found, the eigenvector, which corresponds to the plate buckling mode shape, results as a by-product. A positive feature of the power method is that the dominant eigenvalue is found first, which in this case is the plate buckling load. The reported eigenvalue expresses a load factor to induce plate buckling. COMPPAP is written in ANSI FORTRAN 77. Two machine versions are available from COSMIC: a PC version (MSC-22428), which is for IBM PC 386 series and higher computers and compatibles running MS-DOS; and a UNIX version (MSC-22286). The distribution medium for both machine versions includes source code for both single and double precision versions of COMPPAP. The PC version includes source code which has been optimized for implementation within DOS memory constraints as well as sample executables for both the single and double precision versions of COMPPAP. The double precision versions of COMPPAP have been successfully implemented on an IBM PC 386 compatible running MS-DOS, a Sun4 series computer running SunOS, an HP-9000 series computer running HP-UX, and a CRAY X-MP series computer running UNICOS. COMPPAP requires 1 MB of RAM and the BLAS and LINPACK math libraries, which are included on the distribution medium. The COMPPAP documentation provides instructions for using the commercial post-processing package PATRAN for graphical interpretation of COMPPAP output. The UNIX version includes two electronic versions of the documentation: one in LaTeX format and one in PostScript format. The standard distribution medium for the PC version (MSC-22428) is a 5.25 inch 1.2Mb MS-DOS format diskette. The standard distribution medium for the UNIX version (MSC-22286) is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. COMPPAP was developed in 1992.
COMPPAP - COMPOSITE PLATE BUCKLING ANALYSIS PROGRAM (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Smith, J. P.
1994-01-01
The Composite Plate Buckling Analysis Program (COMPPAP) was written to help engineers determine buckling loads of orthotropic (or isotropic) irregularly shaped plates without requiring hand calculations from design curves or extensive finite element modeling. COMPPAP is a one-element finite element program that utilizes high-order displacement functions. The high order of the displacement functions enables the user to produce results more accurate than traditional h-finite elements. The program uses these high-order displacement functions to perform a plane stress analysis of a general plate, followed by a buckling calculation based on the stresses found in the plane stress solution. The current version assumes a flat plate (constant thickness) subject to a constant edge load (normal or shear) on one or more edges. COMPPAP uses the power method to find the eigenvalues of the buckling problem. The power method provides an efficient solution when only one eigenvalue is desired. Once the eigenvalue is found, the eigenvector, which corresponds to the plate buckling mode shape, results as a by-product. A positive feature of the power method is that the dominant eigenvalue is found first, which in this case is the plate buckling load. The reported eigenvalue expresses a load factor to induce plate buckling. COMPPAP is written in ANSI FORTRAN 77. Two machine versions are available from COSMIC: a PC version (MSC-22428), which is for IBM PC 386 series and higher computers and compatibles running MS-DOS; and a UNIX version (MSC-22286). The distribution medium for both machine versions includes source code for both single and double precision versions of COMPPAP. The PC version includes source code which has been optimized for implementation within DOS memory constraints as well as sample executables for both the single and double precision versions of COMPPAP. The double precision versions of COMPPAP have been successfully implemented on an IBM PC 386 compatible running MS-DOS, a Sun4 series computer running SunOS, an HP-9000 series computer running HP-UX, and a CRAY X-MP series computer running UNICOS. COMPPAP requires 1 MB of RAM and the BLAS and LINPACK math libraries, which are included on the distribution medium. The COMPPAP documentation provides instructions for using the commercial post-processing package PATRAN for graphical interpretation of COMPPAP output. The UNIX version includes two electronic versions of the documentation: one in LaTeX format and one in PostScript format. The standard distribution medium for the PC version (MSC-22428) is a 5.25 inch 1.2Mb MS-DOS format diskette. The standard distribution medium for the UNIX version (MSC-22286) is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. COMPPAP was developed in 1992.
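The power method mentioned in both COMPPAP entries above can be sketched generically as follows; this Python example operates on an arbitrary symmetric matrix rather than the plate-buckling eigenproblem itself.

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=1000):
    """Return the dominant eigenvalue and eigenvector of A by power iteration."""
    x = np.random.default_rng(1).standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = x @ y                 # Rayleigh quotient estimate (x has unit norm)
        x = y / np.linalg.norm(y)
        if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
            break
        lam = lam_new
    return lam_new, x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, vec = power_method(A)
print(lam)   # close to the largest eigenvalue, 3 + sqrt(3) ≈ 4.732
```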
TIMESERIESSTREAMING.VI: LabVIEW program for reliable data streaming of large analog time series
NASA Astrophysics Data System (ADS)
Czerwinski, Fabian; Oddershede, Lene B.
2011-02-01
With modern data acquisition devices that work fast and very precisely, scientists often face the task of dealing with huge amounts of data. These need to be rapidly processed and stored onto a hard disk. We present a LabVIEW program which reliably streams analog time series sampled at MHz rates. Its run time has virtually no limitation. We explicitly show how to use the program to extract time series from two experiments: for a photodiode detection system that tracks the position of an optically trapped particle and for a measurement of ionic current through a glass capillary. The program is easy to use and versatile, as the input can be any type of analog signal. Also, the data streaming software is simple, highly reliable, and can be easily customized to include, e.g., real-time power spectral analysis and Allan variance noise quantification. Program summaryProgram title: TimeSeriesStreaming.VI Catalogue identifier: AEHT_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 250 No. of bytes in distributed program, including test data, etc.: 63 259 Distribution format: tar.gz Programming language: LabVIEW ( http://www.ni.com/labview/) Computer: Any machine running LabVIEW 8.6 or higher Operating system: Windows XP and Windows 7 RAM: 60-360 Mbyte Classification: 3 Nature of problem: For numerous scientific and engineering applications, it is highly desirable to have an efficient, reliable, and flexible program to perform data streaming of time series sampled at high frequencies, possibly over long time intervals. This type of data acquisition often produces very large amounts of data not easily streamed onto a computer hard disk using standard methods. Solution method: This LabVIEW program is developed to directly stream any kind of time series onto a hard disk. Due to optimized timing and usage of computational resources, such as multicores and protocols for memory usage, this program provides extremely reliable data acquisition. In particular, the program is optimized to deal with large amounts of data, e.g., taken with high sampling frequencies and over long time intervals. The program can be easily customized for time series analyses. Restrictions: Only tested in Windows-operating LabVIEW environments, must use TDMS format, acquisition cards must be LabVIEW compatible, driver DAQmx installed. Running time: As desirable: microseconds to hours
The pEst version 2.1 user's manual
NASA Technical Reports Server (NTRS)
Murray, James E.; Maine, Richard E.
1987-01-01
This report is a user's manual for version 2.1 of pEst, a FORTRAN 77 computer program for interactive parameter estimation in nonlinear dynamic systems. The pEst program allows the user complete generality in defining the nonlinear equations of motion used in the analysis. The equations of motion are specified by a set of FORTRAN subroutines; a set of routines for a general aircraft model is supplied with the program and is described in the report. The report also briefly discusses the scope of the parameter estimation problem the program addresses. The report gives detailed explanations of the purpose and usage of all available program commands and a description of the computational algorithms used in the program.
1988-01-04
CGCS Program Versions: this section describes the evolution of the Czochralski Growth Control System (CGCS) software.
Park, Juyoung; Newman, David; McCaffrey, Ruth; Garrido, Jacinto J; Riccio, Mary Lou; Liehr, Patricia
Chair yoga (CY), a mind-body therapy, is a safe nonpharmacological approach for managing osteoarthritis (OA) in older adults who cannot participate in standing exercise. However, there is no linguistically tailored CY program for those with limited English proficiency (LEP). This 2-arm randomized controlled trial compared the effects of a linguistically tailored chair yoga program (English and Spanish versions) with those of a linguistically tailored Health Education Program (HEP; English and Spanish versions) on pain, physical function, and psychosocial outcomes. Participants with lower-extremity OA, recruited from 2 community sites, completed the Spanish (n = 40) or English (n = 60) version of twice-weekly 45-min CY or HEP sessions for 8 weeks. Data were collected at baseline, 4 weeks, 8 weeks, and 1- and 3-month follow-ups. The English and Spanish CY groups (but neither HEP language group) showed significant decreases in pain interference. Measures of OA symptoms, balance, depression, and social activities were not significantly different between the English and Spanish versions of CY or between the English and Spanish versions of HEP. It was concluded that the Spanish and English versions of CY and HEP were equivalent. Linguistically tailored CY could be implemented in aging-serving communities for persons with LEP.
NWChem: A comprehensive and scalable open-source solution for large scale molecular simulations
NASA Astrophysics Data System (ADS)
Valiev, M.; Bylaska, E. J.; Govind, N.; Kowalski, K.; Straatsma, T. P.; Van Dam, H. J. J.; Wang, D.; Nieplocha, J.; Apra, E.; Windus, T. L.; de Jong, W. A.
2010-09-01
The latest release of NWChem delivers an open-source computational chemistry package with extensive capabilities for large scale simulations of chemical and biological systems. Utilizing a common computational framework, diverse theoretical descriptions can be used to provide the best solution for a given scientific problem. Scalable parallel implementations and modular software design enable efficient utilization of current computational architectures. This paper provides an overview of NWChem focusing primarily on the core theoretical modules provided by the code and their parallel performance. Program summaryProgram title: NWChem Catalogue identifier: AEGI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Open Source Educational Community License No. of lines in distributed program, including test data, etc.: 11 709 543 No. of bytes in distributed program, including test data, etc.: 680 696 106 Distribution format: tar.gz Programming language: Fortran 77, C Computer: all Linux based workstations and parallel supercomputers, Windows and Apple machines Operating system: Linux, OS X, Windows Has the code been vectorised or parallelized?: Code is parallelized Classification: 2.1, 2.2, 3, 7.3, 7.7, 16.1, 16.2, 16.3, 16.10, 16.13 Nature of problem: Large-scale atomistic simulations of chemical and biological systems require efficient and reliable methods for ground and excited solutions of many-electron Hamiltonian, analysis of the potential energy surface, and dynamics. Solution method: Ground and excited solutions of many-electron Hamiltonian are obtained utilizing density-functional theory, many-body perturbation approach, and coupled cluster expansion. These solutions or a combination thereof with classical descriptions are then used to analyze potential energy surface and perform dynamical simulations. Additional comments: Full documentation is provided in the distribution file. This includes an INSTALL file giving details of how to build the package. A set of test runs is provided in the examples directory. The distribution file for this program is over 90 Mbytes and therefore is not delivered directly when download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. Running time: Running time depends on the size of the chemical system, complexity of the method, number of cpu's and the computational task. It ranges from several seconds for serial DFT energy calculations on a few atoms to several hours for parallel coupled cluster energy calculations on tens of atoms or ab-initio molecular dynamics simulation on hundreds of atoms.
Augmentation of Teaching Tools: Outsourcing the HSD Computing for SPSS Application
ERIC Educational Resources Information Center
Wang, Jianjun
2010-01-01
The widely used Tukey HSD index is not produced in the current version of SPSS (i.e., PASW Statistics, version 18), and a computer program named "HSD Calculator" has been chosen to address this problem. In comparison to hand calculation, this program application does not require table checking, which eliminates potential concern about the size of a…
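A minimal sketch of the same computation in Python is given below; it is not the "HSD Calculator" program itself. It assumes SciPy 1.7 or later (which provides the studentized range distribution), and the MSE, group size, and group count are placeholder values.

```python
import math
from scipy.stats import studentized_range

def tukey_hsd(mse, n_per_group, k_groups, df_error, alpha=0.05):
    """HSD = q(1 - alpha; k, df_error) * sqrt(MSE / n), for equal group sizes."""
    q_crit = studentized_range.ppf(1.0 - alpha, k_groups, df_error)
    return q_crit * math.sqrt(mse / n_per_group)

# Example: 4 groups of 10 observations each, MSE taken from the ANOVA table (made up)
print(tukey_hsd(mse=2.35, n_per_group=10, k_groups=4, df_error=36))
```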
Automated Test Assembly Using lp_Solve Version 5.5 in R
ERIC Educational Resources Information Center
Diao, Qi; van der Linden, Wim J.
2011-01-01
This article reviews the use of the software program lp_solve version 5.5 for solving mixed-integer automated test assembly (ATA) problems. The program is freely available under Lesser General Public License 2 (LGPL2). It can be called from the statistical language R using the lpSolveAPI interface. Three empirical problems are presented to…
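Readers without access to lp_solve or R can get a feel for the mixed-integer formulation with the toy Python sketch below, which uses the PuLP modelling library as a stand-in solver (a substitute for lp_solve/lpSolveAPI, not the software reviewed in the article). The item pool, information values, and constraints are invented.

```python
import pulp

# Invented item pool: per-item Fisher information at a target ability level,
# plus a content label for each item.
info = [0.42, 0.35, 0.51, 0.28, 0.44, 0.39, 0.47, 0.31]
content = ["alg", "alg", "geo", "geo", "alg", "num", "num", "geo"]
n_items, test_length = len(info), 4

prob = pulp.LpProblem("toy_test_assembly", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n_items)]

prob += pulp.lpSum(info[i] * x[i] for i in range(n_items))   # maximize test information
prob += pulp.lpSum(x) == test_length                         # fixed test length
for area in set(content):                                    # at least one item per area
    prob += pulp.lpSum(x[i] for i in range(n_items) if content[i] == area) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([i for i in range(n_items) if x[i].value() > 0.5])     # selected item indices
```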
An Alternative to QUERY: Batch-Searching of the ERIC Information Collections.
ERIC Educational Resources Information Center
Krahmer, Edward; Horne, Kent
A manual describing the RIC computer search program for retrieval of information from ERIC, CIJE, and other collections is presented. It is pointed out that two versions of this program have been developed. The first is for an IBM 360/370 computer. This version has been operational on a production basis for nearly a year. Four installations of…
PVAST Propeller Vibration and Strength Analysis Program Version 7.3 User’s Manual
2001-03-01
PVAST Propeller Vibration and Strength Analysis Program, Version 7.3, User's Manual. Koko, T.S.; Palmeter, M.F.; Chernuka, M.W. (Martec).
DFMSPH14: A C-code for the double folding interaction potential of two spherical nuclei
NASA Astrophysics Data System (ADS)
Gontchar, I. I.; Chushnyakova, M. V.
2016-09-01
This is a new version of the DFMSPH code designed to obtain the nucleus-nucleus potential by using the double folding model (DFM) and, in particular, to find the Coulomb barrier. The new version uses the charge, proton, and neutron density distributions provided by the user. We have also added an option for fitting the DFM potential with the Gross-Kalinowski profile. The main functionalities of the original code (e.g. the nucleus-nucleus potential as a function of the distance between the centers of mass of the colliding nuclei, the Coulomb barrier characteristics, etc.) have not been modified. Catalogue identifier: AEFH_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFH_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 7211 No. of bytes in distributed program, including test data, etc.: 114404 Distribution format: tar.gz Programming language: C Computer: PC and Mac Operating system: Windows XP and higher, MacOS, Unix/Linux Memory required to execute with typical data: below 10 Mbyte Classification: 17.9 Catalogue identifier of previous version: AEFH_v1_0 Journal reference of previous version: Comp. Phys. Comm. 181 (2010) 168 Does the new version supersede the previous version?: Yes Nature of physical problem: The code calculates in a semimicroscopic way the bare interaction potential between two colliding spherical nuclei as a function of the center of mass distance. The height and the position of the Coulomb barrier are found. The calculated potential is approximated by an analytical profile (Woods-Saxon or Gross-Kalinowski) near the barrier. Dependence of the barrier parameters upon the characteristics of the effective NN forces (like, e.g., the range of the exchange part of the nuclear term) can be investigated. Method of solution: The nucleus-nucleus potential is calculated using the double folding model with the Coulomb and the effective M3Y NN interactions. For the direct parts of the Coulomb and the nuclear terms, the Fourier transform method is used. In order to calculate the exchange parts, the density matrix expansion method is applied. Typical running time: less than 1 minute. Reason for new version: Many users asked us how to implement their own density distributions in DFMSPH; this option has now been added. We also found that the calculated double-folding potential (DFP) is approximated more accurately by the Gross-Kalinowski (GK) profile, so this option has been added as well.
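As a rough illustration of the Fourier-transform evaluation of the direct folding term (not the DFMSPH code itself, which uses realistic densities and the full M3Y interaction including the exchange part), the Python sketch below folds two toy Gaussian matter densities with a single Yukawa-type term on a 3-D grid. All parameters are placeholders.

```python
import numpy as np

# Cubic grid (fm); the box must comfortably contain both densities
N, half = 48, 15.0
x = (np.arange(N) - N // 2) * (2 * half / N)
dx = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2)

def toy_density(A, width):
    """Gaussian matter density normalized to A nucleons (illustrative only)."""
    rho = np.exp(-(r / width) ** 2)
    return rho * A / (rho.sum() * dx**3)

rho1 = toy_density(16.0, 2.5)    # light nucleus
rho2 = toy_density(40.0, 3.5)    # heavier nucleus

# Single Yukawa-type direct term v(s) = V0 * exp(-mu s) / (mu s), placeholder strengths;
# the r = 0 grid point is regularized at half a grid spacing.
mu, V0 = 2.5, -2134.0
s = np.maximum(r, 0.5 * dx)
v = V0 * np.exp(-mu * s) / (mu * s)

# Direct double-folding potential V_D(R) = (rho1 * v * rho2)(R): a double convolution,
# evaluated as a product in Fourier space (each convolution carries a factor dx^3).
VD = np.real(np.fft.ifftn(np.fft.fftn(rho1) * np.fft.fftn(v) * np.fft.fftn(rho2))) * dx**6

# With all three functions centred on the grid midpoint, V_D is centred there too;
# print its radial profile along the +x axis.
c = N // 2
print(x[c:c + 8])
print(VD[c:c + 8, c, c])
```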
SMMP v. 3.0—Simulating proteins and protein interactions in Python and Fortran
NASA Astrophysics Data System (ADS)
Meinke, Jan H.; Mohanty, Sandipan; Eisenmenger, Frank; Hansmann, Ulrich H. E.
2008-03-01
We describe a revised and updated version of the program package SMMP. SMMP is an open-source FORTRAN package for molecular simulation of proteins within the standard geometry model. It is designed as a simple and inexpensive tool for researchers and students to become familiar with protein simulation techniques. SMMP 3.0 sports a revised API increasing its flexibility, an implementation of the Lund force field, multi-molecule simulations, a parallel implementation of the energy function, Python bindings, and more. Program summaryTitle of program:SMMP Catalogue identifier:ADOJ_v3_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADOJ_v3_0.html Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions:Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html Programming language used:FORTRAN, Python No. of lines in distributed program, including test data, etc.:52 105 No. of bytes in distributed program, including test data, etc.:599 150 Distribution format:tar.gz Computer:Platform independent Operating system:OS independent RAM:2 Mbytes Classification:3 Does the new version supersede the previous version?:Yes Nature of problem:Molecular mechanics computations and Monte Carlo simulation of proteins. Solution method:Utilizes ECEPP2/3, FLEX, and Lund potentials. Includes Monte Carlo simulation algorithms for canonical, as well as for generalized ensembles. Reasons for new version:API changes and increased functionality. Summary of revisions:Added Lund potential; parameters used in subroutines are now passed as arguments; multi-molecule simulations; parallelized energy calculation for ECEPP; Python bindings. Restrictions:The consumed CPU time increases with the size of protein molecule. Running time:Depends on the size of the simulated molecule.
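As a rough illustration of the canonical Monte Carlo machinery such a package provides, the following minimal Python sketch performs Metropolis sweeps over a set of dihedral-like variables; the harmonic "energy" is a placeholder, not the ECEPP, FLEX, or Lund force fields used by SMMP.
    # Illustrative sketch only: canonical Metropolis Monte Carlo over dihedral-like variables.
    # The quadratic energy below is a placeholder for a real protein force field.
    import numpy as np

    rng = np.random.default_rng(0)

    def energy(angles):
        return 0.5 * float(np.sum(angles ** 2))   # placeholder potential

    def metropolis_sweep(angles, beta, step=0.2):
        for i in range(angles.size):
            trial = angles.copy()
            trial[i] += rng.uniform(-step, step)
            dE = energy(trial) - energy(angles)
            if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
                angles = trial                     # accept the move
        return angles

    conf = np.full(10, 1.0)                        # ten torsion-like degrees of freedom
    for _ in range(200):
        conf = metropolis_sweep(conf, beta=2.0)
    print(round(energy(conf), 3))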
Assessment of radionuclide databases in CAP88 mainframe version 1.0 and Windows-based version 3.0.
LaBone, Elizabeth D; Farfán, Eduardo B; Lee, Patricia L; Jannik, G Timothy; Donnelly, Elizabeth H; Foley, Trevor Q
2009-09-01
In this study the radionuclide databases for two versions of the Clean Air Act Assessment Package-1988 (CAP88) computer model were assessed in detail. CAP88 estimates radiation dose and the risk of health effects to human populations from radionuclide emissions to air. This program is used by several U.S. Department of Energy (DOE) facilities to comply with National Emission Standards for Hazardous Air Pollutants regulations. CAP88 Mainframe, referred to as version 1.0 on the U.S. Environmental Protection Agency Web site (http://www.epa.gov/radiation/assessment/CAP88/), was the very first CAP88 version, released in 1988. Some DOE facilities, including the Savannah River Site, still employ this version (1.0), while others use the more user-friendly personal computer Windows-based version 3.0 released in December 2007. Version 1.0 uses the program RADRISK, based on International Commission on Radiological Protection Publication 30, as its radionuclide database. Version 3.0 uses half-life, dose, and risk factor values based on Federal Guidance Report 13. Differences in these values could produce different results for the same input exposure data (the same scenario), depending on which version of CAP88 is used. Consequently, the differences between the two versions are being assessed in detail at Savannah River National Laboratory. The version 1.0 and 3.0 database files contain 496 and 838 radionuclides, respectively, and although one would expect the newer version to include all 496 of the earlier radionuclides, 35 radionuclides listed in version 1.0 are not included in version 3.0. The majority of these have either extremely short or extremely long half-lives or are no longer produced; however, some of the short-lived radionuclides might produce progeny of great interest at DOE sites. In addition, 122 radionuclides were found to have different half-lives in the two versions, 21 of them differing by more than 3 percent and 12 by more than 10 percent.
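As a rough illustration of the kind of cross-check described, the following minimal Python sketch compares half-lives between two small dictionaries standing in for the version 1.0 and 3.0 radionuclide tables and flags nuclides differing by more than 3 and 10 percent; the values shown are illustrative, not taken from CAP88.
    # Illustrative sketch only: flag half-life differences between two radionuclide tables.
    # The dictionaries and values stand in for the CAP88 v1.0 and v3.0 database files.
    half_life_v1 = {"H-3": 12.35, "Co-60": 5.271, "Sr-90": 29.12}            # years, illustrative
    half_life_v3 = {"H-3": 12.32, "Co-60": 5.271, "Sr-90": 28.79, "Cs-137": 30.17}

    only_in_v1 = sorted(set(half_life_v1) - set(half_life_v3))
    for nuclide in sorted(set(half_life_v1) & set(half_life_v3)):
        t1, t3 = half_life_v1[nuclide], half_life_v3[nuclide]
        pct = 100.0 * abs(t1 - t3) / t1
        flag = "over 10%" if pct > 10 else "over 3%" if pct > 3 else ""
        print(f"{nuclide:7s} {t1:8.3f} {t3:8.3f} {pct:6.2f}% {flag}")
    print("listed in v1.0 but not in v3.0:", only_in_v1)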
A finite difference Hartree-Fock program for atoms and diatomic molecules
NASA Astrophysics Data System (ADS)
Kobus, Jacek
2013-03-01
The newest version of the two-dimensional finite difference Hartree-Fock program for atoms and diatomic molecules is presented. This is an updated and extended version of the program published in this journal in 1996. It can be used to obtain reference, Hartree-Fock-limit values of total energies and multipole moments for a wide range of diatomic molecules and their ions in order to calibrate existing and develop new basis sets, calculate (hyper)polarizabilities (αzz, βzzz, γzzzz, Az,zz, Bzz,zz) of atoms, homonuclear and heteronuclear diatomic molecules and their ions via the finite field method, perform DFT-type calculations using LDA or B88 exchange functionals and LYP or VWN correlation functionals or the self-consistent multiplicative constant method, perform one-particle calculations with (smooth) Coulomb and Kramers-Henneberger potentials, and take account of finite nucleus models. The program is easy to install and compile (tarball+configure+make) and can be used to perform calculations within double- or quadruple-precision arithmetic. Catalogue identifier: ADEB_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADEB_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 2 No. of lines in distributed program, including test data, etc.: 171196 No. of bytes in distributed program, including test data, etc.: 9481802 Distribution format: tar.gz Programming language: Fortran 77, C. Computer: any 32- or 64-bit platform. Operating system: Unix/Linux. RAM: Case dependent, from a few MB to many GB Classification: 16.1. Catalogue identifier of previous version: ADEB_v1_0 Journal reference of previous version: Comput. Phys. Comm. 98(1996)346 Does the new version supersede the previous version?: Yes Nature of problem: The program finds virtually exact solutions of the Hartree-Fock and density functional theory type equations for atoms, diatomic molecules and their ions. The lowest energy eigenstates of a given irreducible representation and spin can be obtained. The program can be used to perform one-particle calculations with (smooth) Coulomb and Kramers-Henneberger potentials and also DFT-type calculations using LDA or B88 exchange functionals and LYP or VWN correlation functionals or the self-consistent multiplicative constant method. Solution method: Single-particle two-dimensional numerical functions (orbitals) are used to construct an antisymmetric many-electron wave function of the restricted open-shell Hartree-Fock model. The orbitals are obtained by solving the Hartree-Fock equations as coupled two-dimensional second-order (elliptic) partial differential equations (PDEs). The Coulomb and exchange potentials are obtained as solutions of the corresponding Poisson equations. The PDEs are discretized by the eighth-order central difference stencil on a two-dimensional single grid, and the resulting large and sparse system of linear equations is solved by the (multicolour) successive overrelaxation ((MC)SOR) method. The self-consistent-field iterations are interwoven with the (MC)SOR ones, and orbital energies and normalization factors are used to monitor the convergence. The accuracy of solutions depends mainly on the grid and the system under consideration, which means that within double-precision arithmetic one can obtain orbitals and energies having up to 12 significant figures. If more accurate results are needed, quadruple-precision floating-point arithmetic can be used.
Reasons for new version: Additional features, many modifications and corrections, improved convergence rate, overhauled code and documentation. Summary of revisions: see ChangeLog found in tar.gz archive Restrictions: The present version of the program is restricted to 60 orbitals. The maximum grid size is determined at compilation time. Unusual features: The program uses two C routines for allocating and deallocating memory. Several BLAS (Basic Linear Algebra System) routines are emulated by the program. When possible they should be replaced by their library equivalents. Additional comments: automake and autoconf tools are required to build and compile the program; checked with f77, gfortran and ifort compilers Running time: Very case dependent - from a few CPU seconds for the H2 defined on a small grid up to several weeks for the Hartree-Fock-limit calculations for 40-50 electron molecules.
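As a rough illustration of the relaxation solver at the heart of the method, the following minimal Python sketch applies plain SOR to a 2D Poisson problem with a second-order five-point stencil; the production code uses an eighth-order stencil and multicolour SOR interleaved with the self-consistent-field cycle, which this toy example does not attempt.
    # Illustrative sketch only: successive overrelaxation (SOR) for -laplace(u) = rho on the
    # unit square with u = 0 on the boundary, using a second-order five-point stencil.
    import numpy as np

    def sor_poisson(rho, h, omega=1.8, tol=1e-8, max_iter=20000):
        u = np.zeros_like(rho)
        n, m = rho.shape
        for _ in range(max_iter):
            max_change = 0.0
            for i in range(1, n - 1):
                for j in range(1, m - 1):
                    gauss_seidel = 0.25 * (u[i + 1, j] + u[i - 1, j] + u[i, j + 1]
                                           + u[i, j - 1] + h * h * rho[i, j])
                    new = (1.0 - omega) * u[i, j] + omega * gauss_seidel
                    max_change = max(max_change, abs(new - u[i, j]))
                    u[i, j] = new
            if max_change < tol:
                break
        return u

    n = 41
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    rho = 2.0 * np.pi ** 2 * np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
    u = sor_poisson(rho, h)
    print(round(u[n // 2, n // 2], 4), "(exact continuum value 1.0 at the centre)")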
User guide for MODPATH Version 7—A particle-tracking model for MODFLOW
Pollock, David W.
2016-09-26
MODPATH is a particle-tracking post-processing program designed to work with MODFLOW, the U.S. Geological Survey (USGS) finite-difference groundwater flow model. MODPATH version 7 is the fourth major release since its original publication. Previous versions were documented in USGS Open-File Reports 89–381 and 94–464 and in USGS Techniques and Methods 6–A41. MODPATH version 7 works with MODFLOW-2005 and MODFLOW–USG. Support for unstructured grids in MODFLOW–USG is limited to smoothed, rectangular-based quadtree and quadpatch grids. A software distribution package containing the computer program and supporting documentation, such as input instructions, output file descriptions, and example problems, is available from the USGS over the Internet (http://water.usgs.gov/ogw/modpath/).
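As a rough illustration of the semi-analytical cell-by-cell tracking that MODPATH-type codes use, the following minimal Python sketch computes the exit time of a particle from one rectangular cell when the face-normal velocity varies linearly across the cell; the geometry and velocities are made up, the toy assumes the particle moves toward the high face in both directions, and nothing here is MODPATH input or output.
    # Illustrative sketch only: exit time from one rectangular cell when the face-normal
    # velocity varies linearly across the cell (semi-analytical particle tracking).
    # Assumes, for brevity, that the particle moves toward the "high" face in x and y.
    import math

    def exit_time(v_low, v_high, width, start, v_start):
        grad = (v_high - v_low) / width              # velocity gradient across the cell
        if abs(grad) < 1e-12:                        # uniform velocity
            return (width - start) / v_start
        return math.log(v_high / v_start) / grad     # closed-form exit time

    dx = dy = 100.0                                  # cell size, m (illustrative)
    vx_low, vx_high = 1.0, 2.0                       # m/d at the two x faces
    vy_low, vy_high = 0.5, 0.5                       # uniform in y
    xp, yp = 20.0, 40.0                              # starting position inside the cell
    vxp = vx_low + (vx_high - vx_low) * xp / dx      # linear interpolation of velocity
    vyp = vy_low + (vy_high - vy_low) * yp / dy

    tx = exit_time(vx_low, vx_high, dx, xp, vxp)
    ty = exit_time(vy_low, vy_high, dy, yp, vyp)
    face = "x" if tx < ty else "y"
    print(f"particle leaves through the {face} face after {min(tx, ty):.1f} days")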
SMP: A solid modeling program version 2.0
NASA Technical Reports Server (NTRS)
Randall, D. P.; Jones, K. H.; Vonofenheim, W. H.; Gates, R. L.; Matthews, C. G.
1986-01-01
The Solid Modeling Program (SMP) provides the capability to model complex solid objects through the composition of primitive geometric entities. In addition to the construction of solid models, SMP has extensive facilities for model editing, display, and analysis. The geometric model produced by the software system can be output in a format compatible with existing analysis programs such as PATRAN-G. The present version of the SMP software supports six primitives: boxes, cones, spheres, paraboloids, tori, and trusses. The details for creating each of the major primitive types are presented. The analysis capabilities of SMP, including interfaces to existing analysis programs, are discussed.
Multistage Planetary Power Transmissions
NASA Technical Reports Server (NTRS)
Hadden, G. B.; Dyba, G. J.; Ragen, M. A.; Kleckner, R. J.; Sheynin, L.
1986-01-01
PLANETSYS simulates the thermomechanical performance of multistage planetary power transmissions. Two versions of the code were developed, an SKF version and a NASA version. The major function of the program is to compute the performance characteristics of the planet bearings for any of six kinematic inversions. PLANETSYS solves heat-balance equations for either steady-state or transient thermal conditions, and produces temperature maps for the mechanical system.
GR@PPA 2.8: Initial-state jet matching for weak-boson production processes at hadron collisions
NASA Astrophysics Data System (ADS)
Odaka, Shigeru; Kurihara, Yoshimasa
2012-04-01
The initial-state jet matching method introduced in our previous studies has been applied to the event generation of single W and Z production processes and diboson (WW, WZ and ZZ) production processes at hadron collisions in the framework of the GR@PPA event generator. The generated events reproduce the transverse momentum spectra of weak bosons continuously in the entire kinematical region. The matrix elements (ME) for hard interactions are still at the tree level. As in previous versions, the decays of weak bosons are included in the matrix elements. Therefore, spin correlations and phase-space effects in the decay of weak bosons are exact at the tree level. The program package includes custom-made parton shower programs as well as ME-based hard interaction generators in order to achieve self-consistent jet matching. The generated events can be passed to general-purpose event generators to make the simulation proceed down to the hadron level. Catalogue identifier: ADRH_v3_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADRH_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 112 146 No. of bytes in distributed program, including test data, etc.: 596 667 Distribution format: tar.gz Programming language: Fortran; with some included libraries coded in C and C++ Computer: All Operating system: Any UNIX-like system RAM: 1.6 Mega bytes at minimum Classification: 11.2 Catalogue identifier of previous version: ADRH_v2_0 Journal reference of previous version: Comput. Phys. Comm. 175 (2006) 665 External routines: Bash and Perl for the setup, and CERNLIB, ROOT, LHAPDF, PYTHIA according to the user's choice. Does the new version supersede the previous version?: No, this version supports only a part of the processes included in the previous versions. Nature of problem: We need to combine those processes including 0 jet and 1 jet in the matrix elements using an appropriate matching method, in order to simulate weak-boson production processes in the entire kinematical region. Solution method: The leading logarithmic components to be included in parton distribution functions and parton showers are subtracted from 1-jet matrix elements. Custom-made parton shower programs are provided to ensure satisfactory performance of the matching method. Reasons for new version: An initial-state jet matching method has been implemented. Summary of revisions: Weak-boson production processes associated with 0 jet and 1 jet can be consistently merged using the matching method. Restrictions: The built-in parton showers are not compatible with the PYTHIA new PS and the HERWIG PS. Unusual features: A large number of particles may be produced by the parton showers and passed to general-purpose event generators. Running time: About 10 min for initialization plus 25 s for every 1k-event generation for W production in the LHC condition, on a 3.0-GHz Intel Xeon processor with the default setting.
Simulation of electron spin resonance spectroscopy in diverse environments: An integrated approach
NASA Astrophysics Data System (ADS)
Zerbetto, Mirco; Polimeno, Antonino; Barone, Vincenzo
2009-12-01
We discuss in this work a new software tool, named E-SpiReS (Electron Spin Resonance Simulations), aimed at the interpretation of dynamical properties of molecules in fluids from electron spin resonance (ESR) measurements. The code implements an integrated computational approach (ICA) for the calculation of relevant molecular properties that are needed in order to obtain spectral lines. The protocol encompasses information from the atomistic level (quantum mechanical) to the coarse-grained level (hydrodynamical), and evaluates ESR spectra for rigid or flexible, single- or multi-labeled paramagnetic molecules in isotropic and ordered phases, based on a numerical solution of a stochastic Liouville equation. E-SpiReS automatically interfaces all the computational methodologies scheduled in the ICA in a way completely transparent to the user, who controls the whole calculation flow via a graphical interface. Parallelized algorithms are employed in order to allow running on computing clusters, and a Java web applet has been developed with which it is possible to work from any operating system, avoiding the problems of recompilation. E-SpiReS has been used in the study of a number of different systems and two relevant cases are reported to underline the promising applicability of the ICA to complex systems and the importance of similar software tools in handling a laborious protocol. Program summaryProgram title: E-SpiReS Catalogue identifier: AEEM_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEM_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPL v2.0 No. of lines in distributed program, including test data, etc.: 311 761 No. of bytes in distributed program, including test data, etc.: 10 039 531 Distribution format: tar.gz Programming language: C (core programs) and Java (graphical interface) Computer: PC and Macintosh Operating system: Unix and Windows Has the code been vectorized or parallelized?: Yes RAM: 2 048 000 000 Classification: 7.2 External routines: Babel-1.1, CLAPACK, BLAS, CBLAS, SPARSEBLAS, CQUADPACK, LEVMAR Nature of problem: Ab initio simulation of cw-ESR spectra of radicals in solution Solution method: E-SpiReS uses a hydrodynamic approach to calculate the diffusion tensor of the molecule, DFT methodologies to evaluate magnetic tensors, and linear algebra techniques to solve numerically the stochastic Liouville equation to obtain an ESR spectrum. Running time: Variable depending on the task, from seconds for small molecules in the fast-motional regime to hours for large molecules in viscous and/or ordered media.
NASA Technical Reports Server (NTRS)
Saltsman, J. F.
1994-01-01
TS-SRP/PACK is a set of computer programs for characterizing and predicting fatigue and creep-fatigue resistance of metallic materials in the high-temperature, long-life regime for isothermal and nonisothermal fatigue. The programs use the total strain version of the Strainrange Partitioning (TS-SRP). The user should be thoroughly familiar with the TS-SRP method before attempting to use any of these programs. The document for this program includes a theory manual as well as a detailed user's manual with a tutorial to guide the user in the proper use of TS-SRP. An extensive database has also been developed in a parallel effort. This database is an excellent source of high-temperature, creep-fatigue test data and can be used with other life-prediction methods as well. Five programs are included in TS-SRP/PACK along with the alloy database. The TABLE program is used to print the datasets, which are in NAMELIST format, in a reader friendly format. INDATA is used to create new datasets or add to existing ones. The FAIL program is used to characterize the failure behavior of an alloy as given by the constants in the strainrange-life relations used by the total strain version of SRP (TS-SRP) and the inelastic strainrange-based version of SRP. The program FLOW is used to characterize the flow behavior (the constitutive response) of an alloy as given by the constants in the flow equations used by TS-SRP. Finally, LIFE is used to predict the life of a specified cycle, using the constants characterizing failure and flow behavior determined by FAIL and FLOW. LIFE is written in interpretive BASIC to avoid compiling and linking every time the equation constants are changed. Four out of five programs in this package are written in FORTRAN 77 for IBM PC series and compatible computers running MS-DOS and are designed to read data using the NAMELIST format statement. The fifth is written in BASIC version 3.0 for IBM PC series and compatible computers running MS-DOS version 3.10. The executables require at least 239K of memory and DOS 3.1 or higher. To compile the source, a Lahey FORTRAN compiler is required. Source code modifications will be necessary if the compiler to be used does not support NAMELIST input. Probably the easiest revision to make is to use a list-directed READ statement. The standard distribution medium for this program is a set of two 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. TS-SRP/PACK was developed in 1992.
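As a rough illustration of how a strainrange-life characterization feeds a life prediction, the following minimal Python sketch evaluates power-law strainrange-life relations for the SRP components and combines them with an interaction damage rule; the coefficients and strainrange fractions are invented placeholders, not constants produced by FAIL or FLOW or values from the alloy database.
    # Illustrative sketch only: SRP-style life estimate from power-law strainrange-life
    # relations, delta_eps = C_ij * N^c_ij, combined with the interaction damage rule
    # 1/N_f = sum_ij F_ij / N_ij. All constants below are invented placeholders.
    coeff = {"PP": (0.5, -0.6), "PC": (0.2, -0.8), "CP": (0.1, -0.7), "CC": (0.3, -0.75)}

    def srp_life(delta_eps_in, fractions):
        # delta_eps_in: inelastic strainrange; fractions: F_ij for each SRP component (sum to 1)
        inverse_life = 0.0
        for component, f_ij in fractions.items():
            c_coeff, c_exp = coeff[component]
            n_ij = (delta_eps_in / c_coeff) ** (1.0 / c_exp)   # invert the life relation
            inverse_life += f_ij / n_ij
        return 1.0 / inverse_life

    print(round(srp_life(0.005, {"PP": 0.7, "CP": 0.3})), "cycles to failure (illustrative)")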
A new version of the helicopter aural detection program, ICHIN
NASA Technical Reports Server (NTRS)
Mueller, A. W.; Smith, C. D.; Shepherd, K. P.; Sullivan, B. M.
1986-01-01
NASA Langley Research Center personnel have conducted an evaluation of the helicopter aural detection program I Can Hear It Now (ICHIN version-5). This was accomplished using flight noise data of five helicopters, obtained from a joint NASA and U.S. Army acoustics measurement program. The evaluation consisted of presenting the noise data to a jury of 20 subjects and to the ICHIN-5 program. A comparative study was then made of the detection distances determined by the jury and predicted by ICHIN-5. This report presents the changes made in the ICHIN-5 program as a result of this comparative study. The changes represent current psychoacoustics and propagation knowledge.
Design and Implementation of a Distributed Version of the NASA Engine Performance Program
NASA Technical Reports Server (NTRS)
Cours, Jeffrey T.
1994-01-01
Distributed NEPP is a new version of the NASA Engine Performance Program that runs in parallel on a collection of Unix workstations connected through a network. The program is fault-tolerant, efficient, and shows significant speed-up in a multi-user, heterogeneous environment. This report describes the issues involved in designing distributed NEPP, the algorithms the program uses, and the performance distributed NEPP achieves. It develops an analytical model to predict and measure the performance of the simple distribution, multiple distribution, and fault-tolerant distribution algorithms that distributed NEPP incorporates. Finally, the appendices explain how to use distributed NEPP and document the organization of the program's source code.
NASA Astrophysics Data System (ADS)
Berger, Hana
This dissertation is concerned with the design and study of an evidence-based approach to the professional development of high-school physics teachers responding to the need to develop effective continuing professional development programs (CPD) in domains that require genuine changes in teachers' views, knowledge, and practice. The goals of the thesis were to design an evidence-based model for the CPD program, to implement it with teachers, and to study its influence on teachers' knowledge, views, and practice, as well as its impact on students' learning. The program was developed in three consecutive versions: a pilot, first, and second versions. Based on the pilot version (that was not part of this study), we developed the first version of the program in which we studied difficulties in employing the evidence-based and blended-learning approaches. According to our findings, we modified the strategies for enacting these approaches in the second version of the program. The influence of the program on the teachers and students was studied during the enactment of the second version of the program. The model implemented in the second version of the program was characterized by four main design principles: 1. The KI and evidence aspects are acquired simultaneously in an integrated manner. 2. The guidance of the teachers follows the principles of cognitive apprenticeship both in the evidence and the KI aspects. 3. The teachers experience the innovative activities as learners. 4. The program promotes continuity of teachers' learning through a structured "blended learning" approach. The results of our study show that this version of the program achieved its goals; throughout the program the teachers progressed in their knowledge, views, and practice concerning the knowledge integration, and in the evidence and learner-centered aspects. The results also indicated that students improved their knowledge of physics and knowledge integration skills that were developed throughout the program. More specifically, analysis of the teachers' discourse during the second version revealed that the program led to significant changes in teachers' knowledge about their students' knowledge and in teachers' views about the following: 1. the advantages of the KIRs' innovative teaching tool, 2. the "evidence" as a useful resource for evaluating the contribution of the KIRs to students' learning, and more generally, as a powerful tool for investigating students' learning, and for improving practice, and 3. several "learner-centered" pedagogical aspects: the importance and legitimacy of learning from peers, the need to listen carefully to students' ideas and reflections, and the need to investigate students' knowledge using a variety of methods, and to plan the teaching accordingly. Our analysis of the students' worksheets verified the teachers' findings about their students' initial state of knowledge and the improvement of this knowledge as a result of advancing through the KIR phases. When we extended the sample and examined worksheets of additional classes, we found similar findings. We also found that the students were aware of the improvement in their knowledge and attributed this improvement to their working with the KIRs. Two major recommendations emerge from this study: 1. We recommend that KIRs be routinely incorporated into physics teaching. The results show that the KIRs contribute to teachers' practice and to students' learning and support the teachers in becoming more learner-centered in their teaching. 
2. We recommend incorporating an evidence-based approach in long-term programs aimed at bringing about a significant change in teachers' practice. To engage the teachers with the evidence endeavor, it is recommended to introduce them to an innovative teaching tool that they consider important and to evoke their curiosity to find out empirically how the tool influences their students' learning. It is also recommended to engage the teachers in ongoing interactions about their experience in implementing the innovative tools in their classes, through an online platform that provides special, simple online tools. The present study has several limitations that suggest directions for further research, some of which can be pursued with the present set of data, while others require additional study. These directions include a detailed study of individual teachers' professional development, studies of ways to scale up the evidence-based approach and use it effectively in less intensive courses, an extensive study of how students learn with the improved versions of the KIRs that resulted from this study, and further investigation into how the new computerized tools can be utilized in professional development courses. (Abstract shortened by UMI.)
BehavePlus fire modeling system, version 5.0: Variables
Patricia L. Andrews
2009-01-01
This publication has been revised to reflect updates to version 4.0 of the BehavePlus software. It was originally published as the BehavePlus fire modeling system, version 4.0: Variables in July 2008. The BehavePlus fire modeling system is a computer program based on mathematical models that describe wildland fire behavior and effects and the...
Spacecraft Orbit Design and Analysis (SODA). Version 2.0: User's guide
NASA Technical Reports Server (NTRS)
Stallcup, Scott S.; Davis, John S.; Zsoldos, Jeffrey S.
1991-01-01
The Spacecraft Orbit Design and Analysis (SODA) computer program, Version 2.0, is discussed. SODA is a spaceflight mission planning system that consists of six program modules integrated around a common database and user interface. SODA runs on a VAX/VMS computer with an Evans and Sutherland PS300 graphics workstation. In the current version, three program modules produce an interactive three dimensional animation of one or more satellites in planetary orbit. Satellite visibility and sensor coverage capabilities are also provided. Circular and rectangular, off nadir, fixed and scanning sensors are supported. One module produces an interactive three dimensional animation of the solar system. Another module calculates cumulative satellite sensor coverage and revisit time for one or more satellites. Currently, Earth, Moon, and Mars systems are supported for all modules except the solar system module.
Simulating electron energy loss spectroscopy with the MNPBEM toolbox
NASA Astrophysics Data System (ADS)
Hohenester, Ulrich
2014-03-01
Within the MNPBEM toolbox, we show how to simulate electron energy loss spectroscopy (EELS) of plasmonic nanoparticles using a boundary element method approach. The methodology underlying our approach closely follows the concepts developed by García de Abajo and coworkers (Garcia de Abajo, 2010). We introduce two classes, eelsret and eelsstat, that, in combination with our recently developed MNPBEM toolbox, allow for a simple, robust, and efficient computation of EEL spectra and maps. The classes are accompanied by a number of demo programs for EELS simulation of metallic nanospheres, nanodisks, and nanotriangles, and for electron trajectories passing by or penetrating through the metallic nanoparticles. We also discuss how to compute electric fields induced by the electron beam and cathodoluminescence. Catalogue identifier: AEKJ_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKJ_v2_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 38886 No. of bytes in distributed program, including test data, etc.: 1222650 Distribution format: tar.gz Programming language: Matlab 7.11.0 (R2010b). Computer: Any which supports Matlab 7.11.0 (R2010b). Operating system: Any which supports Matlab 7.11.0 (R2010b). RAM: ≥1 GB Classification: 18. Catalogue identifier of previous version: AEKJ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 370 External routines: MESH2D available at www.mathworks.com Does the new version supersede the previous version?: Yes Nature of problem: Simulation of electron energy loss spectroscopy (EELS) for plasmonic nanoparticles. Solution method: Boundary element method using electromagnetic potentials. Reasons for new version: The new version of the toolbox includes two additional classes for the simulation of electron energy loss spectroscopy (EELS) of plasmonic nanoparticles, and corrects a few minor bugs and inconsistencies. Summary of revisions: New classes “eelsstat” and “eelsret” for the simulation of electron energy loss spectroscopy (EELS) of plasmonic nanoparticles have been added. A few minor errors in the implementation of dipole excitation have been corrected. Running time: Depending on surface discretization, between seconds and hours.
ASTROP2 Users Manual: A Program for Aeroelastic Stability Analysis of Propfans
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Lucero, John M.
1996-01-01
This manual describes the input data required for using the second version of the ASTROP2 (Aeroelastic STability and Response Of Propulsion systems - 2 dimensional analysis) computer code. In ASTROP2, version 2.0, the program is divided into two modules: 2DSTRIP, which calculates the structural dynamic information; and 2DASTROP, which calculates the unsteady aerodynamic force coefficients from which the aeroelastic stability can be determined. In the original version of ASTROP2, these two aspects were performed in a single program. The improvements in version 2.0 include an option to account for counter rotation, improved numerical integration, accommodation of non-uniform inflow distribution, and an iterative scheme for flutter frequency convergence. ASTROP2 can be used for flutter analysis of multi-bladed structures such as those found in compressors, turbines, counter rotating propellers or propfans. The analysis combines a two-dimensional, unsteady cascade aerodynamics model and a three-dimensional, normal mode structural model using strip theory. The flutter analysis is formulated in the frequency domain, resulting in an eigenvalue determinant. The flutter frequency and damping can be inferred from the eigenvalues.
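As a rough illustration of the last step described, the following minimal Python sketch reads a modal frequency and damping ratio off complex eigenvalues of the form lambda = sigma + i*omega; the eigenvalues and sign conventions are assumptions for illustration, not ASTROP2 output.
    # Illustrative sketch only: frequency and damping inferred from complex eigenvalues
    # lambda = sigma + i*omega. The eigenvalues below are made up.
    import numpy as np

    eigenvalues = np.array([-3.1 + 210.5j, 0.8 + 305.2j, -12.4 + 515.0j])   # rad/s

    for lam in eigenvalues:
        freq_hz = abs(lam.imag) / (2.0 * np.pi)      # modal frequency
        zeta = -lam.real / abs(lam)                  # viscous damping ratio
        verdict = "stable" if lam.real < 0.0 else "flutter (negative damping)"
        print(f"f = {freq_hz:7.2f} Hz, zeta = {zeta:+.4f} -> {verdict}")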
NASA Technical Reports Server (NTRS)
Pearson, Don; Hamm, Dustin; Kubena, Brian; Weaver, Jonathan K.
2010-01-01
An updated version of the Platform Independent Software Components for the Exploration of Space (PISCES) software library is available. A previous version was reported in Library for Developing Spacecraft-Mission-Planning Software (MSC-22983), NASA Tech Briefs, Vol. 25, No. 7 (July 2001), page 52. To recapitulate: This software provides for Web-based, collaborative development of computer programs for planning trajectories and trajectory- related aspects of spacecraft-mission design. The library was built using state-of-the-art object-oriented concepts and software-development methodologies. The components of PISCES include Java-language application programs arranged in a hierarchy of classes that facilitates the reuse of the components. As its full name suggests, the PISCES library affords platform-independence: The Java language makes it possible to use the classes and application programs with a Java virtual machine, which is available in most Web-browser programs. Another advantage is expandability: Object orientation facilitates expansion of the library through creation of a new class. Improvements in the library since the previous version include development of orbital-maneuver- planning and rendezvous-launch-window application programs, enhancement of capabilities for propagation of orbits, and development of a desktop user interface.
Effect of formal specifications on program complexity and reliability: An experimental study
NASA Technical Reports Server (NTRS)
Goel, Amrit L.; Sahoo, Swarupa N.
1990-01-01
The results are presented of an experimental study undertaken to assess the improvement in program quality obtained by using formal specifications. Specifications in the Z notation were developed for a simple but realistic antimissile system. These specifications were then used by two programmers to develop two versions in C. Another set of three versions in Ada was independently developed from informal specifications in English. A comparison of the reliability and complexity of the resulting programs suggests the advantages of using formal specifications in terms of the number of errors detected and fault avoidance.
NASA Astrophysics Data System (ADS)
Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P.
2003-05-01
The two Numerical Recipes books are marvellous. The principal book, The Art of Scientific Computing, contains program listings for almost every conceivable requirement, and it also contains a well written discussion of the algorithms and the numerical methods involved. The Example Book provides a complete driving program, with helpful notes, for nearly all the routines in the principal book. The first edition of Numerical Recipes: The Art of Scientific Computing was published in 1986 in two versions, one with programs in Fortran, the other with programs in Pascal. There were subsequent versions with programs in BASIC and in C. The second, enlarged edition was published in 1992, again in two versions, one with programs in Fortran (NR(F)), the other with programs in C (NR(C)). In 1996 the authors produced Numerical Recipes in Fortran 90: The Art of Parallel Scientific Computing as a supplement, called Volume 2, with the original (Fortran) version referred to as Volume 1. Numerical Recipes in C++ (NR(C++)) is another version of the 1992 edition. The numerical recipes are also available on a CD ROM: if you want to use any of the recipes, I would strongly advise you to buy the CD ROM. The CD ROM contains the programs in all the languages. When the first edition was published I bought it, and have also bought copies of the other editions as they have appeared. Anyone involved in scientific computing ought to have a copy of at least one version of Numerical Recipes, and there also ought to be copies in every library. If you already have NR(F), should you buy NR(C++) and, if not, which version should you buy? In the preface to Volume 2 of NR(F), the authors say 'C and C++ programmers have not been far from our minds as we have written this volume, and we think that you will find that time spent in absorbing its principal lessons will be amply repaid in the future as C and C++ eventually develop standard parallel extensions'. In the preface and introduction to NR(C++), the authors point out some of the problems in the use of C++ in scientific computing. I have not found any mention of parallel computing in NR(C++). Fortran has quite a lot going for it. As someone who has used it in most of its versions from Fortran II, I have seen it develop and leave behind other languages promoted by various enthusiasts: who now uses Algol or Pascal? I think it unlikely that C++ will disappear: it was devised as a systems language, and can also be used for other purposes such as scientific computing. It is possible that Fortran will disappear, but Fortran has the strengths that it can develop, that there are extensive Fortran subroutine libraries, and that it has been developed for parallel computing. To argue with programmers as to which is the best language to use is sterile. If you wish to use C++, then buy NR(C++), but you should also look at volume 2 of NR(F). If you are a Fortran programmer, then make sure you have NR(F), volumes 1 and 2. But whichever language you use, make sure you have one version or the other, and the CD ROM. The Example Book provides listings of complete programs to run nearly all the routines in NR, frequently based on cases where an analytical solution is available. It is helpful when developing a new program incorporating an unfamiliar routine to see that routine actually working, and this is what the programs in the Example Book achieve. I started teaching computational physics before Numerical Recipes was published.
If I were starting again, I would make heavy use of both The Art of Scientific Computing and of the Example Book. Every computational physics teaching laboratory should have both volumes: the programs in the Example Book are included on the CD ROM, but the extra commentary in the book itself is of considerable value. P Borcherds
MOLECULAR DESIGNER: an interactive program for the display of protein structure on the IBM-PC.
Hannon, G J; Jentoft, J E
1985-09-01
A BASIC interactive graphics program has been developed for the IBM-PC which utilizes the graphics capabilities of that computer to display and manipulate protein structure from coordinates. Structures may be generated from typed files or from Brookhaven National Laboratory's Protein Data Bank data tapes. Once displayed, images may be rotated, translated and expanded to any desired size. Figures may be viewed as ball-and-stick or space-filling models. Calculated multiple-point perspective may also be added to the display. Docking manipulations are possible since more than a single figure may be displayed and manipulated simultaneously. Further, stereo images and red/blue three-dimensional images may be generated using the accompanying DESIPLOT program and an HP-7475A plotter. A version of the program is also currently available for the Apple Macintosh. Full implementation on the Macintosh requires 512 K and at least one disk drive. Otherwise this version is essentially identical to the IBM-PC version described herein.
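As a rough illustration of the display operations described (rotation, translation and expansion of a coordinate set), the following minimal Python sketch applies them to a small Nx3 array; the coordinates are made up rather than read from Protein Data Bank files, and the routine names are not those of the program.
    # Illustrative sketch only: rotate, expand and translate an N x 3 block of atomic coordinates.
    import numpy as np

    def rotate_z(xyz, angle_deg):
        a = np.radians(angle_deg)
        rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                        [np.sin(a),  np.cos(a), 0.0],
                        [0.0,        0.0,       1.0]])
        return xyz @ rot.T

    atoms = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.5, 0.0],
                      [0.0, 0.0, 2.0]])              # made-up coordinates, angstroms
    view = rotate_z(atoms, 30.0) * 1.2 + np.array([0.5, -0.5, 0.0])   # rotate, expand, translate
    print(view.round(3))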
Huang, Kuo -Ling; Mehrotra, Sanjay
2016-11-08
We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).
ESDAPT - APT PROGRAMMING EDITOR AND INTERPRETER
NASA Technical Reports Server (NTRS)
Premack, T.
1994-01-01
ESDAPT is a graphical programming environment for developing APT (Automatically Programmed Tool) programs for controlling numerically controlled machine tools. ESDAPT has a graphical user interface that provides the user with an APT syntax sensitive text editor and windows for displaying geometry and tool paths. APT geometry statement can also be created using menus and screen picks. ESDAPT interprets APT geometry statements and displays the results in its view windows. Tool paths are generated by batching the APT source to an APT processor (COSMIC P-APT recommended). The tool paths are then displayed in the view windows. Hardcopy output of the view windows is in color PostScript format. ESDAPT is written in C-language, yacc, lex, and XView for use on Sun4 series computers running SunOS. ESDAPT requires 4Mb of disk space, 7Mb of RAM, and MIT's X Window System, Version 11 Release 4, or OpenWindows version 3 for execution. Program documentation in PostScript format and an executable for OpenWindows version 3 are provided on the distribution media. The standard distribution medium for ESDAPT is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. This program was developed in 1992.
Generating heavy particles with energy and momentum conservation
NASA Astrophysics Data System (ADS)
Mereš, Michal; Melo, Ivan; Tomášik, Boris; Balek, Vladimír; Černý, Vladimír
2011-12-01
We propose a novel algorithm, called REGGAE, for the generation of momenta of a given sample of particle masses, evenly distributed in Lorentz-invariant phase space and obeying energy and momentum conservation. In comparison to other existing algorithms, REGGAE is designed for the use in multiparticle production in hadronic and nuclear collisions where many hadrons are produced and a large part of the available energy is stored in the form of their masses. The algorithm uses a loop simulating multiple collisions which lead to production of configurations with reasonably large weights. Program summaryProgram title: REGGAE (REscattering-after-Genbod GenerAtor of Events) Catalogue identifier: AEJR_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEJR_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1523 No. of bytes in distributed program, including test data, etc.: 9608 Distribution format: tar.gz Programming language: C++ Computer: PC Pentium 4, though no particular tuning for this machine was performed. Operating system: Originally designed on Linux PC with g++, but it has been compiled and ran successfully on OS X with g++ and MS Windows with Microsoft Visual C++ 2008 Express Edition, as well. RAM: This depends on the number of particles which are generated. For 10 particles like in the attached example it requires about 120 kB. Classification: 11.2 Nature of problem: The task is to generate momenta of a sample of particles with given masses which obey energy and momentum conservation. Generated samples should be evenly distributed in the available Lorentz-invariant phase space. Solution method: In general, the algorithm works in two steps. First, all momenta are generated with the GENBOD algorithm. There, particle production is modeled as a sequence of two-body decays of heavy resonances. After all momenta are generated this way, they are reshuffled. Each particle undergoes a collision with some other partner such that in the pair center of mass system the new directions of momenta are distributed isotropically. After each particle collides only a few times, the momenta are distributed evenly across the whole available phase space. Starting with GENBOD is not essential for the procedure but it improves the performance. Running time: This depends on the number of particles and number of events one wants to generate. On a LINUX PC with 2 GHz processor, generation of 1000 events with 10 particles each takes about 3 s.
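As a rough illustration of the reshuffling step described in the solution method, the following minimal Python sketch takes one pair of four-momenta, boosts to the pair centre-of-mass frame, redraws an isotropic direction with the momentum magnitude fixed by the pair invariant mass, and boosts back, so that the pair's total energy and momentum are conserved exactly; units take c = 1 and the starting momenta are made up rather than GENBOD output.
    # Illustrative sketch only (c = 1): one pairwise "collision" of the kind used to reshuffle
    # momenta. The pair's total four-momentum is conserved exactly.
    import numpy as np

    rng = np.random.default_rng(1)

    def boost(p, beta):
        # boost four-momentum p = (E, px, py, pz) by velocity vector beta
        b2 = float(beta @ beta)
        if b2 < 1e-16:
            return p.copy()
        gamma = 1.0 / np.sqrt(1.0 - b2)
        bp = float(beta @ p[1:])
        E = gamma * (p[0] + bp)
        pvec = p[1:] + ((gamma - 1.0) * bp / b2 + gamma * p[0]) * beta
        return np.concatenate(([E], pvec))

    def collide(p1, p2, m1, m2):
        P = p1 + p2
        beta_cm = P[1:] / P[0]                                   # pair CM velocity in the lab
        s = P[0] ** 2 - float(P[1:] @ P[1:])                     # pair invariant mass squared
        k = np.sqrt((s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2)) / (2.0 * np.sqrt(s))
        cos_t, phi = rng.uniform(-1.0, 1.0), rng.uniform(0.0, 2.0 * np.pi)
        sin_t = np.sqrt(1.0 - cos_t ** 2)
        n = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
        q1 = np.concatenate(([np.sqrt(m1 ** 2 + k ** 2)],  k * n))
        q2 = np.concatenate(([np.sqrt(m2 ** 2 + k ** 2)], -k * n))
        return boost(q1, beta_cm), boost(q2, beta_cm)            # back to the lab frame

    p1 = np.array([1.2, 0.3, 0.0, 0.5])                          # (E, px, py, pz), GeV, made up
    p2 = np.array([0.9, -0.1, 0.2, 0.0])
    m1 = np.sqrt(p1[0] ** 2 - p1[1:] @ p1[1:])
    m2 = np.sqrt(p2[0] ** 2 - p2[1:] @ p2[1:])
    q1, q2 = collide(p1, p2, m1, m2)
    print((p1 + p2) - (q1 + q2))                                 # ~0: conserved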
Line-by-line spectroscopic simulations on graphics processing units
NASA Astrophysics Data System (ADS)
Collange, Sylvain; Daumas, Marc; Defour, David
2008-01-01
We report here on software that performs line-by-line spectroscopic simulations on gases. Elaborate models (such as narrow band and correlated-K) are accurate and efficient for bands where various components are not simultaneously and significantly active. Line-by-line is probably the most accurate model in the infrared for blends of gases that contain high proportions of H 2O and CO 2 as this was the case for our prototype simulation. Our implementation on graphics processing units sustains a speedup close to 330 on computation-intensive tasks and 12 on memory intensive tasks compared to implementations on one core of high-end processors. This speedup is due to data parallelism, efficient memory access for specific patterns and some dedicated hardware operators only available in graphics processing units. It is obtained leaving most of processor resources available and it would scale linearly with the number of graphics processing units in parallel machines. Line-by-line simulation coupled with simulation of fluid dynamics was long believed to be economically intractable but our work shows that it could be done with some affordable additional resources compared to what is necessary to perform simulations on fluid dynamics alone. Program summaryProgram title: GPU4RE Catalogue identifier: ADZY_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZY_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 62 776 No. of bytes in distributed program, including test data, etc.: 1 513 247 Distribution format: tar.gz Programming language: C++ Computer: x86 PC Operating system: Linux, Microsoft Windows. Compilation requires either gcc/g++ under Linux or Visual C++ 2003/2005 and Cygwin under Windows. It has been tested using gcc 4.1.2 under Ubuntu Linux 7.04 and using Visual C++ 2005 with Cygwin 1.5.24 under Windows XP. RAM: 1 gigabyte Classification: 21.2 External routines: OpenGL ( http://www.opengl.org) Nature of problem: Simulating radiative transfer on high-temperature high-pressure gases. Solution method: Line-by-line Monte-Carlo ray-tracing. Unusual features: Parallel computations are moved to the GPU. Additional comments: nVidia GeForce 7000 or ATI Radeon X1000 series graphics processing unit is required. Running time: A few minutes.
ALCBEAM - Neutral beam formation and propagation code for beam-based plasma diagnostics
NASA Astrophysics Data System (ADS)
Bespamyatnov, I. O.; Rowan, W. L.; Liao, K. T.
2012-03-01
ALCBEAM is a new three-dimensional neutral beam formation and propagation code. It was developed to support the beam-based diagnostics installed on the Alcator C-Mod tokamak. The purpose of the code is to provide reliable estimates of the local beam equilibrium parameters, such as beam energy fractions, density profiles and excitation populations. The code effectively unifies the ion beam formation, extraction and neutralization processes with beam attenuation and excitation in plasma and neutral gas and beam stopping by the beam apertures. This paper describes the physical processes treated by the code, along with the computational methods employed. The description is concluded by an example simulation of beam penetration into the plasma of Alcator C-Mod. The code is being used successfully on the Alcator C-Mod tokamak and is expected to be valuable in support of beam-based diagnostics in most other tokamak environments. Program summaryProgram title: ALCBEAM Catalogue identifier: AEKU_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 66 459 No. of bytes in distributed program, including test data, etc.: 7 841 051 Distribution format: tar.gz Programming language: IDL Computer: Workstation, PC Operating system: Linux RAM: 1 GB Classification: 19.2 Nature of problem: Neutral beams are commonly used to heat and/or diagnose high-temperature magnetically confined laboratory plasmas. An accurate neutral beam characterization is required for beam-based measurements of plasma properties. Beam parameters such as density distribution, energy composition, and atomic excited populations of the beam atoms need to be known. Solution method: A neutral beam is initially formed as an ion beam, which is extracted from the ion source by high voltage applied to the extraction and accelerating grids. The current distribution of a single beamlet emitted from a single pore of the IOS depends on the shape of the plasma boundary in the emission region. The total beam extracted by the IOS is calculated at every point of a 3D mesh as the sum of the contributions from each grid pore. The code effectively unifies the ion beam formation, extraction and neutralization processes with neutral beam attenuation and excitation in plasma and neutral gas and beam stopping by the beam apertures. Running time: 10 min for a standard run.
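As a rough illustration of the superposition step in the solution method (the total beam at a point is the sum of contributions from beamlets emitted by individual grid pores), the following minimal Python sketch sums Gaussian beamlets from a small pore array; the beamlet shape, divergence and pore layout are invented placeholders, not the beamlet model used in ALCBEAM.
    # Illustrative sketch only: beam density at a point as the sum of beamlet contributions
    # from individual aperture pores. Gaussian beamlets and the pore grid are placeholders.
    import numpy as np

    def beamlet_density(x, y, z, x0, y0, divergence=0.02, sigma0=0.005):
        # Gaussian beamlet launched along +z from pore (x0, y0); width grows with distance
        sigma = sigma0 + divergence * z
        r2 = (x - x0) ** 2 + (y - y0) ** 2
        return np.exp(-0.5 * r2 / sigma ** 2) / (2.0 * np.pi * sigma ** 2)

    pores = [(i * 0.01, j * 0.01) for i in range(-2, 3) for j in range(-2, 3)]   # 5 x 5 pore grid, m
    density_on_axis = sum(beamlet_density(0.0, 0.0, 1.0, x0, y0) for x0, y0 in pores)
    print(round(density_on_axis, 2), "arbitrary units at z = 1 m on axis")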
NASA Technical Reports Server (NTRS)
Arya, Vinod K.; Halford, Gary R. (Technical Monitor)
2003-01-01
This manual presents computer programs FLAPS for characterizing and predicting fatigue and creep-fatigue resistance of metallic materials in the high-temperature, long-life regime for isothermal and nonisothermal fatigue. The programs use the Total Strain version of Strainrange Partitioning (TS-SRP), and several other life prediction methods described in this manual. The user should be thoroughly familiar with the TS-SRP and these life prediction methods before attempting to use any of these programs. Improper understanding can lead to incorrect use of the method and erroneous life predictions. An extensive database has also been developed in a parallel effort. The database is probably the largest source of high-temperature, creep-fatigue test data available in the public domain and can be used with other life-prediction methods as well. This users' manual, software, and database are all in the public domain and can be obtained by contacting the author. The Compact Disk (CD) accompanying this manual contains an executable file for the FLAPS program, two datasets required for the example problems in the manual, and the creep-fatigue data in a format compatible with these programs.
NASA Technical Reports Server (NTRS)
Cullimore, B.
1994-01-01
SINDA, the Systems Improved Numerical Differencing Analyzer, is a software system for solving lumped parameter representations of physical problems governed by diffusion-type equations. SINDA was originally designed for analyzing thermal systems represented in electrical analog, lumped parameter form, although its use may be extended to include other classes of physical systems which can be modeled in this form. As a thermal analyzer, SINDA can handle such interrelated phenomena as sublimation, diffuse radiation within enclosures, transport delay effects, and sensitivity analysis. FLUINT, the FLUid INTegrator, is an advanced one-dimensional fluid analysis program that solves arbitrary fluid flow networks. The working fluids can be single phase vapor, single phase liquid, or two phase. The SINDA'85/FLUINT system permits the mutual influences of thermal and fluid problems to be analyzed. The SINDA system consists of a programming language, a preprocessor, and a subroutine library. The SINDA language is designed for working with lumped parameter representations and finite difference solution techniques. The preprocessor accepts programs written in the SINDA language and converts them into standard FORTRAN. The SINDA library consists of a large number of FORTRAN subroutines that perform a variety of commonly needed actions. The use of these subroutines can greatly reduce the programming effort required to solve many problems. A complete run of a SINDA'85/FLUINT model is a four step process. First, the user's desired model is run through the preprocessor, which writes out data files for the processor to read and translates the user's program code. Second, the translated code is compiled. The third step requires linking the user's code with the processor library. Finally, the processor is executed. SINDA'85/FLUINT supports models with up to 20,000 nodes, 100,000 conductors, 100 thermal submodels, and 10 fluid submodels. SINDA'85/FLUINT can also model two phase flow, capillary devices, user defined fluids, gravity and acceleration body forces on a fluid, and variable volumes. SINDA'85/FLUINT offers the following numerical solution techniques: the explicit method uses the forward-difference approximation, and the implicit method uses the Crank-Nicolson approximation. The program allows simulation of non-uniform heating and facilitates modeling thin-walled heat exchangers. The ability to model non-equilibrium behavior within two-phase volumes is included. Recent improvements to the program were made in modeling real evaporator-pumps and other capillary-assist evaporators. SINDA'85/FLUINT is available by license for a period of ten (10) years to approved licensees. The licensed program product includes the source code and one copy of the supporting documentation. Additional copies of the documentation may be purchased separately at any time. SINDA'85/FLUINT is written in FORTRAN 77. Version 2.3 has been implemented on Cray series computers running UNICOS, CONVEX computers running CONVEX OS, and DEC RISC computers running ULTRIX. Binaries are included with the Cray version only. The Cray version of SINDA'85/FLUINT also contains SINGE, an additional graphics program developed at Johnson Space Center. Both source and executable code are provided for SINGE. Users wishing to create their own SINGE executable will also need the NASA Device Independent Graphics Library (NASADIG, previously known as SMDDIG; UNIX version, MSC-22001).
The Cray and CONVEX versions of SINDA'85/FLUINT are available on 9-track 1600 BPI UNIX tar format magnetic tapes. The CONVEX version is also available on a .25 inch streaming magnetic tape cartridge in UNIX tar format. The DEC RISC ULTRIX version is available on a TK50 magnetic tape cartridge in UNIX tar format. SINDA was developed in 1971, and first had fluid capability added in 1975. SINDA'85/FLUINT version 2.3 was released in 1990.
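As a rough illustration of the two time-integration schemes named above, the following minimal Python sketch advances a single lumped thermal node, C dT/dt = G (T_env - T), with the forward-difference explicit update and with Crank-Nicolson; the capacitance, conductance and temperatures are made-up values, not a SINDA'85/FLUINT model.
    # Illustrative sketch only: forward-difference explicit vs Crank-Nicolson for a single
    # lumped thermal node, C dT/dt = G (T_env - T). All values are made up.
    import math

    C, G, T_env = 500.0, 5.0, 300.0          # J/K, W/K, K
    dt, steps = 10.0, 60                     # s
    tau = C / G                              # thermal time constant, s

    T_exp = T_cn = 400.0                     # initial node temperature, K
    for _ in range(steps):
        T_exp = T_exp + dt * (T_env - T_exp) / tau                                        # explicit
        T_cn = ((1.0 - 0.5 * dt / tau) * T_cn + (dt / tau) * T_env) / (1.0 + 0.5 * dt / tau)  # Crank-Nicolson
    exact = T_env + (400.0 - T_env) * math.exp(-steps * dt / tau)
    print(round(T_exp, 3), round(T_cn, 3), round(exact, 3))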
Sound Medication Therapy Management Programs, Version 2.0 with validation study.
2008-01-01
The Academy of Managed Care Pharmacy (AMCP, the Academy) contracted with the National Committee for Quality Assurance (NCQA) to conduct a field study to validate and assess the 2006 Sound Medication Therapy Management Programs, Version 1.0 document. Version 1.0 posits several principles of sound medication therapy management (MTM) programs: they (1) recruit patients whose data show they may need assistance with managing medications; (2) have health professionals who intervene with patients and their physicians to improve medication regimens; and (3) measure their results. The validation study determined the extent to which the principles identified in version 1.0 are incorporated in MTM programs. The method was designed to determine to what extent the important features and operational elements of sound MTM programs as described in version 1.0 are (1) acceptable and seen as comprehensive to users, (2) incorporated into MTM programs in the field, (3) reflective of the consensus group's intentions, and (4) in need of modification or updating. NCQA first conducted Phase One, in which NCQA gathered perspectives on the principles in the consensus document from a mixed group of stakeholders representing both providers and users of MTM programs. Phase Two involved a deeper analysis of existing programs related to the consensus document, in which NCQA conducted a Web-based survey of 20 varied MTM programs and conducted in-depth site visits with 5 programs. NCQA selected programs offered by a range of MTM-providing organizations -- health plans, pharmacy benefit management companies, disease management organizations, and stand-alone MTM providers. NCQA analyzed the results of both phases. The Phase Two survey asked specific questions of the programs and found that some programs perform beyond the principles listed in version 1.0. NCQA found that none of the elements of the consensus document should be eliminated because programs cannot perform them, although NCQA suggested some areas where the document could be more expansive or more specific, given the state of MTM operations in the field. The important features and operational elements in the document were categorized into the following 3 overall categories, which NCQA used to structure the survey and conduct the site visits in Phase Two: (1) eligibility and enrollment, (2) operations, and (3) quality management. NCQA found that the original consensus document was realistic in identifying the elements of sound MTM. In the current project, NCQA's purpose was not to make judgments about the effectiveness of MTM programs in general or any individual program in particular. NCQA recommended that the consensus document could be made stronger and more specific in 3 areas: (1) specifically state that the Patient Identification and Recruitment section advocates use of various eligibility criteria that may include, but are not limited to, Medicare-defined MTM eligibility criteria; (2) reframe or remove the statement in Appendix A of the consensus document that the preferred modality for MTM is face-to-face interaction between patient and pharmacist, unless there are comparative data to support it as currently written; and (3) specifically recommend that programs measure performance across the entire populations in their plans in addition to measuring results for those patients selected into MTM. This will make benchmarking among programs possible and will lead to substantiated best practices in this growing field.
Torak, L.J.
1993-01-01
A MODular Finite-Element, digital-computer program (MODFE) was developed to simulate steady or unsteady-state, two-dimensional or axisymmetric ground-water-flow. The modular structure of MODFE places the computationally independent tasks that are performed routinely by digital-computer programs simulating ground-water flow into separate subroutines, which are executed from the main program by control statements. Each subroutine consists of complete sets of computations, or modules, which are identified by comment statements, and can be modified by the user without affecting unrelated computations elsewhere in the program. Simulation capabilities can be added or modified by either adding or modifying subroutines that perform specific computational tasks, and the modular-program structure allows the user to create versions of MODFE that contain only the simulation capabilities that pertain to the ground-water problem of interest. MODFE is written in a Fortran programming language that makes it virtually device independent and compatible with desk-top personal computers and large mainframes. MODFE uses computer storage and execution time efficiently by taking advantage of symmetry and sparseness within the coefficient matrices of the finite-element equations. Parts of the matrix coefficients are computed and stored as single-subscripted variables, which are assembled into a complete coefficient just prior to solution. Computer storage is reused during simulation to decrease storage requirements. Descriptions of subroutines that execute the computational steps of the modular-program structure are given in tables that cross reference the subroutines with particular versions of MODFE. Programming details of linear and nonlinear hydrologic terms are provided. Structure diagrams for the main programs show the order in which subroutines are executed for each version and illustrate some of the linear and nonlinear versions of MODFE that are possible. Computational aspects of changing stresses and boundary conditions with time and of mass-balance and error terms are given for each hydrologic feature. Program variables are listed and defined according to their occurrence in the main programs and in subroutines. Listings of the main programs and subroutines are given.
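The storage scheme described above, in which parts of the matrix coefficients are computed as single-subscripted variables and assembled into complete coefficients just before solution, can be illustrated with a small sketch. The Python fragment below is not MODFE code; it is a minimal, hypothetical illustration (one-dimensional linear elements, with all names invented) of assembling a symmetric finite-element matrix while storing only its upper triangle in a one-dimensional array.

```python
import numpy as np

def assemble_upper_triangle(n_nodes, element_conductance):
    """Assemble a symmetric 'stiffness' matrix, storing only the upper
    triangle (including the diagonal) as a single-subscripted array."""
    def idx(i, j):
        # position of entry (i, j), j >= i, in the packed upper-triangle array
        return i * n_nodes - i * (i - 1) // 2 + (j - i)

    packed = np.zeros(n_nodes * (n_nodes + 1) // 2)
    for e, k in enumerate(element_conductance):        # element e joins nodes e and e+1
        ke = k * np.array([[1.0, -1.0], [-1.0, 1.0]])  # 2x2 element matrix
        nodes = (e, e + 1)
        for a in range(2):
            for b in range(2):
                i, j = nodes[a], nodes[b]
                if j >= i:                              # symmetry: keep the upper triangle only
                    packed[idx(i, j)] += ke[a, b]
    return packed

# three linear elements with unit conductance between four nodes
print(assemble_upper_triangle(4, [1.0, 1.0, 1.0]))
```

The packed array holds n(n+1)/2 entries instead of n^2, which is the kind of saving the abstract refers to when it mentions exploiting symmetry and sparseness of the coefficient matrices.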
NASA Technical Reports Server (NTRS)
Brauer, G. L.; Cornick, D. E.; Stevenson, R.
1977-01-01
The capabilities and applications of the three-degree-of-freedom (3DOF) and six-degree-of-freedom (6DOF) versions of the Program to Optimize Simulated Trajectories (POST) are summarized. The document supplements the detailed program manuals by providing additional information that motivates and clarifies the basic capabilities, input procedures, applications, and computer requirements of these programs. This information will enable prospective users to evaluate the programs and to determine whether they are applicable to their problems, and gives managerial personnel enough detail to evaluate the programs' capabilities. The report describes the POST structure, formulation, input and output procedures, sample cases, and computer requirements, and provides answers to basic questions concerning planet and vehicle modeling, simulation accuracy, optimization capabilities, and general input rules. Several sample cases are presented.
UPEML Version 3.0: A machine-portable CDC update emulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mehlhorn, T.A.; Haill, T.A.
1992-04-01
UPEML is a machine-portable program that emulates a subset of the functions of the standard CDC Update. Machine-portability has been achieved by conforming to ANSI standards for Fortran-77. UPEML is compact and fairly efficient; however, it only allows a restricted syntax as compared with the CDC Update. This program was written primarily to facilitate the use of CDC-based scientific packages on alternate computer systems such as the VAX/VMS mainframes and UNIX workstations. UPEML has also been successfully used on the multiprocessor ELXSI, on CRAYs under both UNICOS and CTSS operating systems, and on Sun, HP, Stardent and IBM workstations. UPEML was originally released with the ITS electron/photon Monte Carlo transport package, which was developed on a CDC-7600 and makes extensive use of conditional file structure to combine several problem geometry and machine options into a single program file. UPEML 3.0 is an enhanced version of the original code and is being independently released for use at any installation or with any code package. Version 3.0 includes enhanced error checking, full ASCII character support, a program library audit capability, and a partial update option in which only selected or modified decks are written to the complete file. Version 3.0 also checks for overlapping corrections, allows processing of nested calls to common decks, and allows the use of alternate files in READ and ADDFILE commands. Finally, UPEML Version 3.0 allows the assignment of input and output files at runtime on the control line.
XTALOPT version r11: An open-source evolutionary algorithm for crystal structure prediction
NASA Astrophysics Data System (ADS)
Avery, Patrick; Falls, Zackary; Zurek, Eva
2018-01-01
Version 11 of XTALOPT, an evolutionary algorithm for crystal structure prediction, has now been made available for download from the CPC library or the XTALOPT website, http://xtalopt.github.io. Whereas the previous versions of XTALOPT were published under the GNU Public License (GPL), the current version is made available under the 3-Clause BSD License, which is an open source license that is recognized by the Open Source Initiative. Importantly, the new version can be executed via a command line interface (i.e., it does not require the use of a Graphical User Interface). Moreover, the new version is written as a stand-alone program, rather than an extension to AVOGADRO.
76 FR 65428 - Classification and Program Review
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-21
... review will be an abbreviated program review meant to focus on an inmate's programming activities. This... during his/her incarceration. The plan will ordinarily include work and programming activities to help... an inmate's programming activities. This shortened version of the more thorough program review will...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-31
...We are giving notice of changes to the Program Standards for the chronic wasting disease (CWD) herd certification program. The CWD herd certification program is a voluntary, cooperative program that establishes minimum requirements for the interstate movement of farmed or captive cervids, provisions for participating States to administer Approved State CWD Herd Certification Programs, and provisions for participating herds to become certified as having a low risk of being infected with CWD. The Program Standards provide optional guidance, explanation, and clarification on how to meet the requirements for interstate movement and for the Herd Certification Programs. Recently, we convened a group of State, laboratory, and industry representatives to discuss possible changes to the current Program Standards. The revised Program Standards reflect these discussions, and we believe the revised version will improve understanding of the program among State and industry cooperators. We are making the revised version of the Program Standards available for review and comment.
Monte Carlo Shower Counter Studies
NASA Technical Reports Server (NTRS)
Snyder, H. David
1991-01-01
Activities and accomplishments related to the Monte Carlo shower counter studies are summarized. A tape of the VMS version of the GEANT software was obtained and installed on the central computer at Gallaudet University. Due to difficulties encountered in updating this VMS version, a decision was made to switch to the UNIX version of the package. This version was installed and used to generate the set of data files currently accessed by various analysis programs. The GEANT software was used to write files of data for positron and proton showers. Showers were simulated for a detector consisting of 50 alternating layers of lead and scintillator. Each file consisted of 1000 events at each of the following energies: 0.1, 0.5, 2.0, 10, 44, and 200 GeV. Data analysis activities related to clustering, chi square, and likelihood analyses are summarized. Source code for the GEANT user subprograms and data analysis programs is provided along with example data plots.
NASA Astrophysics Data System (ADS)
Ward, A. J.; Pendry, J. B.
2000-06-01
In this paper we present an updated version of our ONYX program for calculating photonic band structures using a non-orthogonal finite difference time domain method. This new version employs the same transparent formalism as the first version with the same capabilities for calculating photonic band structures or causal Green's functions but also includes extra subroutines for the calculation of transmission and reflection coefficients. Both the electric and magnetic fields are placed onto a discrete lattice by approximating the spatial and temporal derivatives with finite differences. This results in discrete versions of Maxwell's equations which can be used to integrate the fields forwards in time. The time required for a calculation using this method scales linearly with the number of real space points used in the discretization, so the technique is ideally suited to handling systems with large and complicated unit cells.
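The scheme summarized above, fields on a discrete lattice, derivatives replaced by finite differences, and the fields integrated forward in time at a cost that grows linearly with the number of lattice points, can be illustrated in one dimension. The sketch below is not ONYX code (ONYX uses a non-orthogonal formulation in three dimensions); it is a minimal Python illustration of a standard leapfrog update in vacuum with normalized units, all parameter values invented.

```python
import numpy as np

# 1-D vacuum finite-difference time-domain sketch in normalized units
# (c = 1, dx = 1, dt = courant * dx); not the non-orthogonal ONYX scheme.
nx, nt, courant = 200, 400, 0.5
ez = np.zeros(nx)        # electric field at integer grid points
hy = np.zeros(nx - 1)    # magnetic field at half-integer grid points

for n in range(nt):
    hy += courant * np.diff(ez)         # update H from the spatial difference of E
    ez[1:-1] += courant * np.diff(hy)   # update interior E from the spatial difference of H
    ez[nx // 4] += np.exp(-((n - 40) / 12.0) ** 2)   # soft Gaussian source at one grid point

print("peak |Ez| after time stepping:", np.abs(ez).max())
```

Each time step touches every lattice point once, which is why the cost scales linearly with the number of real-space points, as stated in the abstract.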
A second generation experiment in fault-tolerant software
NASA Technical Reports Server (NTRS)
Knight, J. C.
1986-01-01
Information was collected on the efficacy of fault-tolerant software by conducting two large-scale controlled experiments. In the first, an empirical study of multi-version software (MVS) was conducted. The second experiment is an empirical evaluation of self testing as a method of error detection (STED). The purpose of the MVS experiment was to obtain empirical measurement of the performance of multi-version systems. Twenty versions of a program were prepared at four different sites under reasonably realistic development conditions from the same specifications. The purpose of the STED experiment was to obtain empirical measurements of the performance of assertions in error detection. Eight versions of a program were modified to include assertions at two different sites under controlled conditions. The overall structure of the testing environment for the MVS experiment and its status are described. Work to date in the STED experiment is also presented.
NASA Technical Reports Server (NTRS)
1998-01-01
In 1966, MacNeal-Schwendler Corporation (MSC) was awarded a contract by NASA to develop a general purpose structural analysis program dubbed NASTRAN (NASA structural analysis). The first operational version was delivered in 1969. In 1982, MSC procured the rights to market their subsequent version of NASTRAN to industry as a problem solver for applications ranging from acoustics to heat transfer. Known today as MSC/NASTRAN, the program has thousands of users worldwide. NASTRAN is also distributed through COSMIC.
PCACE-Personal-Computer-Aided Cabling Engineering
NASA Technical Reports Server (NTRS)
Billitti, Joseph W.
1987-01-01
PCACE computer program developed to provide inexpensive, interactive system for learning and using engineering approach to interconnection systems. Basically a database system that stores information as files of individual connectors and handles wiring information in circuit groups stored as records. Directly emulates typical manual engineering methods of handling data, thus making interface between user and program very natural. Apple version written in P-Code Pascal; IBM PC version written in TURBO Pascal 3.0.
Users guide for ENVSTD program Version 2. 0 and LTGSTD program Version 2. 0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawley, D.B.; Riesen, P.K.; Briggs, R.S.
1989-02-01
On January 30, 1989, the US Department of Energy (DOE) promulgated 10 CFR Part 435, Subpart A, an Interim Rule entitled ''Energy Conservation Voluntary Performance Standards for New Commercial and Multi-Family High Rise Residential Buildings; Mandatory for New Federal Buildings.'' As a consequence, federal agencies must design all future federal commercial and multifamily high rise residential buildings in accordance with the Standards, or show that their current standards already meet or exceed the energy-efficiency requirements of the Standards. Although these newly enacted Standards do not regulate the design of nonfederal buildings, DOE recommends that all design professionals use the Standards as guidelines for designing energy-conserving buildings. To encourage private sector use, the Standards were presented in the January 30, 1989, Federal Register in the format typical of commercial standards rather than a federal regulation. As a further help, DOE supported the development of various microcomputer programs to ease the use of the Standards. Two of these programs, ENVSTD (Version 2.0) and LTGSTD (Version 2.0), are detailed in this users guide and provided on the accompanying diskette. This package, developed by Pacific Northwest Laboratory (PNL), is intended to facilitate the designer's use of the Standards dealing specifically with a building's envelope and lighting system designs. Using these programs will greatly simplify the designer's task of performing the sometimes complex calculations needed to determine a design's compliance with the Standards. 3 refs., 6 figs.
A computer program (MACPUMP) for interactive aquifer-test analysis
Day-Lewis, F. D.; Person, M.A.; Konikow, Leonard F.
1995-01-01
This report introduces MACPUMP (Version 1.0), an aquifer-test-analysis package for use with Macintosh computers. The report outlines the input-data format, describes the solutions encoded in the program, explains the menu-items, and offers a tutorial illustrating the use of the program. The package reads list-directed aquifer-test data from a file, plots the data to the screen, generates and plots type curves for several different test conditions, and allows mouse-controlled curve matching. MACPUMP features pull-down menus, a simple text viewer for displaying data-files, and optional on-line help windows. This version includes the analytical solutions for nonleaky and leaky confined aquifers, using both type curves and straight-line methods, and for the analysis of single-well slug tests using type curves. An executable version of the code and sample input data sets are included on an accompanying floppy disk.
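One of the analytical solutions mentioned above, the nonleaky confined-aquifer (Theis) solution used for type-curve matching, is compact enough to sketch. The fragment below is not MACPUMP code; it is a hedged Python illustration, with invented parameter values, of the drawdown formula s = Q W(u) / (4 pi T) with u = r^2 S / (4 T t), where the well function W(u) is the exponential integral E1(u).

```python
import numpy as np
from scipy.special import exp1   # E1(u), the Theis well function W(u)

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s (m) at radius r (m) and time t (s) for pumping rate Q (m^3/s),
    transmissivity T (m^2/s), and storativity S (dimensionless)."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# hypothetical test: Q = 1e-2 m^3/s, T = 1e-3 m^2/s, S = 1e-4, observation well at r = 30 m
times = np.array([60.0, 600.0, 6000.0, 60000.0])   # seconds
print(theis_drawdown(1e-2, 1e-3, 1e-4, 30.0, times))
```

Plotting such computed drawdowns against 1/u on log-log axes reproduces the type curve that a program like MACPUMP lets the user match to field data.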
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Kuo -Ling; Mehrotra, Sanjay
We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).
A formative evaluation of the SWITCH® obesity prevention program: print versus online programming.
Welk, Gregory J; Chen, Senlin; Nam, Yoon Ho; Weber, Tara E
2015-01-01
SWITCH® is an evidence-based childhood obesity prevention program that works through schools to impact parenting practices. The present study was designed as a formative evaluation to test whether an online version of SWITCH® would work equivalently to the established print version. Ten elementary schools were matched by socio-economic status and randomly assigned to receive either the print (n = 5) or online (n = 5) version. A total of 211 children from 22 third-grade classrooms were guided through the 4-month program by a team of program leaders working in cooperation with the classroom teachers. Children were tasked with completing weekly SWITCH® Trackers with their parents to monitor goal setting efforts in showing positive Do (≥60 minutes of moderate-to-vigorous physical activity), View (≤2 hours of screen time), and Chew (≥5 servings of fruits and vegetables) behaviors on each day. A total of 91 parents completed a brief survey to assess project-specific interactions with their child and the impact on their behaviors. The majority of parents (93.2%) reported satisfactory experiences with either the online or print SWITCH® program. The return rate for the SWITCH® Trackers was higher in the print schools (42.5% ± 11%) than in the online schools (27.4% ± 10.9%). District program managers rated the level of teacher engagement with regard to program facilitation, and the results showed a higher Trackers return rate in the highly engaged schools (38.5% ± 13.3%) than in the schools with low engagement (28.6% ± 11.9%). No significant differences were observed in parent/child interactions or reported behavior change (ps > .05), suggesting equivalent intervention effects for the print and online versions of the SWITCH® program. The findings support the utility of the online SWITCH® platform, but school-based modules are needed to facilitate broader school engagement by classroom teachers and PE teachers.
Chemical Education from Programs for Learning, Inc.
ERIC Educational Resources Information Center
Petrich, James A.
1981-01-01
This software review focuses on five concept-related packages of programs in the Apple version, which are viewed as well written in terms of both educational sophistication and programming expertise. (MP)
The use of self checks and voting in software error detection - An empirical study
NASA Technical Reports Server (NTRS)
Leveson, Nancy G.; Cha, Stephen S.; Knight, John C.; Shimeall, Timothy J.
1990-01-01
The results of an empirical study of software error detection using self checks and N-version voting are presented. Working independently, each of 24 programmers first prepared a set of self checks using just the requirements specification of an aerospace application, and then each added self checks to an existing implementation of that specification. The modified programs were executed to measure the error-detection performance of the checks and to compare this with error detection using simple voting among multiple versions. The analysis of the checks revealed that there are great differences in the ability of individual programmers to design effective checks. It was found that some checks that might have been effective failed to detect an error because they were badly placed, and there were numerous instances of checks signaling nonexistent errors. In general, specification-based checks alone were not as effective as specification-based checks combined with code-based checks. Self checks made it possible to identify faults that had not been detected previously by voting 28 versions of the program over a million randomly generated inputs. This appeared to result from the fact that the self checks could examine the internal state of the executing program, whereas voting examines only final results of computations. If internal states had to be identical in N-version voting systems, then there would be no reason to write multiple versions.
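The contrast drawn above, self checks that can inspect intermediate program state versus voting that compares only final results, can be made concrete with a toy example. The sketch below is not from the experiment; it is a hypothetical Python illustration with invented routines, showing a specification-based self check next to simple majority voting over three 'versions', one of which carries a seeded fault.

```python
import math
from collections import Counter

def sqrt_v1(x): return math.sqrt(x)
def sqrt_v2(x): return x ** 0.5
def sqrt_v3(x): return math.sqrt(x) + 1e-3 if x > 100 else math.sqrt(x)   # seeded fault

def sqrt_with_self_check(x):
    """A version that checks its own result against the specification."""
    y = math.sqrt(x)
    # self check: the result, squared, must reproduce the input within tolerance
    assert abs(y * y - x) <= 1e-9 * max(1.0, x), "self check failed"
    return y

def vote(results, tol=1e-6):
    """Majority voting on final results only: answers agreeing within tol form a group."""
    rounded = [round(r / tol) * tol for r in results]
    value, count = Counter(rounded).most_common(1)[0]
    return value if count >= 2 else None              # None means no majority

x = 2.0
print("voted result       :", vote([f(x) for f in (sqrt_v1, sqrt_v2, sqrt_v3)]))
print("self-checked result:", sqrt_with_self_check(x))
```

The self check examines the relationship between input and output inside a single version, while the voter never looks past the three returned numbers; this mirrors the explanation in the abstract for why the self checks caught faults that voting missed.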
MAGNA (Materially and Geometrically Nonlinear Analysis). Part I. Finite Element Analysis Manual.
1982-12-01
Guidance is provided for operating the program, modifying storage capacity, preparing input data, estimating computer run times, and interpreting the output. The manual covers the CDC, CRAY, and VAX program versions, including job control language, reserved file names, modification of storage capacity, and typical execution times on CDC and CRAY-1 computers, followed by a description of the input data.
Program Processes Thermocouple Readings
NASA Technical Reports Server (NTRS)
Quave, Christine A.; Nail, William, III
1995-01-01
Digital Signal Processor for Thermocouples (DART) computer program implements precise and fast method of converting voltage to temperature for large-temperature-range thermocouple applications. Written using LabVIEW software. DART available only as object code for use on Macintosh II FX or higher-series computers running System 7.0 or later and IBM PC-series and compatible computers running Microsoft Windows 3.1. Macintosh version of DART (SSC-00032) requires LabVIEW 2.2.1 or 3.0 for execution. IBM PC version (SSC-00031) requires LabVIEW 3.0 for Windows 3.1. LabVIEW software product of National Instruments and not included with program.
Program Models A Laser Beam Focused In An Aerosol Spray
NASA Technical Reports Server (NTRS)
Barton, J. P.
1996-01-01
Monte Carlo analysis performed on packets of light. Program for Analysis of Laser Beam Focused Within Aerosol Spray (FLSPRY) developed for theoretical analysis of propagation of laser pulse optically focused within aerosol spray. Applied for example, to analyze laser ignition arrangement in which focused laser pulse used to ignite liquid aerosol fuel spray. Scattering and absorption of laser light by individual aerosol droplets evaluated by use of electromagnetic Lorenz-Mie theory. Written in FORTRAN 77 for both UNIX-based computers and DEC VAX-series computers. VAX version of program (LEW-16051). UNIX version (LEW-16065).
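The packet-based Monte Carlo idea described above, bundles of light followed through the spray while droplet scattering and absorption attenuate them, can be sketched in a few lines. This is not FLSPRY code and it omits the Lorenz-Mie angular detail entirely; it is a hypothetical one-dimensional Python illustration, with invented parameters, of sampling free paths from an extinction coefficient and splitting extinction into scattering and absorption through a single-scattering albedo.

```python
import math
import random

def propagate_packets(n_packets, beta_ext=50.0, albedo=0.9, depth=0.05, seed=1):
    """Crude 1-D Monte Carlo: light packets cross a spray with extinction
    coefficient beta_ext (1/m) and geometric depth `depth` (m).  At each
    interaction a packet is absorbed with probability (1 - albedo) and
    otherwise scattered forward or backward at random."""
    rng = random.Random(seed)
    transmitted = reflected = absorbed = 0
    for _ in range(n_packets):
        x, direction = 0.0, +1
        while True:
            x += direction * (-math.log(1.0 - rng.random()) / beta_ext)  # sampled free path
            if x >= depth:
                transmitted += 1; break
            if x < 0.0:
                reflected += 1; break
            if rng.random() > albedo:
                absorbed += 1; break           # packet absorbed by a droplet
            direction = rng.choice((-1, +1))   # crude isotropic scatter in 1-D
    return transmitted, reflected, absorbed

print(propagate_packets(10000))   # counts of (transmitted, reflected, absorbed) packets
```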
THERMINATOR: THERMal heavy-IoN generATOR
NASA Astrophysics Data System (ADS)
Kisiel, Adam; Tałuć, Tomasz; Broniowski, Wojciech; Florkowski, Wojciech
2006-04-01
THERMINATOR is a Monte Carlo event generator designed for studying particle production in relativistic heavy-ion collisions performed at such experimental facilities as the SPS, RHIC, or LHC. The program implements thermal models of particle production with single freeze-out. It performs the following tasks: (1) generation of stable particles and unstable resonances at the chosen freeze-out hypersurface with the local phase-space density of particles given by the statistical distribution factors, (2) subsequent space-time evolution and decays of hadronic resonances in cascades, (3) calculation of the transverse-momentum spectra and numerous other observables related to the space-time evolution. The geometry of the freeze-out hypersurface and the collective velocity of expansion may be chosen from two successful models, the Cracow single-freeze-out model and the Blast-Wave model. All particles from the Particle Data Tables are used. The code is written in the object-oriented c++ language and complies with the standards of the ROOT environment. Program summaryProgram title:THERMINATOR Catalogue identifier:ADXL_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADXL_v1_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland RAM required to execute with typical data:50 Mbytes Number of processors used:1 Computer(s) for which the program has been designed: PC, Pentium III, IV, or Athlon, 512 MB RAM; not hardware dependent (any computer with the c++ compiler and the ROOT environment [R. Brun, F. Rademakers, Nucl. Instrum. Methods A 389 (1997) 81, http://root.cern.ch]) Operating system(s) for which the program has been designed: Linux: Mandrake 9.0, Debian 3.0, SuSE 9.0, Red Hat FEDORA 3, etc., Windows XP with Cygwin ver. 1.5.13-1 and gcc ver. 3.3.3 (cygwin special)—not system dependent External routines/libraries used: ROOT ver. 4.02.00 Programming language:c++ Size of the package: 324 KB directory (40 KB compressed distribution archive), without the ROOT libraries (see http://root.cern.ch for details on the ROOT [R. Brun, F. Rademakers, Nucl. Instrum. Methods A 389 (1997) 81, http://root.cern.ch] requirements). The output files created by the code need 1.1 GB for each 500 events. Distribution format: tar gzip file Number of lines in distributed program, including test data, etc.: 6534 Number of bytes in distributed program, including test data, etc.:41 828 Nature of the physical problem: Statistical models have proved to be very useful in the description of soft physics in relativistic heavy-ion collisions [P. Braun-Munzinger, K. Redlich, J. Stachel, 2003, nucl-th/0304013. [2
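Task (1) above, populating the freeze-out hypersurface with particles drawn from statistical distribution factors, comes down to sampling momenta from a thermal distribution. The sketch below is not THERMINATOR code; it is a hypothetical Python illustration of rejection-sampling a transverse-momentum spectrum of the schematic form dN/dpT proportional to pT exp(-mT/T), with mT = sqrt(m^2 + pT^2), for an invented particle mass and temperature.

```python
import math
import random

def sample_pt(mass=0.139, temp=0.165, pt_max=3.0, n=5, seed=7):
    """Rejection-sample pT (GeV/c) from dN/dpT ~ pT * exp(-mT/T),
    with mT = sqrt(mass^2 + pT^2); the mass and temperature (GeV) are invented."""
    rng = random.Random(seed)
    f = lambda pt: pt * math.exp(-math.sqrt(mass**2 + pt**2) / temp)
    # crude upper bound on f over [0, pt_max] from a coarse scan
    f_max = max(f(0.001 * i * pt_max) for i in range(1001))
    samples = []
    while len(samples) < n:
        pt = rng.uniform(0.0, pt_max)
        if rng.uniform(0.0, f_max) < f(pt):
            samples.append(pt)
    return samples

print(sample_pt())
```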
A distributed version of the NASA Engine Performance Program
NASA Technical Reports Server (NTRS)
Cours, Jeffrey T.; Curlett, Brian P.
1993-01-01
Distributed NEPP, a version of the NASA Engine Performance Program, uses the original NEPP code but executes it in a distributed computer environment. Multiple workstations connected by a network increase the program's speed and, more importantly, the complexity of the cases it can handle in a reasonable time. Distributed NEPP uses the public-domain software package Parallel Virtual Machine (PVM), allowing it to execute on clusters of machines containing many different architectures. It includes the capability to link with other computers, allowing them to process NEPP jobs in parallel. This paper discusses the design issues and granularity considerations that entered into programming Distributed NEPP and presents the results of timing runs.
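The coarse-grained parallelism described above, independent NEPP cases farmed out over networked machines and the results collected centrally, can be illustrated without PVM. The sketch below is only an analogy using Python's standard multiprocessing pool (the case list and the run_case stand-in are invented); the actual program uses the Parallel Virtual Machine library to do the equivalent across a heterogeneous cluster of workstations.

```python
from multiprocessing import Pool

def run_case(case):
    """Stand-in for one independent engine-performance case; in the real program
    each worker would run the full NEPP analysis for its own input deck."""
    design_point, throttle = case
    return design_point, throttle, design_point * throttle   # dummy result

if __name__ == "__main__":
    cases = [(dp, th) for dp in range(1, 5) for th in (0.6, 0.8, 1.0)]
    with Pool(processes=4) as pool:               # workers play the role of remote hosts
        for result in pool.map(run_case, cases):  # the master collects the results
            print(result)
```

The granularity consideration mentioned in the abstract appears here as the choice of how much work each dispatched case represents relative to the cost of sending it.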
HIPPO Unit Commitment Version 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-01-17
Developed for the Midcontinent Independent System Operator, Inc. (MISO), HIPPO-Unit Commitment Version 1 solves the security-constrained unit commitment problem. The model was developed to solve MISO's cases. This version of the code includes an I/O module to read MISO's csv files, modules to create a state-based mixed-integer programming (MIP) formulation, and modules to test basic procedures for solving the MIP via HPC.
FHWA traffic noise model, version 1.0 : user's guide
DOT National Transportation Integrated Search
1998-01-01
This User's Guide is for the Federal Highway Administration's Traffic Noise Model (FHWA TNM), Version 1.0 -- the FHWA's computer program for highway traffic noise prediction and analysis. Two companion reports, a Technical Manual and a data repor...
FHWA Traffic Noise Model, version 1.0 technical manual
DOT National Transportation Integrated Search
1998-02-01
This Technical Manual is for the Federal Highway Administration's Traffic Noise Model (FHWA TNM), Version 1.0 -- the FHWA's computer program for highway traffic noise prediction and analysis. Two companion reports, a User's Guide and a data r...
Conceptual modeling of coincident failures in multiversion software
NASA Technical Reports Server (NTRS)
Littlewood, Bev; Miller, Douglas R.
1989-01-01
Recent work by Eckhardt and Lee (1985) shows that independently developed program versions fail dependently (specifically, the probability that several versions fail simultaneously is greater than it would be if failures were truly independent). The present authors show there is a precise duality between input choice and program choice in this model and consider a generalization in which different versions can be developed using diverse methodologies. The use of diverse methodologies is shown to decrease the probability of the simultaneous failure of several versions. Indeed, it is theoretically possible to obtain versions which exhibit better than independent failure behavior. The authors try to formalize the notion of methodological diversity by considering the sequence of decision outcomes that constitute a methodology. They show that diversity of decision implies likely diversity of behavior for the different versions developed under such forced diversity. For certain one-out-of-n systems the authors obtain an optimal method for allocating diversity between versions. For two-out-of-three systems there seem to be no simple optimality results which do not depend on constraints which cannot be verified in practice.
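In models of this kind the central object is a difficulty function θ(x): the probability that a randomly chosen version fails on input x. Averaged over inputs, two independently developed versions fail together with probability E[θ²], which exceeds the value (E[θ])² implied by independent failures whenever θ varies across the input space. The short Python sketch below, with invented difficulty values, makes the inequality concrete.

```python
# Hypothetical difficulty function: probability that a randomly drawn version
# fails on each of five equally likely inputs (values invented for illustration).
theta = [0.001, 0.001, 0.001, 0.001, 0.050]

mean_theta = sum(theta) / len(theta)                  # E[theta]: average failure probability
coincident = sum(t * t for t in theta) / len(theta)   # E[theta^2]: both versions fail together

print("prediction assuming independent failures :", mean_theta ** 2)
print("prediction of the dependent-failure model:", coincident)
print("ratio:", coincident / mean_theta ** 2)
```

Forcing methodological diversity, in the sense of the abstract, amounts to making the two versions' difficulty functions differ, so that the cross-moment, and hence the coincident-failure probability, is pulled back down.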
NASA Technical Reports Server (NTRS)
Pfister, Robin; McMahon, Joe
2006-01-01
Power User Interface 5.0 (PUI) is a system of middleware written for expert users in the Earth-science community. PUI enables expedited ordering of data granules on the basis of specific granule-identifying information that the users already know or can assemble. PUI also enables expert users to perform quick searches for orderable-granule information for use in preparing orders. PUI 5.0 is available in two versions (note: PUI 6.0 has command-line mode only): a Web-based application program and a UNIX command-line-mode client program. Both versions include modules that perform data-granule-ordering functions in conjunction with external systems. The Web-based version works with Earth Observing System Clearing House (ECHO) metadata catalog and order-entry services and with an open-source order-service broker server component, called the Mercury Shopping Cart, that is provided separately by Oak Ridge National Laboratory through the Department of Energy. The command-line version works with the ECHO metadata and order-entry process service. Both versions of PUI ultimately use ECHO to process an order to be sent to a data provider. Ordered data are provided through means outside the PUI software system.
Additions to Mars Global Reference Atmospheric Model (MARS-GRAM)
NASA Technical Reports Server (NTRS)
Justus, C. G.; James, Bonnie
1992-01-01
Three major additions or modifications were made to the Mars Global Reference Atmospheric Model (Mars-GRAM): (1) in addition to the interactive version, a new batch version is available, which uses NAMELIST input, and is completely modular, so that the main driver program can easily be replaced by any calling program, such as a trajectory simulation program; (2) both the interactive and batch versions now have an option for treating local-scale dust storm effects, rather than just the global-scale dust storms in the original Mars-GRAM; and (3) the Zurek wave perturbation model was added, to simulate the effects of tidal perturbations, in addition to the random (mountain wave) perturbation model of the original Mars-GRAM. A minor modification was also made which allows heights to go 'below' local terrain height and return 'realistic' pressure, density, and temperature, and not the surface values, as returned by the original Mars-GRAM. This feature will allow simulations of Mars rover paths which might go into local 'valley' areas which lie below the average height of the present, rather coarse-resolution, terrain height data used by Mars-GRAM. Sample input and output of both the interactive and batch versions of Mars-GRAM are presented.
Additions to Mars Global Reference Atmospheric Model (Mars-GRAM)
NASA Technical Reports Server (NTRS)
Justus, C. G.
1991-01-01
Three major additions or modifications were made to the Mars Global Reference Atmospheric Model (Mars-GRAM): (1) in addition to the interactive version, a new batch version is available, which uses NAMELIST input, and is completely modular, so that the main driver program can easily be replaced by any calling program, such as a trajectory simulation program; (2) both the interactive and batch versions now have an option for treating local-scale dust storm effects, rather than just the global-scale dust storms in the original Mars-GRAM; and (3) the Zurek wave perturbation model was added, to simulate the effects of tidal perturbations, in addition to the random (mountain wave) perturbation model of the original Mars-GRAM. A minor modification has also been made which allows heights to go below local terrain height and return realistic pressure, density, and temperature (not the surface values) as returned by the original Mars-GRAM. This feature will allow simulations of Mars rover paths which might go into local valley areas which lie below the average height of the present, rather coarse-resolution, terrain height data used by Mars-GRAM. Sample input and output of both the interactive and batch version of Mars-GRAM are presented.
Maxis-A rezoning and remapping code in two dimensional cylindrical geometry
NASA Astrophysics Data System (ADS)
Lin, Zhiwei; Jiang, Shaoen; Zhang, Lu; Kuang, Longyu; Li, Hang
2018-06-01
This paper presents the new version of our code Maxis (Lin et al., 2011). Maxis is a local rezoning and remapping code in two dimensional cylindrical geometry, which can be employed to address the grid distortion problem of unstructured meshes. The new version of Maxis is mostly programmed in the C language which considerably improves its computational efficiency with respect to the former Matlab version. A new algorithm for determining the intersection of two arbitrary convex polygons is also incorporated into the new version. Some additional linking functions are further provided in the new version for the purpose of combining Maxis and MULTI2D.
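The convex-polygon intersection step mentioned above is essentially a clipping problem. The routine used in Maxis is not reproduced here; the sketch below is a generic Sutherland-Hodgman style clip of one convex polygon against another, written in Python with invented test polygons, to show the kind of geometric kernel a remapping code needs.

```python
def clip_convex(subject, clip):
    """Clip a convex polygon `subject` against a convex polygon `clip`
    (Sutherland-Hodgman); both are lists of (x, y) vertices in CCW order."""
    def inside(p, a, b):          # p is left of (or on) the directed edge a->b
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0.0

    def intersect(p, q, a, b):    # intersection of segment p-q with the line a-b
        den = (p[0]-q[0])*(a[1]-b[1]) - (p[1]-q[1])*(a[0]-b[0])
        t = ((p[0]-a[0])*(a[1]-b[1]) - (p[1]-a[1])*(a[0]-b[0])) / den
        return (p[0] + t*(q[0]-p[0]), p[1] + t*(q[1]-p[1]))

    output = subject
    for i in range(len(clip)):
        a, b = clip[i], clip[(i+1) % len(clip)]
        polygon, output = output, []
        for j in range(len(polygon)):
            p, q = polygon[j], polygon[(j+1) % len(polygon)]
            if inside(q, a, b):
                if not inside(p, a, b):
                    output.append(intersect(p, q, a, b))
                output.append(q)
            elif inside(p, a, b):
                output.append(intersect(p, q, a, b))
    return output

# unit square clipped by a square shifted by (0.5, 0.5): the overlap is a 0.5 x 0.5 square
square  = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
shifted = [(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)]
print(clip_convex(square, shifted))
```

In a remapping code the areas of such overlap polygons weight the transfer of cell quantities from the old mesh to the new one; the algorithm actually used in Maxis may of course differ in detail.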
Ackermann, Hans D.; Pankratz, Leroy W.; Dansereau, Danny A.
1983-01-01
The computer programs published in Open-File Report 82-1065, A comprehensive system for interpreting seismic-refraction arrival-time data using interactive computer methods (Ackermann, Pankratz, and Dansereau, 1982), have been modified to run on a mini-computer. The new version uses approximately 1/10 of the memory of the initial version, is more efficient and gives the same results.
Analysis of the Performance of Mixed Finite Element Methods
1988-11-28
with constant h is used and p is increased for accuracy. The h-p version combines the two approaches. We have studied various theoretical and ... program PROBE, developed by Noetic Tech, St. Louis [with a first release in 1985 and a second one in 1986]. This program implements these versions ...
Engineering and programming manual: Two-dimensional kinetic reference computer program (TDK)
NASA Technical Reports Server (NTRS)
Nickerson, G. R.; Dang, L. D.; Coats, D. E.
1985-01-01
The Two Dimensional Kinetics (TDK) computer program is a primary tool in applying the JANNAF liquid rocket thrust chamber performance prediction methodology. The development of a methodology that includes all aspects of rocket engine performance from analytical calculation to test measurements, that is physically accurate and consistent, and that serves as an industry and government reference is presented. Recent interest in rocket engines that operate at high expansion ratio, such as most Orbit Transfer Vehicle (OTV) engine designs, has required an extension of the analytical methods used by the TDK computer program. Thus, the version of TDK that is described in this manual is in many respects different from the 1973 version of the program. This new material reflects the new capabilities of the TDK computer program, the most important of which are described.
ADS: A FORTRAN program for automated design synthesis: Version 1.10
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1985-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions as examples, and one-dimensional search options include polynomial interpolation and the Golden Section method as examples. Emphasis is placed on ease of use of the program. All information is transferred via a single parameter list. Default values are provided for all internal program parameters such as convergence criteria, and the user is given a simple means to over-ride these, if desired.
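Of the one-dimensional search options named above, the Golden Section method is simple enough to show in full. The snippet below is not the ADS Fortran; it is a minimal Python sketch of a golden-section line search on an invented single-variable function, the kind of inner step that the strategy and optimizer levels would invoke repeatedly.

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on the interval [a, b] by golden-section search."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0            # about 0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while (b - a) > tol:
        if fc < fd:                                   # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                                         # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

# invented test problem: minimum of (x - 1.7)^2 + 0.3 on [0, 4]
print(golden_section(lambda x: (x - 1.7) ** 2 + 0.3, 0.0, 4.0))
```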
NASA Astrophysics Data System (ADS)
Avellar, J.; Duarte, L. G. S.; da Mota, L. A. C. P.
2012-10-01
We present a set of software routines in Maple 14 for solving first order ordinary differential equations (FOODEs). The package implements the Prelle-Singer method in its original form together with its extension to include integrating factors in terms of elementary functions. The package also presents a theoretical extension to deal with all FOODEs presenting Liouvillian solutions. Applications to ODEs taken from standard references show that it solves ODEs which remain unsolved using Maple's standard ODE solution routines. New version program summary Program title: PSsolver Catalogue identifier: ADPR_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADPR_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2302 No. of bytes in distributed program, including test data, etc.: 31962 Distribution format: tar.gz Programming language: Maple 14 (also tested using Maple 15 and 16). Computer: Intel Pentium Processor P6000, 1.86 GHz. Operating system: Windows 7. RAM: 4 GB DDR3 Memory Classification: 4.3. Catalogue identifier of previous version: ADPR_v1_0 Journal reference of previous version: Comput. Phys. Comm. 144 (2002) 46 Does the new version supersede the previous version?: Yes Nature of problem: Symbolic solution of first order differential equations via the Prelle-Singer method. Solution method: The method of solution is based on the standard Prelle-Singer method, with extensions for the cases when the FOODE contains elementary functions. Additionally, an extension of our own which solves FOODEs with Liouvillian solutions is included. Reasons for new version: The program was not running anymore due to changes in the latest versions of Maple. Additionally, we corrected/changed some bugs/details that were hampering the smoother functioning of the routines. Summary of revisions: • As time went by, many commands in Maple were deprecated. So, in order to make the program able to run with the newer versions, we have checked and changed some of those. For instance, the command sum had changed, and some program lines were substituted so that the package works properly. • In the old version we must supply the degree of the Darboux polynomials we want to determine. In the present version the user can set the degree by typing Deg = number in the command call (e.g., PSsolve(ode, Deg =3); telling the command PSsolve that it must use Darboux polynomials of degree up to three). If the user does not specify the degree, the routines use, as default, the degree 1. Restrictions: If the integrating factor for the FOODE under consideration has factors of high degree in the dependent and independent variables and in the elementary functions appearing in the FOODE, the package may spend a long time finding the solution. Also, when dealing with FOODEs containing elementary functions, it is essential that the algebraic dependency between them is recognized. If that does not happen, our program can miss some solutions. Unusual features: Our implementation of the Prelle-Singer approach not only solves FOODEs, but can also be used as a research tool that allows the user to follow all the steps of the procedure. For example, the Darboux polynomials (eigenpolynomials) of the D-operator associated with a FOODE (see Section 4) can be calculated. 
In addition, our package is successful in solving FOODEs that were not solved by some of the most commonly available solvers. Finally, our package implements a theoretical extension (for details, see [1,2]) to the original Prelle-Singer approach that enhances its scope, allowing it to tackle some FOODEs whose solutions involve non-elementary Liouvillian functions. Running time: This depends strongly on the FOODE, but usually under 2 seconds when running our 'arena' test file: the nonlinear FOODEs presented in the book by Kamke [3]. These times were obtained using an Intel Pentium Processor P6000, 1.86 GHz, with 4 GB RAM. References: [1] M. Singer, Liouvillian first integrals of differential equations, Trans. Amer. Math. Soc. 333 (1992) 673-688. [2] L.G.S. Duarte, S.E.S. Duarte, L.A.C.P. da Mota, J.E.F. Skea, A method to tackle first order ordinary differential equations with Liouvillian functions in the solution, J. Phys. A: Math. Gen. 35 (17) (2002) 3899-3910. [3] E. Kamke, Differentialgleichungen: Lösungsmethoden und Lösungen, Chelsea Publishing Co., New York, 1959.
Revised and extended UTILITIES for the RATIP package
NASA Astrophysics Data System (ADS)
Nikkinen, J.; Fritzsche, S.; Heinäsmäki, S.
2006-09-01
During the last years, the RATIP package has been found useful for calculating the excitation and decay properties of free atoms. Based on the (relativistic) multiconfiguration Dirac-Fock method, this program is used to obtain accurate predictions of atomic properties and to analyze many recent experiments. The daily work with this package made an extension of its UTILITIES [S. Fritzsche, Comput. Phys. Comm. 141 (2001) 163] desirable in order to facilitate the data handling and interpretation of complex spectra. For this purpose, we make available an enlarged version of the UTILITIES which mainly supports the comparison with experiment as well as large Auger computations. Altogether 13 additional tasks have been appended to the program together with a new menu structure to improve the interactive control of the program. Program summaryTitle of program: RATIP Catalogue identifier: ADPD_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADPD_v2_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: none Reference in CPC to previous version: S. Fritzsche, Comput. Phys. Comm. 141 (2001) 163 Catalogue identifier of previous version: ADPD Authors of previous version: S. Fritzsche, Department of Physics, University of Kassel, Heinrich-Plett-Strasse 40, D-34132 Kassel, Germany Does the new version supersede the original program?: yes Computer for which the new version is designed and others on which it has been tested: IBM RS 6000, PC Pentium II-IV Installations: University of Kassel (Germany), University of Oulu (Finland) Operating systems: IBM AIX, Linux, Unix Program language used in the new version: ANSI standard Fortran 90/95 Memory required to execute with typical data: 300 kB No. of bits in a word: All real variables are parameterized by a selected kind parameter and, thus, can be adapted to any required precision if supported by the compiler. Currently, the kind parameter is set to double precision (two 32-bit words) as used also for other components of the RATIP package [S. Fritzsche, C.F. Fischer, C.Z. Dong, Comput. Phys. Comm. 124 (2000) 341; G. Gaigalas, S. Fritzsche, Comput. Phys. Comm. 134 (2001) 86; S. Fritzsche, Comput. Phys. Comm. 141 (2001) 163; S. Fritzsche, J. Elec. Spec. Rel. Phen. 114-116 (2001) 1155] No. of lines in distributed program, including test data, etc.:231 813 No. of bytes in distributed program, including test data, etc.: 3 977 387 Distribution format: tar.gzip file Nature of the physical problem: In order to describe atomic excitation and decay properties also quantitatively, large-scale computations are often needed. In the framework of the RATIP package, the UTILITIES support a variety of (small) tasks. For example, these tasks facilitate the file and data handling in large-scale applications or in the interpretation of complex spectra. Method of solution: The revised UTILITIES now support a total of 29 subtasks which are mainly concerned with the manipulation of output data as obtained from other components of the RATIP package. Each of these tasks are realized by one or several subprocedures which have access to the corresponding modules of the main components. While the main menu defines seven groups of subtasks for data manipulations and computations, a particular task is selected from one of these group menus. This allows to enlarge the program later if technical support for further tasks will become necessary. 
For each selected task, an interactive dialog about the required input and output data as well as a few additional information are printed during the execution of the program. Reasons for the new version: The requirement for enlarging the previous version of the UTILITIES [S. Fritzsche, Comput. Phys. Comm. 141 (2001) 163] arose from the recent application of the RATIP package for large-scale radiative and Auger computations. A number of new subtasks now refer to the handling of Auger amplitudes and their proper combination in order to facilitate the interpretation of complex spectra. A few further tasks, such as the direct access to the one-electron matrix elements for some given set of orbital functions, have been found useful also in the analysis of data. Summary of revisions: extraction and handling of atomic data within the framework of RATIP. With the revised version, we now 'add' another 13 tasks which refer to the manipulation of data files, the generation and interpretation of Auger spectra, the computation of various one- and two-electron matrix elements as well as the evaluation of momentum densities and grid parameters. Owing to the rather large number of subtasks, the main menu has been divided into seven groups from which the individual tasks can be selected very similarly as before. Typical running time: The program responds promptly for most of the tasks. The responding time for some tasks, such as the generation of a relativistic momentum density, strongly depends on the size of the corresponding data files and the number of grid points. Unusual features of the program: A total of 29 different tasks are supported by the program. Starting from the main menu, the user is guided interactively through the program by a dialog and a few additional explanations. For each task, a short summary about its function is displayed before the program prompts for all the required input data.
FHWA Traffic Noise Model user's guide (version 2.0 addendum).
DOT National Transportation Integrated Search
2002-03-01
In March 1998, the Federal Highway Administration (FHWA) Office of Natural Environment released the FHWA Traffic Noise Model (FHWA TNM) Version 1.0, a state-of-the-art computer program for highway traffic noise prediction and analysis. Since t...
Carbon monoxide screen for signalized intersections : COSIM, version 4.0 - technical documentation.
DOT National Transportation Integrated Search
2013-06-01
Illinois Carbon Monoxide Screen for Intersection Modeling (COSIM) Version 3.0 is a Windows-based computer program currently used by the Illinois Department of Transportation (IDOT) to estimate worst-case carbon monoxide (CO) concentrations near s...
Detection of faults and software reliability analysis
NASA Technical Reports Server (NTRS)
Knight, J. C.
1986-01-01
Multiversion or N-version programming was proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. Specific topics addressed are: failure probabilities in N-version systems, consistent comparison in N-version systems, descriptions of the faults found in the Knight and Leveson experiment, analytic models of comparison testing, characteristics of the input regions that trigger faults, fault tolerance through data diversity, and the relationship between failures caused by automatically seeded faults.
Parkhurst, David L.; Appelo, C.A.J.
1999-01-01
PHREEQC version 2 is a computer program written in the C programming language that is designed to perform a wide variety of low-temperature aqueous geochemical calculations. PHREEQC is based on an ion-association aqueous model and has capabilities for (1) speciation and saturation-index calculations; (2) batch-reaction and one-dimensional (1D) transport calculations involving reversible reactions, which include aqueous, mineral, gas, solid-solution, surface-complexation, and ion-exchange equilibria, and irreversible reactions, which include specified mole transfers of reactants, kinetically controlled reactions, mixing of solutions, and temperature changes; and (3) inverse modeling, which finds sets of mineral and gas mole transfers that account for differences in composition between waters, within specified compositional uncertainty limits.New features in PHREEQC version 2 relative to version 1 include capabilities to simulate dispersion (or diffusion) and stagnant zones in 1D-transport calculations, to model kinetic reactions with user-defined rate expressions, to model the formation or dissolution of ideal, multicomponent or nonideal, binary solid solutions, to model fixed-volume gas phases in addition to fixed-pressure gas phases, to allow the number of surface or exchange sites to vary with the dissolution or precipitation of minerals or kinetic reactants, to include isotope mole balances in inverse modeling calculations, to automatically use multiple sets of convergence parameters, to print user-defined quantities to the primary output file and (or) to a file suitable for importation into a spreadsheet, and to define solution compositions in a format more compatible with spreadsheet programs. This report presents the equations that are the basis for chemical equilibrium, kinetic, transport, and inverse-modeling calculations in PHREEQC; describes the input for the program; and presents examples that demonstrate most of the program's capabilities.
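A saturation-index calculation of the kind listed above rests on a simple relation: SI = log10(IAP/K), where IAP is the ion-activity product of the mineral's dissolution reaction and K its equilibrium constant. The sketch below is not PHREEQC, which handles the full ion-association speciation, activity corrections, and a large thermodynamic database; it is a hedged Python illustration for calcite with invented input activities and an approximate log K, shown only to make the bookkeeping concrete.

```python
import math

def saturation_index(activity_ca, activity_co3, log_k_calcite=-8.48):
    """SI = log10(IAP/K) for calcite, CaCO3 = Ca2+ + CO3^2-.
    Positive SI suggests oversaturation, negative undersaturation, near zero equilibrium.
    The activities (not raw concentrations) are assumed to be known already; the
    log K value is an approximate 25 degree C figure used only for illustration."""
    log_iap = math.log10(activity_ca) + math.log10(activity_co3)
    return log_iap - log_k_calcite

# invented activities: a(Ca2+) = 1e-3, a(CO3^2-) = 1e-5
print("SI(calcite) =", round(saturation_index(1e-3, 1e-5), 2))
```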
ERIC Educational Resources Information Center
Mathews, Anna E.; Werch, Chudley (CHAD); Michniewicz, Mara; Bian, Hui
2007-01-01
The purpose of the study was to evaluate the immediate impact of two new versions of the Project SPORT program, a brief one-on-one tailored consult addressing alcohol use and physical activity for adolescents. One new version was a brief interactive CD-ROM (Study one) and a second was a brief small group consultation (Study two). In study one,…
Barcroft, Joe; Sommers, Mitchell S; Tye-Murray, Nancy; Mauzé, Elizabeth; Schroy, Catherine; Spehar, Brent
2011-11-01
Our long-term objective is to develop an auditory training program that will enhance speech recognition in those situations where patients most want improvement. As a first step, the current investigation trained participants using either a single talker or multiple talkers to determine if auditory training leads to transfer-appropriate gains. The experiment implemented a 2 × 2 × 2 mixed design, with training condition as a between-participants variable and testing interval and test version as repeated-measures variables. Participants completed a computerized six-week auditory training program wherein they heard either the speech of a single talker or the speech of six talkers. Training gains were assessed with single-talker and multi-talker versions of the Four-choice discrimination test. Participants in both groups were tested on both versions. Sixty-nine adult hearing-aid users were randomly assigned to either single-talker or multi-talker auditory training. Both groups showed significant gains on both test versions. Participants who trained with multiple talkers showed greater improvement on the multi-talker version whereas participants who trained with a single talker showed greater improvement on the single-talker version. Transfer-appropriate gains occurred following auditory training, suggesting that auditory training can be designed to target specific patient needs.
NASA Technical Reports Server (NTRS)
Muravyov, Alexander A.; Turner, Travis L.; Robinson, Jay H.; Rizzi, Stephen A.
1999-01-01
In this paper, the problem of random vibration of geometrically nonlinear MDOF structures is considered. The solutions obtained by application of two different versions of a stochastic linearization method are compared with exact (F-P-K) solutions. The formulation of a relatively new version of the stochastic linearization method (energy-based version) is generalized to the MDOF system case. Also, a new method for determination of nonlinear stiffness coefficients for MDOF structures is demonstrated. This method in combination with the equivalent linearization technique is implemented in a new computer program. Results in terms of root-mean-square (RMS) displacements obtained by using the new program and an existing in-house code are compared for two examples of beam-like structures.
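For a single-degree-of-freedom oscillator with a cubic (Duffing) stiffness under white-noise forcing, the conventional equivalent-linearization idea reduces to a small fixed-point iteration: the nonlinear restoring force k(x + eps x^3) is replaced by an equivalent linear stiffness k_eq = k(1 + 3 eps E[x^2]) using Gaussian closure, and the linear random-vibration result E[x^2] = pi S0 / (k_eq c), with S0 the two-sided force spectral density and c the damping coefficient, closes the loop. The Python sketch below, with invented parameter values, is only this textbook single-DOF version, not the MDOF or energy-based formulation of the paper.

```python
import math

def duffing_rms(k=1.0, c=0.05, eps=0.5, s0=0.01, tol=1e-10):
    """Equivalent (statistical) linearization for x'' + c*x' + k*(x + eps*x**3) = w(t),
    with w(t) white noise of two-sided force spectral density s0 (unit mass assumed).
    Gaussian closure gives k_eq = k*(1 + 3*eps*E[x^2]); the linearized system gives
    E[x^2] = pi*s0/(k_eq*c).  Iterate the two relations to a fixed point."""
    var = math.pi * s0 / (k * c)              # start from the linear (eps = 0) variance
    while True:
        k_eq = k * (1.0 + 3.0 * eps * var)
        new_var = math.pi * s0 / (k_eq * c)
        if abs(new_var - var) < tol:
            return math.sqrt(new_var), k_eq
        var = new_var

rms, k_eq = duffing_rms()
print("RMS displacement:", rms, " equivalent stiffness:", k_eq)
```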
The Portals 4.0 network programming interface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrett, Brian W.; Brightwell, Ronald Brian; Pedretti, Kevin
2012-11-01
This report presents a specification for the Portals 4.0 network programming interface. Portals 4.0 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4.0 is well suited to massively parallel processing and embedded systems. Portals 4.0 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4.0 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.
45 CFR 170.299 - Incorporation by reference.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard, Implementation Guide, Version 8, Release.... (1) National Council for Prescription Drug Programs Prescriber/Pharmacist Interface SCRIPT Standard...
FHWA Traffic Noise Model version 1.1 user's guide (Addendum)
DOT National Transportation Integrated Search
2000-09-30
In March 1998, the Federal Highway Administration (FHWA) Office of Natural Environment released the FHWA Traffic Noise Model (FHWA TNM) Version 1.0, a state-of-the-art computer program for highway traffic noise prediction and analysis. Since then, t...
45 CFR 170.455 - Testing and certification to newer versions of certain standards.
Code of Federal Regulations, 2010 CFR
2010-10-01
... INFORMATION TECHNOLOGY HEALTH INFORMATION TECHNOLOGY STANDARDS, IMPLEMENTATION SPECIFICATIONS, AND CERTIFICATION CRITERIA AND CERTIFICATION PROGRAMS FOR HEALTH INFORMATION TECHNOLOGY Temporary Certification... Technology may be upgraded to comply with newer versions of an adopted minimum standard accepted by the...
45 CFR 170.455 - Testing and certification to newer versions of certain standards.
Code of Federal Regulations, 2012 CFR
2012-10-01
... INFORMATION TECHNOLOGY HEALTH INFORMATION TECHNOLOGY STANDARDS, IMPLEMENTATION SPECIFICATIONS, AND CERTIFICATION CRITERIA AND CERTIFICATION PROGRAMS FOR HEALTH INFORMATION TECHNOLOGY Temporary Certification... Technology may be upgraded to comply with newer versions of an adopted minimum standard accepted by the...
45 CFR 170.455 - Testing and certification to newer versions of certain standards.
Code of Federal Regulations, 2013 CFR
2013-10-01
... INFORMATION TECHNOLOGY HEALTH INFORMATION TECHNOLOGY STANDARDS, IMPLEMENTATION SPECIFICATIONS, AND CERTIFICATION CRITERIA AND CERTIFICATION PROGRAMS FOR HEALTH INFORMATION TECHNOLOGY Temporary Certification... Technology may be upgraded to comply with newer versions of an adopted minimum standard accepted by the...
45 CFR 170.455 - Testing and certification to newer versions of certain standards.
Code of Federal Regulations, 2011 CFR
2011-10-01
... INFORMATION TECHNOLOGY HEALTH INFORMATION TECHNOLOGY STANDARDS, IMPLEMENTATION SPECIFICATIONS, AND CERTIFICATION CRITERIA AND CERTIFICATION PROGRAMS FOR HEALTH INFORMATION TECHNOLOGY Temporary Certification... Technology may be upgraded to comply with newer versions of an adopted minimum standard accepted by the...
45 CFR 170.455 - Testing and certification to newer versions of certain standards.
Code of Federal Regulations, 2014 CFR
2014-10-01
... INFORMATION TECHNOLOGY HEALTH INFORMATION TECHNOLOGY STANDARDS, IMPLEMENTATION SPECIFICATIONS, AND CERTIFICATION CRITERIA AND CERTIFICATION PROGRAMS FOR HEALTH INFORMATION TECHNOLOGY Temporary Certification... Technology may be upgraded to comply with newer versions of an adopted minimum standard accepted by the...
NASA Technical Reports Server (NTRS)
Justus, C. G.; Alyea, F. N.; Cunnold, D. M.; Jeffries, W. R., III; Johnson, D. L.
1991-01-01
A technical description of the NASA/MSFC Global Reference Atmospheric Model 1990 version (GRAM-90) is presented with emphasis on the additions and new user's manual descriptions of the program operation aspects of the revised model. Some sample results for the new middle atmosphere section and comparisons with results from a three dimensional circulation model are provided. A programmer's manual with more details for those wishing to make their own GRAM program adaptations is also presented.
Ada technology support for NASA-GSFC
NASA Technical Reports Server (NTRS)
1986-01-01
Utilization of the Ada programming language and environments to perform directorate functions was reviewed. The Mission and Data Operations Directorate Network (MNET) conversion effort was chosen as the first task for evaluation and assistance. The MNET project required the rewriting of the existing Network Control Program (NCP) in the Ada programming language. The DEC Ada compiler running on the VAX under VMS was used for the initial development efforts. Stress tests on the newly delivered version of the DEC Ada compiler were performed. The new Alsys Ada compiler was purchased for the IBM PC AT. A prevalidated version of the compiler was obtained. The compiler was then validated.
NASA Technical Reports Server (NTRS)
Mcbride, Bonnie J.; Reno, Martin A.; Gordon, Sanford
1994-01-01
The NASA Lewis chemical equilibrium program with applications continues to be improved and updated. The latest version is CET93. This code, with smaller arrays, has been compiled for use on an IBM or IBM-compatible personal computer and is called CETPC. This report is intended to be primarily a user's manual for CET93 and CETPC. It does not repeat the more complete documentation of earlier reports on the equilibrium program. Most of the discussion covers input and output files, two new options (ONLY and comments), example problems, and implementation of CETPC.
FFT-split-operator code for solving the Dirac equation in 2+1 dimensions
NASA Astrophysics Data System (ADS)
Mocken, Guido R.; Keitel, Christoph H.
2008-06-01
The main part of the code presented in this work represents an implementation of the split-operator method [J.A. Fleck, J.R. Morris, M.D. Feit, Appl. Phys. 10 (1976) 129-160; R. Heather, Comput. Phys. Comm. 63 (1991) 446] for calculating the time-evolution of Dirac wave functions. It makes it possible to study the dynamics of electronic Dirac wave packets under the influence of any number of laser pulses and their interaction with any number of charged ion potentials. The initial wave function can be either a free Gaussian wave packet or an arbitrary discretized spinor function that is loaded from a file provided by the user. The latter option includes Dirac bound state wave functions. The code itself contains the necessary tools for constructing such wave functions for a single-electron ion. With the help of self-adaptive numerical grids, we are able to study the electron dynamics for various problems in 2+1 dimensions at high spatial and temporal resolutions that are otherwise unachievable. Along with the position and momentum space probability density distributions, various physical observables, such as the expectation values of position and momentum, can be recorded in a time-dependent way. The electromagnetic spectrum that is emitted by the evolving particle can also be calculated with this code. Finally, for planning and comparison purposes, both the time-evolution and the emission spectrum can also be treated in an entirely classical relativistic way. Besides the implementation of the above-mentioned algorithms, the program also contains a large C++ class library to model the geometric algebra representation of spinors that we use for representing the Dirac wave function. This is why the code is called "Dirac++". Program summaryProgram title: Dirac++ or (abbreviated) d++ Catalogue identifier: AEAS_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEAS_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 474 937 No. of bytes in distributed program, including test data, etc.: 4 128 347 Distribution format: tar.gz Programming language: C++ Computer: Any, but SMP systems are preferred Operating system: Linux and MacOS X are actively supported by the current version. Earlier versions were also tested successfully on IRIX and AIX Number of processors used: Generally unlimited, but best scaling with 2-4 processors for typical problems RAM: 160 Megabytes minimum for the examples given here Classification: 2.7 External routines: FFTW Library [3,4], Gnu Scientific Library [5], bzip2, bunzip2 Nature of problem: The relativistic time evolution of wave functions according to the Dirac equation is a challenging numerical task. Especially for an electron in the presence of high intensity laser beams and/or highly charged ions, this type of problem is of considerable interest to atomic physicists. Solution method: The code employs the split-operator method [1,2], combined with fast Fourier transforms (FFT) for calculating any occurring spatial derivatives, to solve the given problem. An autocorrelation spectral method [6] is provided to generate a bound state for use as the initial wave function of further dynamical studies. Restrictions: The code in its current form is restricted to problems in two spatial dimensions.
Otherwise it is only limited by CPU time and memory that one can afford to spend on a particular problem. Unusual features: The code features dynamically adapting position and momentum space grids to keep execution time and memory requirements as small as possible. It employs an object-oriented approach, and it relies on a Clifford algebra class library to represent the mathematical objects of the Dirac formalism which we employ. Besides that it includes a feature (typically called "checkpointing") which allows the resumption of an interrupted calculation. Additional comments: Along with the program's source code, we provide several sample configuration files, a pre-calculated bound state wave function, and template files for the analysis of the results with both MatLab and Igor Pro. Running time: Running time ranges from a few minutes for simple tests up to several days, even weeks for real-world physical problems that require very large grids or very small time steps. References:J.A. Fleck, J.R. Morris, M.D. Feit, Time-dependent propagation of high energy laser beams through the atmosphere, Appl. Phys. 10 (1976) 129-160. R. Heather, An asymptotic wavefunction splitting procedure for propagating spatially extended wavefunctions: Application to intense field photodissociation of H +2, Comput. Phys. Comm. 63 (1991) 446. M. Frigo, S.G. Johnson, FFTW: An adaptive software architecture for the FFT, in: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 3, IEEE, 1998, pp. 1381-1384. M. Frigo, S.G. Johnson, The design and implementation of FFTW3, in: Proceedings of the IEEE, vol. 93, IEEE, 2005, pp. 216-231. URL: http://www.fftw.org/. M. Galassi, J. Davies, J. Theiler, B. Gough, G. Jungman, M. Booth, F. Rossi, GNU Scientific Library Reference Manual, second ed., Network Theory Limited, 2006. URL: http://www.gnu.org/software/gsl/. M.D. Feit, J.A. Fleck, A. Steiger, Solution of the Schrödinger equation by a spectral method, J. Comput. Phys. 47 (1982) 412-433.
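As a hedged illustration of the split-operator/FFT idea underlying Dirac++ (shown here for the much simpler one-dimensional Schrödinger equation, not for the spinor Dirac evolution the package actually implements), the following Python sketch advances a Gaussian wave packet through a low barrier; the grid, potential, and units are arbitrary assumptions.

import numpy as np

# Grid and a free Gaussian wave packet (hbar = m = 1, arbitrary units).
n, L = 1024, 100.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
psi = np.exp(-(x + 20.0) ** 2 / 4.0 + 1j * 2.0 * x)        # moving packet
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / n))          # normalize

V = 0.5 * (np.abs(x) < 2.0)                                 # low square barrier
dt = 0.05

def split_operator_step(psi):
    # One step of exp(-i V dt/2) exp(-i T dt) exp(-i V dt/2) using FFTs for the
    # kinetic part; the potential part acts in position space.
    psi = np.exp(-0.5j * V * dt) * psi                      # half potential kick
    psi = np.fft.ifft(np.exp(-0.5j * k ** 2 * dt) * np.fft.fft(psi))  # kinetic
    return np.exp(-0.5j * V * dt) * psi                     # half potential kick

for _ in range(200):
    psi = split_operator_step(psi)
print("norm after propagation:", np.sum(np.abs(psi) ** 2) * (L / n))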
NASA Astrophysics Data System (ADS)
Curtis, Joseph E.; Raghunandan, Sindhu; Nanda, Hirsh; Krueger, Susan
2012-02-01
A program to construct ensembles of biomolecular structures that are consistent with experimental scattering data is described. Specifically, we generate an ensemble of biomolecular structures by varying sets of backbone dihedral angles that are then filtered using experimentally determined restraints to rapidly determine structures that have scattering profiles that are consistent with scattering data. We discuss an application of these tools to predict a set of structures for the HIV-1 Gag protein, an intrinsically disordered protein, that are consistent with small-angle neutron scattering experimental data. We have assembled these algorithms into a program called SASSIE for structure generation, visualization, and analysis of intrinsically disordered proteins and other macromolecular ensembles using neutron and X-ray scattering restraints. Program summaryProgram title: SASSIE Catalogue identifier: AEKL_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKL_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License v3 No. of lines in distributed program, including test data, etc.: 3 991 624 No. of bytes in distributed program, including test data, etc.: 826 Distribution format: tar.gz Programming language: Python, C/C++, Fortran Computer: PC/Mac Operating system: 32- and 64-bit Linux (Ubuntu 10.04, Centos 5.6) and Mac OS X (10.6.6) RAM: 1 GB Classification: 3 External routines: Python 2.6.5, numpy 1.4.0, swig 1.3.40, scipy 0.8.0, Gnuplot-py-1.8, Tcl 8.5, Tk 8.5, Mac installation requires aquaterm 1.0 (or X window system) and Xcode 3 development tools. Nature of problem: The scarcity of open source software to generate structures of disordered biological molecules, and subsequently to compare computational and experimental results, limits the use of scattering resources. Solution method: Starting with an all-atom model of a protein, for example, users can select regions in which to vary dihedral angles, and ensembles of structures can be generated. Additionally, simple two-body rigid-body rotations are supported with and without disordered regions. Generated structures can then be used to calculate small-angle scattering profiles which can then be filtered against experimentally determined data. Filtered structures can be visualized individually or as an ensemble using density plots. In the modular and expandable program framework the user can easily access our subroutines and structural coordinates can be easily obtained for study using other computational physics methods. Additional comments: The distribution file for this program is over 159 Mbytes and therefore is not delivered directly when download or Email is requested. Instead an html file giving details of how the program can be obtained is sent. Running time: Varies depending on application. Typically 10 minutes to 24 hours depending on the number of generated structures.
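As a hedged, generic illustration of the filtering step described above (not SASSIE's actual code or API), the sketch below computes a reduced chi-square between each model's calculated scattering profile and the experimental curve and keeps the models below a user-chosen cutoff; all array names, the stand-in profiles, and the cutoff value are assumptions.

import numpy as np

def filter_ensemble(I_models, I_exp, sigma_exp, n_params=0, cutoff=2.0):
    # Keep model scattering profiles consistent with experiment.
    # I_models : (n_models, n_q) calculated intensities on the experimental q-grid
    # I_exp    : (n_q,) experimental intensities
    # sigma_exp: (n_q,) experimental uncertainties
    n_q = I_exp.size
    chi2 = np.sum(((I_models - I_exp) / sigma_exp) ** 2, axis=1) / (n_q - n_params)
    keep = chi2 < cutoff
    return np.flatnonzero(keep), chi2

# Toy usage with random numbers standing in for real profiles.
rng = np.random.default_rng(0)
q = np.linspace(0.01, 0.3, 50)
I_exp = np.exp(-(q * 25) ** 2 / 3)            # stand-in "experimental" curve
sigma = 0.05 * I_exp + 1e-3
I_models = I_exp + rng.normal(0, 0.05, (200, q.size)) * I_exp
kept, chi2 = filter_ensemble(I_models, I_exp, sigma)
print(len(kept), "of", len(chi2), "models pass the cutoff")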
DOT National Transportation Integrated Search
1992-02-01
HWNOISE is a VNTSC-developed user-friendly program written in Microsoft Fortran version 4.01 for the IBM PC/AT and compatibles to analyze acoustic data. This program is an integral part of the Federal Highway Administration's Mobile Noise Dat...
DOT National Transportation Integrated Search
1992-02-01
HWINPUT is a VNTSC-developed user-friendly program written in Microsoft Fortran version 4.01 for the IBM PC/AT. This program is an integral part of the Federal Highway Administration's Mobile Noise Data Gathering and Analysis Laboratory and is ...
Interactive graphics for the Macintosh: software review of FlexiGraphs.
Antonak, R F
1990-01-01
While this product is clearly unique, its usefulness to individuals outside small business environments is somewhat limited. FlexiGraphs is, however, a reasonable first attempt to design a microcomputer software package that controls data through interactive editing within a graph. Although the graphics capabilities of mainframe programs such as MINITAB (Ryan, Joiner, & Ryan, 1981) and the graphic manipulations available through exploratory data analysis (e.g., Velleman & Hoaglin, 1981) will not be surpassed anytime soon by this program, a researcher may want to add this program to a software library containing other Macintosh statistics, drawing, and graphics programs if only to obtain the easy-to-use curve-fitting and line-smoothing options. I welcome the opportunity to review the enhanced "scientific" version of FlexiGraphs that the author of the program indicates is currently under development. An MS-DOS version of the program should be available within the year.
Hybrid Applications Of Artificial Intelligence
NASA Technical Reports Server (NTRS)
Borchardt, Gary C.
1988-01-01
STAR, Simple Tool for Automated Reasoning, is interactive, interpreted programming language for development and operation of artificial-intelligence application systems. Couples symbolic processing with compiled-language functions and data structures. Written in C language and currently available in UNIX version (NPO-16832), and VMS version (NPO-16965).
ERIC Educational Resources Information Center
Baltaci, Serdal; Yildiz, Avni
2015-01-01
Each new version of the GeoGebra dynamic mathematics software goes through updates and innovations. One of these innovations is the GeoGebra 5.0 version. This version aims to facilitate 3D instruction by offering opportunities for students to analyze 3D objects. A review of previous studies on GeoGebra 3D shows that they mainly focus…
Hua, Zhi-Gang; Lin, Yan; Yuan, Ya-Zhou; Yang, De-Chang; Wei, Wen; Guo, Feng-Biao
2015-01-01
In 2003, we developed an ab initio program, ZCURVE 1.0, to find genes in bacterial and archaeal genomes. In this work, we present the updated version (i.e. ZCURVE 3.0). Using 422 prokaryotic genomes, the average accuracy was 93.7% with the updated version, compared with 88.7% with the original version. Such results also demonstrate that ZCURVE 3.0 is comparable with Glimmer 3.02 and may provide complementary predictions to it. In fact, the joint application of the two programs generated better results by correctly finding more annotated genes while also containing fewer false-positive predictions. As an exclusive feature, ZCURVE 3.0 contains a post-processing program that can identify essential genes with high accuracy (generally >90%). We hope that the web-based running mode will help ZCURVE 3.0 receive wide use. The updated ZCURVE can be freely accessed from http://cefg.uestc.edu.cn/zcurve/ or http://tubic.tju.edu.cn/zcurveb/ without any restrictions. PMID:25977299
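For readers unfamiliar with the underlying representation, the following minimal Python sketch (not part of ZCURVE itself) computes the three cumulative Z-curve components that this family of gene finders builds on; the DNA sequence is a made-up example.

import numpy as np

def z_curve(seq):
    # Cumulative Z-curve components of a DNA sequence:
    #   x_n = (A_n + G_n) - (C_n + T_n)   purine vs. pyrimidine
    #   y_n = (A_n + C_n) - (G_n + T_n)   amino vs. keto
    #   z_n = (A_n + T_n) - (G_n + C_n)   weak vs. strong hydrogen bonding
    seq = seq.upper()
    counts = {b: np.cumsum([c == b for c in seq]) for b in "ACGT"}
    x = (counts["A"] + counts["G"]) - (counts["C"] + counts["T"])
    y = (counts["A"] + counts["C"]) - (counts["G"] + counts["T"])
    z = (counts["A"] + counts["T"]) - (counts["G"] + counts["C"])
    return x, y, z

x, y, z = z_curve("ATGGCGTACCTGATTTAA")   # toy sequence
print(x[-1], y[-1], z[-1])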
Núñez-Batalla, Faustino; Antuña-León, Eva; González-Trelles, Teresa; Carro-Fernández, Pilar
2009-01-01
Although measuring parent satisfaction has been recommended as one of the important outcome measures in assessing the effectiveness of neonatal hearing screening programs, there are few published studies investigating this issue. To validate the Spanish version of the Parent Satisfaction Questionnaire with Neonatal Hearing Screening Program (PSQ-NHSP). 112 parents whose children had received hearing screening participated in this study. High levels of satisfaction were reported with more than 90% of parents satisfied with all aspects of the program. The psychometric properties of the Spanish version of the PSQ-NHSP were analyzed and demonstrated good internal consistency (alpha=0.75). Construct validity was indicated by a significant positive relationship between overall satisfaction and the three specific dimensions in the questionnaire. The development of a valid and reliable parent satisfaction questionnaire is important for improving hearing screening programs.
Software reliability experiments data analysis and investigation
NASA Technical Reports Server (NTRS)
Walker, J. Leslie; Caglayan, Alper K.
1991-01-01
The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.
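To make the two fault-tolerant structures compared above concrete, here is a hedged, schematic Python sketch (not the authors' experimental framework): an N-version structure takes a majority vote over independently developed versions, while a recovery block runs alternates in sequence until one passes an acceptance test; the version functions and the acceptance test are placeholders.

from collections import Counter

def n_version(versions, x):
    # Majority vote over the outputs of independently developed versions.
    outputs = [v(x) for v in versions]
    value, votes = Counter(outputs).most_common(1)[0]
    if votes > len(versions) // 2:
        return value
    raise RuntimeError("no majority: coincident or divergent failures")

def recovery_block(versions, acceptance_test, x):
    # Try alternates in order until one output passes the acceptance test.
    for v in versions:
        try:
            y = v(x)
        except Exception:
            continue
        if acceptance_test(x, y):
            return y
    raise RuntimeError("all alternates rejected by the acceptance test")

# Placeholder components: three "versions" of a square-root routine (last one faulty).
versions = [lambda x: x ** 0.5, lambda x: x ** 0.5, lambda x: x * 0.5]
accept = lambda x, y: abs(y * y - x) < 1e-6
print(n_version(versions, 9.0), recovery_block(versions, accept, 9.0))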
TIERRAS: A package to simulate high energy cosmic ray showers underground, underwater and under-ice
NASA Astrophysics Data System (ADS)
Tueros, Matías; Sciutto, Sergio
2010-02-01
In this paper we present TIERRAS, a Monte Carlo simulation program based on the well-known AIRES air shower simulations system that enables the propagation of particle cascades underground, providing a tool to study particles arriving underground from a primary cosmic ray on the atmosphere or to initiate cascades directly underground and propagate them, exiting into the atmosphere if necessary. We show several cross-checks of its results against CORSIKA, FLUKA, GEANT and ZHS simulations and we make some considerations regarding its possible use and limitations. The first results of full underground shower simulations are presented, as an example of the package capabilities. Program summaryProgram title: TIERRAS for AIRES Catalogue identifier: AEFO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFO_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 36 489 No. of bytes in distributed program, including test data, etc.: 3 261 669 Distribution format: tar.gz Programming language: Fortran 77 and C Computer: PC, Alpha, IBM, HP, Silicon Graphics and Sun workstations Operating system: Linux, DEC Unix, AIX, SunOS, Unix System V RAM: 22 Mb bytes Classification: 1.1 External routines: TIERRAS requires AIRES 2.8.4 to be installed on the system. AIRES 2.8.4 can be downloaded from http://www.fisica.unlp.edu.ar/auger/aires/eg_AiresDownload.html. Nature of problem: Simulation of high and ultra high energy underground particle showers. Solution method: Modification of the AIRES 2.8.4 code to accommodate underground conditions. Restrictions: In AIRES some processes that are not statistically significant on the atmosphere are not simulated. In particular, it does not include muon photonuclear processes. This imposes a limitation on the application of this package to a depth of 1 km of standard rock (or 2.5 km of water equivalent). Neutrinos are not tracked on the simulation, but their energy is taken into account in decays. Running time: A TIERRAS for AIRES run of a 10 eV shower with statistical sampling (thinning) below 10 eV and 0.2 weight factor (see [1]) uses approximately 1 h of CPU time on an Intel Core 2 Quad Q6600 at 2.4 GHz. It uses only one core, so 4 simultaneous simulations can be run on this computer. Aires includes a spooling system to run several simultaneous jobs of any type. References:S. Sciutto, AIRES 2.6 User Manual, http://www.fisica.unlp.edu.ar/auger/aires/.
Hubble Space Telescope: The GO and GTO Observing Programs. Version 1.0
NASA Technical Reports Server (NTRS)
Saha, Abhijit
1990-01-01
Selected information from the current Hubble Space Telescope (HST) science programs for the Guaranteed Time Observers (GTO's) and General Observers (GO's) is presented. Included are program abstracts, detailed listings of specific targets, and exposure information.
The portals 4.0.1 network programming interface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrett, Brian W.; Brightwell, Ronald Brian; Pedretti, Kevin
2013-04-01
This report presents a specification for the Portals 4.0 network programming interface. Portals 4.0 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4.0 is well suited to massively parallel processing and embedded systems. Portals 4.0 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4.0 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.
AIRPOL-4A : an introduction and user's guide.
DOT National Transportation Integrated Search
1976-01-01
This report details the mechanics of implementing the computer program AIRPOL-4A. AIRPOL-4A supersedes AIRPOL-4. The upgrade from version 4 to version 4A is a result of the new emissions guidelines contained in Supplement 5 to AP-42, April 1975, by t...
Reference manual for generation and analysis of Habitat Time Series: version II
Milhous, Robert T.; Bartholow, John M.; Updike, Marlys A.; Moos, Alan R.
1990-01-01
The selection of an instream flow requirement for water resource management often requires the review of how the physical habitat changes through time. This review is referred to as "Time Series Analysis." The Time Series Library (TSLIB) is a group of programs to enter, transform, analyze, and display time series data for use in stream habitat assessment. A time series may be defined as a sequence of data recorded or calculated over time. Examples might be historical monthly flow, predicted monthly weighted usable area, daily electrical power generation, annual irrigation diversion, and so forth. The time series can be analyzed, both descriptively and analytically, to understand the importance of the variation in the events over time. This is especially useful in the development of instream flow needs based on habitat availability. The TSLIB group of programs assumes that you have an adequate study plan to guide you in your analysis. You need to already have knowledge about such things as time period and time step, species and life stages to consider, and appropriate comparisons or statistics to be produced and displayed or tabulated. Knowing your destination, you must first evaluate whether TSLIB can get you there. Remember, data are not answers. This publication is a reference manual to TSLIB and is intended to be a guide to the process of using the various programs in TSLIB. This manual is essentially limited to the hands-on use of the various programs. A TSLIB user interface program (called RTSM) has been developed to provide an integrated working environment where the user has a brief on-line description of each TSLIB program with the capability to run the TSLIB program while in the user interface. For information on the RTSM program, refer to Appendix F. Before applying the computer models described herein, it is recommended that the user enroll in the short course "Problem Solving with the Instream Flow Incremental Methodology (IFIM)." This course is offered by the Aquatic Systems Branch of the National Ecology Research Center. For more information about the TSLIB software, refer to the Memorandum of Understanding. Chapter 1 provides a brief introduction to the Instream Flow Incremental Methodology and TSLIB. Other chapters in this manual provide information on the different aspects of using the models. The information contained in the other chapters includes (2) acquisition, entry, manipulation, and listing of streamflow data; (3) entry, manipulation, and listing of the habitat-versus-streamflow function; (4) transferring streamflow data; (5) water resources systems analysis; (6) generation and analysis of daily streamflow and habitat values; (7) generation of the time series of monthly habitats; (8) manipulation, analysis, and display of monthly time series data; and (9) generation, analysis, and display of annual time series data. Each section includes documentation for the programs therein with at least one page of information for each program, including a program description, instructions for running the program, and sample output. The Appendixes contain the following: (A) sample file formats; (B) descriptions of default filenames; (C) alphabetical summary of batch-procedure files; (D) installing and running TSLIB on a microcomputer; (E) running TSLIB on a CDC Cyber computer; (F) using the TSLIB user interface program (RTSM); and (G) running WATSTORE on the USGS Amdahl mainframe computer.
The number for this version of TSLIB--Version II-- is somewhat arbitrary, as the TSLIB programs were collected into a library some time ago; but operators tended to use and manage them as individual programs. Therefore, we will consider the group of programs from the past that were only on the CDC Cyber computer as Version 0; the programs from the past that were on both the Cyber and the IBM-compatible microcomputer as Version I; and the programs contained in this reference manual as Version II.
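As a hedged illustration of the kind of calculation TSLIB automates (not code from the manual), the sketch below converts a monthly streamflow time series into a monthly habitat time series by interpolating a habitat-versus-streamflow (weighted usable area) function; the flow record and the WUA table are invented values.

import numpy as np

# Habitat-versus-streamflow function (weighted usable area, WUA) as a lookup
# table: discharge in cfs vs. WUA in ft^2 per 1000 ft of stream (invented values).
q_table   = np.array([  50,  100,  200,  400,  800, 1600])
wua_table = np.array([1200, 2100, 3400, 3900, 3100, 1800])

# A short monthly streamflow record (invented values, cfs).
monthly_flow = np.array([420, 380, 610, 950, 700, 300, 150, 90, 120, 260, 340, 500])

# Monthly habitat time series: interpolate WUA at each month's flow.
monthly_habitat = np.interp(monthly_flow, q_table, wua_table)

for month, (q, h) in enumerate(zip(monthly_flow, monthly_habitat), start=1):
    print(f"month {month:2d}: flow = {q:5.0f} cfs, habitat = {h:6.0f} ft^2/1000 ft")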
1993-07-01
version tree is formed that permits users to go back to any previous version. There are methods for traversing the version tree of a particular...workspace. Workspace objects are linked (or nested) hierarchically into a workspace tree. Applications can set the access privileges to parts of this...workspace tree to control access (and hence change). There must be a default global workspace. Workspace objects are then allocated within the context
NextGen Weather Plan, Version 1.1
2009-09-17
values of weather parameters at a station or over an area. In this paper, we often refer to aeronautical climatology, which is the application of the data... Joint Planning and Development Office, NextGen Weather Plan, Version 1.1, September 17, 2009.
Improvements to the adaptive maneuvering logic program
NASA Technical Reports Server (NTRS)
Burgin, George H.
1986-01-01
The Adaptive Maneuvering Logic (AML) computer program simulates close-in, one-on-one air-to-air combat between two fighter aircraft. Three important improvements are described. First, the previously available versions of AML were examined for their suitability as a baseline program. The selected program was then revised to eliminate some programming bugs which were uncovered over the years. A listing of this baseline program is included. Second, the equations governing the motion of the aircraft were completely revised. This resulted in a model with substantially higher fidelity than the original equations of motion provided. It also completely eliminated the over-the-top problem, which occurred in the older versions when the AML-driven aircraft attempted a vertical or near-vertical loop. Third, the requirements for a versatile, generic, yet realistic aircraft model were studied and implemented in the program. The report contains detailed tables which allow the generic aircraft to be configured as either a modern high-performance aircraft, an older high-performance aircraft, or a previous-generation jet fighter.
International Translation of Project EX: A Teen Tobacco Use Cessation Program.
Sussman, Steve
2012-10-01
There are relatively few documented teen tobacco use cessation efforts outside the United States (U.S.). Project EX is an evidence-based program that consists of eight sessions, as a school-based clinic tobacco cessation-only version and a classroom-based prevention and cessation version. This paper provides a 'snapshot' of progress on international translation of Project EX pilot study work in eight countries that have been approached thus far. The program was implemented in Wuhan, China; Israel and partners; Bashkortostan, Russia; and Elche, Spain. Implementation is planned for Vienna, Austria; Mumbai, India; and Bangkok, Thailand. This work will lead eventually to a greater understanding regarding preference for type of programming (e.g., clinic versus classroom modality), challenges in recruitment and retention, program receptivity, and short-term (approximately 3-month post-program) quit rates. Convenience samples are being recruited based on previous contacts with each location. A protocol was sent to each location, proposing a controlled design, in which subjects enter cessation groups or become a wait-list control, with an immediate pretest, posttest, and 3-month follow-up. Language translation of program materials was completed in seven of the eight locations. Several variations in design and implementation were required, though. For example, youth fear of reporting tobacco publicly mandated to researchers that the prevention/cessation classroom version be implemented in some locations (Israel and partners, and India). Program effects are suggested across countries. Ongoing partnerships with parties actively involved in tobacco control facilitate pilot testing of teen tobacco use cessation programming. The Project EX curriculum appears quite translatable, though having flexibility in implementation modality made it easier to pilot test the program. Research on this cognitive-behavioral, motivation enhancement approach continues.
International Translation of Project EX: A Teen Tobacco Use Cessation Program
Sussman, Steve
2013-01-01
Aims There are relatively few documented teen tobacco use cessation efforts outside the United States (U.S.). Project EX is an evidence-based program that consists of eight sessions, as a school-based clinic tobacco cessation-only version and a classroom-based prevention and cessation version. This paper provides a ‘snapshot’ of progress on international translation of Project EX pilot study work in eight countries that have been approached thus far. The program was implemented in Wuhan, China; Israel and partners; Bashkortostan, Russia; and Elche, Spain. Implementation is planned for Vienna, Austria; Mumbai, India; and Bangkok, Thailand. This work will lead eventually to a greater understanding regarding preference for type of programming (e.g., clinic versus classroom modality), challenges in recruitment and retention, program receptivity, and short-term (approximately 3-month post-program) quit rates. Protocol and Interim Results of International Translation of Project EX Convenience samples are being recruited based on previous contacts with each location. A protocol was sent to each location, proposing a controlled design, in which subjects enter cessation groups or become a wait-list control, with an immediate pretest, posttest, and 3-month follow-up. Language translation of program materials was completed in seven of the eight locations. Several variations in design and implementation were required, though. For example, youth fear of reporting tobacco publicly mandated to researchers that the prevention/cessation classroom version be implemented in some locations (Israel and partners, and India). Program effects are suggested across countries. Conclusions Ongoing partnerships with parties actively involved in tobacco control facilitate pilot testing of teen tobacco use cessation programming. The Project EX curriculum appears quite translatable, though having flexibility in implementation modality made it easier to pilot test the program. Research on this cognitive-behavioral, motivation enhancement approach continues. PMID:23885135
NASA Technical Reports Server (NTRS)
Lu, Yun-Chi; Chang, Hyo Duck; Krupp, Brian; Kumar, Ravindra; Swaroop, Anand
1992-01-01
Information on Earth Observing System (EOS) output data products and input data requirements that has been compiled by the Science Processing Support Office (SPSO) at GSFC is presented. Since Version 1.0 of the SPSO Report was released in August 1991, there have been significant changes in the EOS program. In anticipation of a likely budget cut for the EOS Project, NASA HQ restructured the EOS program. An initial program consisting of two large platforms was replaced by plans for multiple, smaller platforms, and some EOS instruments were either deselected or descoped. Updated payload information reflecting the restructured EOS program superseding the August 1991 version of the SPSO report is included. This report has been expanded to cover information on non-EOS data products, and consists of three volumes (Volumes 1, 2, and 3). Volume 1 provides information on instrument outputs and input requirements. Volume 2 is devoted to Interdisciplinary Science (IDS) outputs and input requirements, including the 'best' and 'alternative' match analysis. Volume 3 provides information about retrieval algorithms, non-EOS input requirements of instrument teams and IDS investigators, and availability of non-EOS data products at seven primary Distributed Active Archive Centers (DAAC's).
GENXICC2.1: An improved version of GENXICC for hadronic production of doubly heavy baryons
NASA Astrophysics Data System (ADS)
Wang, Xian-You; Wu, Xing-Gang
2013-03-01
We present an improved version of GENXICC, which is a generator for hadronic production of the doubly heavy baryons Ξcc, Ξbc and Ξbb and has been introduced by C.H. Chang, J.X. Wang and X.G. Wu [Comput. Phys. Commun. 177 (2007) 467; Comput. Phys. Commun. 181 (2010) 1144]. In comparison with the previous GENXICC versions, we update the program in order to generate the unweighted baryon events more effectively under various simulation environments, whose distributions are now generated according to the probability proportional to the integrand. One Les Houches Event (LHE) common block has been added to produce a standard LHE data file that contains useful information of the doubly heavy baryon and its accompanying partons. Such LHE data can be conveniently imported into PYTHIA to do further hadronization and decay simulation, especially, the color-flow problem can be solved with PYTHIA8.0. NEW VERSION PROGRAM SUMMARYTitle of program: GENXICC2.1 Program obtained from: CPC Program Library Reference to original program: GENXICC Reference in CPC: Comput. Phys. Commun. 177, 467 (2007); Comput. Phys. Commun. 181, 1144 (2010) Does the new version supersede the old program: No Computer: Any LINUX based on PC with FORTRAN 77 or FORTRAN 90 and GNU C compiler as well Operating systems: LINUX Programming language used: FORTRAN 77/90 Memory required to execute with typical data: About 2.0 MB No. of bytes in distributed program: About 2 MB, including PYTHIA6.4 Distribution format: .tar.gz Nature of physical problem: Hadronic production of doubly heavy baryons Ξcc, Ξbc and Ξbb. Method of solution: The upgraded version with a proper interface to PYTHIA can generate full production and decay events, either weighted or unweighted, conveniently and effectively. Especially, the unweighted events are generated by using an improved hit-and-miss approach. Reasons for new version: Responding to the feedback from users of CMS and LHCb groups at the Large Hadron Collider, and based on the recent improvements of PYTHIA on the color-flow problem, we improve the efficiency for generating the unweighted events, and also improve the color-flow part for further hadronization. Especially, an interface has been added to import the output production events into a suitable form for PYTHIA8.0 simulation, in which the color-flow during the simulation can be correctly set. Typical running time: It depends on which option is chosen to match PYTHIA when generating the full events and also on which mechanism is chosen to generate the events. Typically, for the dominant gluon-gluon fusion mechanism to generate the mixed events via the intermediate diquarks in (cc)[3S1]3¯ and (cc)[1S0]6 states, setting IDWTUP=3 and unwght =.true., it takes 30 min to generate 10^5 unweighted events on a 2.27 GHz Intel Xeon E5520 processor machine; setting IDWTUP=3 and unwght =.false. or IDWTUP=1 and IGENERATE=0, it only needs 2 min to generate the 10^5 baryon events (the fastest way, for theoretical purposes only). As a comparison, for previous GENXICC versions, if setting IDWTUP=1 and IGENERATE=1, it takes about 22 hours to generate 1000 unweighted events. Keywords: Event generator; Doubly heavy baryons; Hadronic production.
Summary of the changes (improvements): (1) The scheme for generating unweighted events has been improved; (2) One Les Houches Event (LHE) common block has been added to record the standard LHE data in order to be the correct input for PYTHIA8.0 for later simulation; (3) We present the code for connecting GENXICC to PYTHIA8.0, where three color-flows have to be correctly set for later simulation. More specifically, we present the changes together with their detailed explanations in the following:
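The improved hit-and-miss (acceptance-rejection) unweighting mentioned above can be illustrated with a short, hedged Python sketch; this is generic rejection sampling, not GENXICC's Fortran implementation, and the weight array is invented.

import numpy as np

def unweight(weights, rng=None):
    # Hit-and-miss unweighting: accept event i with probability w_i / w_max.
    # Returns the indices of accepted events, which can then be treated as
    # unweighted (unit-weight) events.
    rng = np.random.default_rng() if rng is None else rng
    weights = np.asarray(weights, dtype=float)
    w_max = weights.max()
    accept = rng.random(weights.size) < weights / w_max
    return np.flatnonzero(accept)

# Toy weighted "events": weights drawn from a broad distribution.
rng = np.random.default_rng(1)
weights = rng.exponential(scale=1.0, size=100_000)
kept = unweight(weights, rng)
print(f"unweighting efficiency: {kept.size / weights.size:.3f}",
      f"(mean weight / max weight = {weights.mean() / weights.max():.3f})")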
DOT National Transportation Integrated Search
1981-12-01
This report (Volume 2 of three volumes) provides detailed descriptions of all program materials employed with the recommended version of a child pedestrian safety program. Volume 1 of this report describes the conduct and results of the evaluation of...
DOT National Transportation Integrated Search
1981-12-01
This report (Volume 3 of three volumes) provides detailed descriptions of additional program materials suggested for use with the recommended version of a child pedestrian safety program. Volume 1 of this report describes the conduct and results of t...
Experimental field test of proposed anti-dart-out training programs. Volume 1, Conduct and results
DOT National Transportation Integrated Search
1981-12-01
This report describes the conduct and results of an evaluation of a child pedestrian anti-dart-out training program. Two versions were tested: A film program and a film/simulator program. Before/after accident and street crossing behavior data were c...
The Effects of Dubbing Versus Subtitling of Television Program.
ERIC Educational Resources Information Center
Mokhtar, Fattawi B.
The purpose of this study was to investigate viewers' knowledge of program content under various television translation modes and viewing experiences. Subjects were 176 students from the Center for Matriculation Program, Universiti Sains Malaysia in Penang, Malaysia. The Spanish version of an instructional television program was used; the program…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-29
... Nonradioactive Versions of the Murine Local Lymph Node Assay for Assessing Allergic Contact Dermatitis Hazard... nonradioactive versions of the Local Lymph Node Assay (LLNA) for assessing allergic contact dermatitis (ACD... Nonradioactive Alternative Test Method to Assess the Allergic Contact Dermatitis Potential of Chemicals and...
ERIC Educational Resources Information Center
Gibson, Jennifer E.; Werner, Shelby S.; Sweeney, Andrew
2015-01-01
When evidence-based prevention programs are implemented in schools, adaptations are common. It is important to understand which adaptations can be made while maintaining positive outcomes for students. This preliminary study evaluated an abbreviated version of the Promoting Alternative Thinking Strategies (PATHS) Curriculum implemented by…
Marketing and the Low Income Consumer.
ERIC Educational Resources Information Center
Bureau of Domestic Commerce (DOC), Washington, DC.
This is a revised version of a 1969 bibliography dealing with the characteristics of the market system serving low-income consumers, with programs designed to improve the market system and with problems in low income marketing. This version contains 326 classified, annotated entries. The bibliography covers the following major areas: (1)…
What's New with MS Office Suites
ERIC Educational Resources Information Center
Goldsborough, Reid
2012-01-01
If one buys a new PC, laptop, or netbook computer today, it probably comes preloaded with Microsoft Office 2010 Starter Edition. This is a significantly limited, advertising-laden version of Microsoft's suite of productivity programs, Microsoft Office. This continues the trend of PC makers providing ever more crippled versions of Microsoft's…
MEASUREMENT AND PRECISION, EXPERIMENTAL VERSION.
ERIC Educational Resources Information Center
Harvard Univ., Cambridge, MA. Harvard Project Physics.
THIS DOCUMENT IS AN EXPERIMENTAL VERSION OF A PROGRAMED TEXT ON MEASUREMENT AND PRECISION. PART I CONTAINS 24 FRAMES DEALING WITH PRECISION AND SIGNIFICANT FIGURES ENCOUNTERED IN VARIOUS MATHEMATICAL COMPUTATIONS AND MEASUREMENTS. PART II BEGINS WITH A BRIEF SECTION ON EXPERIMENTAL DATA, COVERING SUCH POINTS AS (1) ESTABLISHING THE ZERO POINT, (2)…
Darton College Customized Nursing Program for the Fort Benning Community and Research Project
2013-10-01
Netbook/laptop versions of English 1102, Communication 1101, PSYC 2115 and PHED 1161 will be developed for deployed students with limited or no...internet accessibility.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-30
... System Testing and Certification Program, Version 2.0 AGENCY: United States Election Assistance Commission (EAC). ACTION: Notice; publication of Voting System Testing and Certification Manual, Version 2.0, for 60 day public comment period on EAC Web site. SUMMARY: The U.S. Election Assistance Commission...
NASA Technical Reports Server (NTRS)
Watts, Michael E.; Dejpour, Shabob R.
1989-01-01
The changes made to the data analysis and management program DATAMAP (Data from Aeromechanics Test and Analytics - Management and Analysis Package) are detailed. These changes are made to Version 3.07 (released February, 1981) and are called Version 4.0. Version 4.0 improvements were performed by Sterling Software under contract to NASA Ames Research Center. The increased capabilities instituted in this version include the breakout of the source code into modules for ease of modification, addition of a more accurate curve fit routine, ability to handle higher frequency data, additional data analysis features, and improvements in the functionality of existing features. These modifications will allow DATAMAP to be used on more data sets and will make future modifications and additions easier to implement.
An Expert-System-Like Feedback Approach in the hp-Version of the Finite Element Method.
1986-05-01
and, besides some research codes, the authors know of only two commercial programs based on the p-version. These are the computer program PROBE (Noetic...). ...Let us first study the problem of the best approximation on the interval I = (-1, 1)... Comp. and Maths. with Appls., 5, pp. 99-115, 1979. Szabo, B., PROBE: Theoretical Manual, NOETIC Technology Corporation, 7980 Clayton Road, Suite 205
NASA TLX: software for assessing subjective mental workload.
Cao, Alex; Chintamani, Keshav K; Pandya, Abhilash K; Ellis, R Darin
2009-02-01
The NASA Task Load Index (TLX) is a popular technique for measuring subjective mental workload. It relies on a multidimensional construct to derive an overall workload score based on a weighted average of ratings on six subscales: mental demand, physical demand, temporal demand, performance, effort, and frustration level. A program for implementing a computerized version of the NASA TLX is described. The software version assists in simplifying collection, postprocessing, and storage of raw data. The program collects raw data from the subject and calculates the weighted (or unweighted) workload score, which is output to a text file. The program can also be tailored to a specific experiment using a simple input text file, if desired. The program was designed in Visual Studio 2005 and is capable of running on a Pocket PC with Windows CE or on a PC with Windows 2000 or higher. The NASA TLX program is available for free download.
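As a hedged illustration of the weighted scoring described above (not the authors' Visual Studio program), the Python sketch below computes an overall NASA TLX workload score from six 0-100 subscale ratings and the pairwise-comparison weights, which by construction sum to 15; the example ratings and weights are invented.

# Six NASA TLX subscales, rated 0-100 (invented example values).
ratings = {
    "mental demand": 70, "physical demand": 20, "temporal demand": 55,
    "performance": 40, "effort": 65, "frustration": 30,
}

# Weights from the 15 pairwise comparisons: each subscale's tally of "wins",
# so the weights sum to 15 (invented example values).
weights = {
    "mental demand": 5, "physical demand": 0, "temporal demand": 3,
    "performance": 2, "effort": 4, "frustration": 1,
}
assert sum(weights.values()) == 15

# Weighted workload score: weighted average of the ratings.
weighted_score = sum(ratings[s] * weights[s] for s in ratings) / 15.0

# Unweighted ("raw TLX") score: simple mean of the six ratings.
raw_score = sum(ratings.values()) / len(ratings)

print(f"weighted TLX = {weighted_score:.1f}, raw TLX = {raw_score:.1f}")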
Thonse, Umesh; Behere, Rishikesh V; Frommann, Nicole; Sharma, Psvn
2018-01-01
Social cognition refers to mental operations involved in processing of social cues and includes the domains of emotion processing, Theory of Mind (ToM), social perception, social knowledge and attributional bias. Significant deficits in ToM, emotion perception and social perception have been demonstrated in schizophrenia which can have an impact on socio-occupational functioning. Intervention modules for social cognition have demonstrated moderate effect sizes for improving emotion identification and discrimination. We describe the Indian version of the Training of Affect Recognition (TAR) program and a pilot study to demonstrate the feasibility of administering this intervention program in the Indian population. We also discuss the cultural sensibilities in adopting an intervention program for the Indian setting. To the best of our knowledge this is the first intervention program for social cognition for use in persons with schizophrenia in India. Copyright © 2017 Elsevier B.V. All rights reserved.
C-Language Integrated Production System, Version 6.0
NASA Technical Reports Server (NTRS)
Riley, Gary; Donnell, Brian; Ly, Huyen-Anh Bebe; Ortiz, Chris
1995-01-01
C Language Integrated Production System (CLIPS) computer programs are specifically intended to model human expertise or other knowledge. CLIPS is designed to enable research on, and development and delivery of, artificial intelligence on conventional computers. CLIPS 6.0 provides cohesive software tool for handling wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming: representation of knowledge as heuristics - essentially, rules of thumb that specify set of actions performed in given situation. Object-oriented programming: modeling of complex systems comprised of modular components easily reused to model other systems or create new components. Procedural-programming: representation of knowledge in ways similar to those of such languages as C, Pascal, Ada, and LISP. Version of CLIPS 6.0 for IBM PC-compatible computers requires DOS v3.3 or later and/or Windows 3.1 or later.
Hall, Gordon C Nagayama; Allard, Carolyn B
2009-07-01
The top 86 students were selected from a pool of approximately 400 applicants to a summer clinical psychology research training program for undergraduate students of color. Forty-three of the students were randomly assigned to 1 of 2 clinical psychology research training programs, and 43 were randomly assigned to a control condition without training. The multicultural version of the training program emphasized the cultural context of psychology in all areas of training, whereas cultural context was de-emphasized in the monocultural version of the program. Although the cultural content of the 2 training programs was effectively manipulated as indicated by a fidelity check by an outside expert, there were no significant differences between the effects of the 2 programs on the outcomes measured in this study. The primary differences in this study were between students who did versus those who did not participate in a training program. Sixty-five percent of the students who completed the multicultural training program applied to graduate schools in psychology, compared with 47% of those who completed the monocultural training program, and 31% of those in the control group. Participation in summer research training programs also increased self-perceptions of multicultural competence.
ERIC Educational Resources Information Center
Bearden, Donna; Muller, Jim
1983-01-01
In addition to turtle graphics, the Logo programing language has list and text processing capabilities that open up opportunities for word games, language programs, word processing, and other applications. Provided are examples of these applications using both Apple and MIT Logo versions. Includes sample interactive programs. (JN)
1987-04-30
Ada (trademark) Compiler Validation Summary Report: Harris Corporation, HARRIS Ada Compiler, Version 1.0, Harris H1200 and H800 hosts; validation period 30 APR 1986 to 30 APR 1987; Information Systems and Technology Center, W-P AFB, OH; for the United States Government (Ada Joint Program Office).
Aviation Data Integration System
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak; Wang, Yao; Windrem, May; Patel, Hemil; Keller, Richard
2003-01-01
During the analysis of flight data and safety reports done in ASAP and FOQA programs, airline personnel are not able to access relevant aviation data for a variety of reasons. We have developed the Aviation Data Integration System (ADIS), a software system that provides integrated heterogeneous data to support safety analysis. Types of data available in ADIS include weather, D-ATIS, RVR, radar data, Jeppesen charts, and flight data. We developed three versions of ADIS to support airlines. The first version has been developed to support ASAP teams. A second version supports FOQA teams, and it integrates aviation data with flight data while keeping identification information inaccessible. Finally, we developed a prototype that demonstrates the integration of aviation data into flight data analysis programs. The initial feedback from airlines is that ADIS is very useful in FOQA and ASAP analysis.
Menu-Driven Solver Of Linear-Programming Problems
NASA Technical Reports Server (NTRS)
Viterna, L. A.; Ferencz, D.
1992-01-01
Program assists inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) computer program is full-featured LP analysis program. Solves plain linear-programming problems as well as more-complicated mixed-integer and pure-integer programs. Also contains efficient technique for solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
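As a hedged, generic illustration of the kind of problem such a solver handles (this uses SciPy, not ALPS or its APL2 source), the sketch below solves a small linear program; the objective and constraints are invented.

import numpy as np
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# linprog minimizes, so the objective is negated.
c = np.array([-3.0, -2.0])
A_ub = np.array([[1.0, 1.0],
                 [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("optimal x, y:", res.x, "maximum objective:", -res.fun)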
Version 3.0 of EMINERS - Economic Mineral Resource Simulator
Duval, Joseph S.
2012-01-01
Quantitative mineral resource assessment, as developed by the U.S. Geological Survey (USGS), consists of three parts: (1) development of grade and tonnage mineral deposit models; (2) delineation of tracts permissive for each deposit type; and (3) probabilistic estimation of the numbers of undiscovered deposits for each deposit type. The estimate of the number of undiscovered deposits at different levels of probability is the input to the EMINERS (Economic Mineral Resource Simulator) program. EMINERS uses a Monte Carlo statistical process to combine probabilistic estimates of undiscovered mineral deposits with models of mineral deposit grade and tonnage to estimate mineral resources. Version 3.0 of the EMINERS program is available as this USGS Open-File Report 2004-1344. Changes from version 2.0 include updating 87 grade and tonnage models, designing new templates to produce graphs showing cumulative distribution and summary tables, and disabling economic filters. The economic filters were disabled because embedded data for costs of labor and materials, mining techniques, and beneficiation methods are out of date. However, the cost algorithms used in the disabled economic filters are still in the program and available for reference for mining methods and milling techniques. The release notes included with this report give more details on changes in EMINERS over the years. EMINERS is written in C++ and depends upon the Microsoft Visual C++ 6.0 programming environment. The code depends heavily on the use of Microsoft Foundation Classes (MFC) for implementation of the Windows interface. The program works only on Microsoft Windows XP or newer personal computers. It does not work on Macintosh computers. For help in using the program in this report, see the "Quick-Start Guide for Version 3.0 of EMINERS-Economic Mineral Resource Simulator" (W.J. Bawiec and G.T. Spanski, 2012, USGS Open-File Report 2009-1057, linked at right). It demonstrates how to execute EMINERS software using default settings and existing deposit models.
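As a hedged, simplified illustration of the Monte Carlo combination step described above (not EMINERS itself, and with invented deposit-model parameters), the Python sketch below samples a number of undiscovered deposits from a discrete probability estimate and draws tonnage and grade from lognormal models to build a distribution of contained metal.

import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# Probabilistic estimate of the number of undiscovered deposits (invented):
# P(N = k) for k = 0..4.
k_values = np.array([0, 1, 2, 3, 4])
k_probs  = np.array([0.3, 0.3, 0.2, 0.15, 0.05])

# Invented lognormal grade-and-tonnage model: median tonnage 5 Mt,
# median Cu grade 0.8 percent, with the log-standard deviations below.
def sample_deposit(n):
    tonnage = rng.lognormal(mean=np.log(5e6), sigma=1.0, size=n)   # tonnes of ore
    grade   = rng.lognormal(mean=np.log(0.8), sigma=0.5, size=n)   # percent Cu
    return tonnage * grade / 100.0                                 # tonnes of Cu

totals = np.empty(n_trials)
n_deposits = rng.choice(k_values, size=n_trials, p=k_probs)
for i, n in enumerate(n_deposits):
    totals[i] = sample_deposit(n).sum() if n > 0 else 0.0

print("mean contained Cu (t):", totals.mean())
print("P10 / P50 / P90 (t):", np.percentile(totals, [10, 50, 90]))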
NASA Technical Reports Server (NTRS)
Purdon, David J.; Baruah, Pranab K.; Bussoletti, John E.; Epton, Michael A.; Massena, William A.; Nelson, Franklin D.; Tsurusaki, Kiyoharu
1990-01-01
The Maintenance Document Version 3.0 is a guide to the PAN AIR software system, a system which computes the subsonic or supersonic linear potential flow about a body of nearly arbitrary shape, using a higher order panel method. The document describes the overall system and each program module of the system. Sufficient detail is given for program maintenance, updating, and modification. It is assumed that the reader is familiar with programming and CRAY computer systems. The PAN AIR system was written in FORTRAN 4 language except for a few CAL language subroutines which exist in the PAN AIR library. Structured programming techniques were used to provide code documentation and maintainability. The operating systems accommodated are COS 1.11, COS 1.12, COS 1.13, and COS 1.14 on the CRAY 1S, 1M, and X-MP computing systems. The system is comprised of a data base management system, a program library, an execution control module, and nine separate FORTRAN technical modules. Each module calculates part of the posed PAN AIR problem. The data base manager is used to communicate between modules and within modules. The technical modules must be run in a prescribed fashion for each PAN AIR problem. In order to ease the problem of supplying the many JCL cards required to execute the modules, a set of CRAY procedures (PAPROCS) was created to automatically supply most of the JCL cards. Most of this document has not changed for Version 3.0. It now, however, strictly applies only to PAN AIR version 3.0. The major changes are: (1) additional sections covering the new FDP module (which calculates streamlines and offbody points); (2) a complete rewrite of the section on the MAG module; and (3) strict applicability to CRAY computing systems.
10 CFR 431.223 - Materials incorporated by reference.
Code of Federal Regulations, 2013 CFR
2013-01-01
... procedures incorporated by reference. (1) Environmental Protection Agency, “ENERGY STAR Program Requirements... Agency “ENERGY STAR Program Requirements for Traffic Signals,” Version 1.1, may be obtained from the...
10 CFR 431.223 - Materials incorporated by reference.
Code of Federal Regulations, 2014 CFR
2014-01-01
... procedures incorporated by reference. (1) Environmental Protection Agency, “ENERGY STAR Program Requirements... Agency “ENERGY STAR Program Requirements for Traffic Signals,” Version 1.1, may be obtained from the...
10 CFR 431.223 - Materials incorporated by reference.
Code of Federal Regulations, 2012 CFR
2012-01-01
... procedures incorporated by reference. (1) Environmental Protection Agency, “ENERGY STAR Program Requirements... Agency “ENERGY STAR Program Requirements for Traffic Signals,” Version 1.1, may be obtained from the...
Shimada, Takeshi; Ohori, Manami; Inagaki, Yusuke; Shimooka, Yuko; Sugimura, Naoya; Ishihara, Ikuyo; Yoshida, Tomotaka; Kobayashi, Masayoshi
2018-01-01
The individualized occupational therapy (IOT) program is a psychosocial program that we developed to facilitate proactive participation in treatment and improve cognitive functioning and other outcomes for inpatients with acute schizophrenia. The program consists of motivational interviewing, self-monitoring, individualized visits, handicraft activities, individualized psychoeducation, and discharge planning. This multicenter, open-labeled, blinded-endpoint, randomized controlled trial evaluated the impact of adding IOT to a group OT (GOT) program as usual on outcomes in recently hospitalized patients with schizophrenia in a Japanese psychiatric hospital setting, compared with GOT alone. Patients with schizophrenia were randomly assigned to the GOT+IOT group or the GOT alone group. Among 136 randomized patients, 129 were included in the intent-to-treat population: 66 in the GOT+IOT and 63 in the GOT alone groups. Outcomes were administered at baseline and at discharge or 3 months following hospitalization, including the Brief Assessment of Cognition in Schizophrenia Japanese version (BACS-J), the Schizophrenia Cognition Rating Scale Japanese version, the Social Functioning Scale Japanese version, the Global Assessment of Functioning scale, the Intrinsic Motivation Inventory Japanese version (IMI-J), the Morisky Medication Adherence Scale-8 (MMAS-8), the Positive and Negative Syndrome Scale (PANSS), and the Japanese version of the Client Satisfaction Questionnaire-8 (CSQ-8J). Results of linear mixed effects models indicated that the GOT+IOT group showed significant improvements in verbal memory (p < 0.01), working memory (p = 0.02), verbal fluency (p < 0.01), attention (p < 0.01), and composite score (p < 0.01) on the BACS-J; interest/enjoyment (p < 0.01), value/usefulness (p < 0.01), perceived choice (p < 0.01), and IMI-J total (p < 0.01) on the IMI-J; and MMAS-8 score (p < 0.01), compared with the GOT alone group. Patients in the GOT+IOT group demonstrated significant improvements on the CSQ-8J compared with the GOT alone group (p < 0.01). The present findings provide support for the feasibility of implementing an IOT program and for its effectiveness in improving cognitive impairment and other outcomes in patients with schizophrenia. PMID:29621261
PC Utilities: Small Programs with a Big Impact
ERIC Educational Resources Information Center
Baule, Steven
2004-01-01
Three types of utility programs are available on the Internet: commercial programs, which are like software packages purchased through a vendor or the Internet; shareware programs, which are developed by individuals and distributed via the Internet for a small fee to obtain the complete version of the product; and freeware programs, which are distributed via the Internet free of cost.…
GRASP92: a package for large-scale relativistic atomic structure calculations
NASA Astrophysics Data System (ADS)
Parpia, F. A.; Froese Fischer, C.; Grant, I. P.
2006-12-01
Program summaryTitle of program: GRASP92 Catalogue identifier: ADCU_v1_1 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADCU_v1_1 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: no Programming language used: Fortran Computer: IBM POWERstation 320H Operating system: IBM AIX 3.2.5+ RAM: 64M words No. of lines in distributed program, including test data, etc.: 65 224 No. of bytes in distributed program, including test data, etc.: 409 198 Distribution format: tar.gz Catalogue identifier of previous version: ADCU_v1_0 Journal reference of previous version: Comput. Phys. Comm. 94 (1996) 249 Does the new version supersede the previous version?: Yes Nature of problem: Prediction of atomic spectra—atomic energy levels, oscillator strengths, and radiative decay rates—using a 'fully relativistic' approach. Solution method: Atomic orbitals are assumed to be four-component spinor eigenstates of the angular momentum operator, j=l+s, and the parity operator Π=βπ. Configuration state functions (CSFs) are linear combinations of Slater determinants of atomic orbitals, and are simultaneous eigenfunctions of the atomic electronic angular momentum operator, J, and the atomic parity operator, P. Lists of CSFs are either explicitly prescribed by the user or generated from a set of reference CSFs, a set of subshells, and rules for deriving other CSFs from these. Approximate atomic state functions (ASFs) are linear combinations of CSFs. A variational functional may be constructed by combining expressions for the energies of one or more ASFs. Average level (AL) functionals are weighted sums of energies of all possible ASFs that may be constructed from a set of CSFs; the number of ASFs is then the same as the number, n, of CSFs. Optimal level (OL) functionals are weighted sums of energies of some subset of ASFs; the GRASP92 package is optimized for this latter class of functionals. The composition of an ASF in terms of CSFs sharing the same quantum numbers is determined using the configuration-interaction (CI) procedure that results upon varying the expansion coefficients to determine the extremum of a variational functional. Radial functions may be determined by numerically solving the multiconfiguration Dirac-Fock (MCDF) equations that result upon varying the orbital radial functions or some subset thereof so as to obtain an extremum of the variational functional. Radial wavefunctions may also be determined using a screened hydrogenic or Thomas-Fermi model, although these schemes generally provide initial estimates for MCDF self-consistent-field (SCF) calculations. Transition properties for pairs of ASFs are computed from matrix elements of multipole operators of the electromagnetic field. All matrix elements of CSFs are evaluated using the Racah algebra. Reasons for the new version: During recent studies using the general relativistic atomic structure package (GRASP92), several errors were found, some of which might have been present already in the earlier GRASP92 version (program ABJN_v1_0, Comput. Phys. Comm. 55 (1989) 425). These errors were reported and discussed by Froese Fischer, Gaigalas, and Ralchenko in a separate publication [C. Froese Fischer, G. Gaigalas, Y. Ralchenko, Comput. Phys. Comm. 175 (2006) 738-744].
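The CI step described in the solution method amounts to diagonalizing the Hamiltonian in the CSF basis: eigenvalues give the ASF energies and eigenvector columns give the CSF mixing coefficients. A minimal numerical sketch of that idea follows, with an arbitrary symmetric matrix standing in for the true Dirac-Coulomb matrix elements; this is not GRASP92 code.

```python
import numpy as np

# Toy Hamiltonian matrix in a basis of 4 CSFs (symmetric, arbitrary numbers
# standing in for the real matrix elements).
H = np.array([[-10.0,  0.3,  0.1,  0.0],
              [  0.3, -9.5,  0.2,  0.1],
              [  0.1,  0.2, -9.0,  0.3],
              [  0.0,  0.1,  0.3, -8.2]])

# CI step: eigenvalues are ASF energies, eigenvector columns are the
# expansion coefficients of each ASF in the CSF basis.
energies, coeffs = np.linalg.eigh(H)

for i, E in enumerate(energies):
    weights = coeffs[:, i] ** 2            # CSF composition of this ASF
    print(f"ASF {i}: E = {E:.4f}, leading CSF = {weights.argmax()}, "
          f"weight = {weights.max():.2f}")
```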
i-SVOC -- A simulation program for indoor SVOCs (Version 1.0)
Program i-SVOC estimates the emissions, transport, and sorption of semivolatile organic compounds (SVOCs) in the indoor environment as functions of time when a series of initial conditions is given. This program implements a framework for dynamic modeling of indoor SVOCs develope...
Hopson, Laura M.; Holleran Steiker, Lori K.
2010-01-01
Although there is a strong evidence base for effective substance abuse prevention programs for youth, there is a need to facilitate the implementation and evaluation of these programs in real world settings. This study evaluates the effectiveness of adapted versions of an evidence-based prevention program, keepin’ it REAL (kiR), with alternative school students. Programs are often adapted when used in schools and other community settings for a variety of reasons. The kiR adaptations, developed during an earlier phase of this study, were created to make the curriculum more appropriate for alternative high school youth. The adaptations were evaluated using a quasi-experimental design in which questionnaires were administered at pretest, posttest, and follow-up, and focus groups were conducted at posttest. MANOVA analyses indicate significantly reduced intentions to accept alcohol and, for younger participants, reduced alcohol use. Focus group data support the need for age appropriate prevention content. The authors discuss implications for practitioners implementing prevention programs in schools. PMID:20622971
A Preliminary Report on the PLATO V Terminal.
ERIC Educational Resources Information Center
Stifle, J. E.
This report is a preliminary description of a prototype of a second generation version of the PLATO IV (Programmed Logic for Automated Teaching Operations) student terminal. Development of a new terminal has been pursued with two objectives: to generate a more economical version of the PLATO IV terminal, and to expand capacities and performance of…
How to use the Stand-Damage Model: Version 2.0. (Computer program)
J.J. Colbert; George Racin
2001-01-01
The Stand-Damage Model simulates the growth of a forest stand, a spatially homogeneous collection of trees growing on a site. The model simulates growth from an initial inventory, user-prescribed management practices, and the effects of gypsy moth defoliation. Here we provide installation and operating instructions for Version 2.0.
ISS Expedition 42 Crew Profiles - Version 01
2014-11-14
Narrated program with biographical information about ISS Expedition 42 crewmembers Terry Virts, Samantha Cristoforetti, and Anton Shkaplerov. The program covers each crewmember's career, including childhood photographs, footage from previous missions, and interview sound bites.
Quick-start guide for version 3.0 of EMINERS - Economic Mineral Resource Simulator
Bawiec, Walter J.; Spanski, Gregory T.
2012-01-01
Quantitative mineral resource assessment, as developed by the U.S. Geological Survey (USGS), consists of three parts: (1) development of grade and tonnage mineral deposit models; (2) delineation of tracts permissive for each deposit type; and (3) probabilistic estimation of the numbers of undiscovered deposits for each deposit type (Singer and Menzie, 2010). The estimate of the number of undiscovered deposits at different levels of probability is the input to the EMINERS (Economic Mineral Resource Simulator) program. EMINERS uses a Monte Carlo statistical process to combine probabilistic estimates of undiscovered mineral deposits with models of mineral deposit grade and tonnage to estimate mineral resources. It is based upon a simulation program developed by Root and others (1992), who discussed many of the methods and algorithms of the program. Various versions of the original program (called "MARK3" and developed by David H. Root, William A. Scott, and Lawrence J. Drew of the USGS) have been published (Root, Scott, and Selner, 1996; Duval, 2000, 2012). The current version (3.0) of the EMINERS program is available as USGS Open-File Report 2004-1344 (Duval, 2012). Changes from version 2.0 include updating 87 grade and tonnage models, designing new templates to produce graphs showing cumulative distribution and summary tables, and disabling economic filters. The economic filters were disabled because embedded data for costs of labor and materials, mining techniques, and beneficiation methods are out of date. However, the cost algorithms used in the disabled economic filters are still in the program and available for reference for mining methods and milling techniques included in Camm (1991). EMINERS is written in C++ and depends upon the Microsoft Visual C++ 6.0 programming environment. The code depends heavily on the use of Microsoft Foundation Classes (MFC) for implementation of the Windows interface. The program works only on Microsoft Windows XP or newer personal computers. It does not work on Macintosh computers. This report demonstrates how to execute EMINERS software using default settings and existing deposit models. Many options are available when setting up the simulation. Information and explanations addressing these optional parameters can be found in the EMINERS Help files. Help files are available during execution of EMINERS by selecting EMINERS Help from the pull-down menu under Help on the EMINERS menu bar. There are four sections in this report. Part I describes the installation, setup, and application of the EMINERS program, and Part II illustrates how to interpret the text file that is produced. Part III describes the creation of tables and graphs by use of the provided Excel templates. Part IV summarizes grade and tonnage models used in version 3.0 of EMINERS.
Hua, Zhi-Gang; Lin, Yan; Yuan, Ya-Zhou; Yang, De-Chang; Wei, Wen; Guo, Feng-Biao
2015-07-01
In 2003, we developed an ab initio program, ZCURVE 1.0, to find genes in bacterial and archaeal genomes. In this work, we present the updated version (i.e. ZCURVE 3.0). In tests on 422 prokaryotic genomes, the average accuracy was 93.7% with the updated version, compared with 88.7% with the original version. Such results also demonstrate that ZCURVE 3.0 is comparable with Glimmer 3.02 and may provide complementary predictions to it. In fact, the joint application of the two programs generated better results by correctly finding more annotated genes while also containing fewer false-positive predictions. As an exclusive new function, ZCURVE 3.0 includes a post-processing program that can identify essential genes with high accuracy (generally >90%). We hope that ZCURVE 3.0, which also provides a web-based running mode, will receive wide use. The updated ZCURVE can be freely accessed from http://cefg.uestc.edu.cn/zcurve/ or http://tubic.tju.edu.cn/zcurveb/ without any restrictions. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
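For orientation, the Z-curve representation underlying the ZCURVE programs maps a DNA sequence onto three cumulative coordinates (purine versus pyrimidine, amino versus keto, and weak versus strong hydrogen bonding). The sketch below implements only that transform, not the ZCURVE 3.0 gene-finding or essential-gene code.

```python
def z_curve(seq):
    """Cumulative Z-curve coordinates of a DNA sequence.

    x: purines (A,G) minus pyrimidines (C,T)
    y: amino bases (A,C) minus keto bases (G,T)
    z: weak H-bond bases (A,T) minus strong H-bond bases (G,C)
    """
    a = c = g = t = 0
    xs, ys, zs = [], [], []
    for base in seq.upper():
        if base == "A": a += 1
        elif base == "C": c += 1
        elif base == "G": g += 1
        elif base == "T": t += 1
        xs.append((a + g) - (c + t))
        ys.append((a + c) - (g + t))
        zs.append((a + t) - (g + c))
    return xs, ys, zs

x, y, z = z_curve("ATGGCGTAA")
print(list(zip(x, y, z)))
```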
NASA Technical Reports Server (NTRS)
Wright, Jeffrey; Thakur, Siddharth
2006-01-01
Loci-STREAM is an evolving computational fluid dynamics (CFD) software tool for simulating possibly chemically reacting, possibly unsteady flows in diverse settings, including rocket engines, turbomachines, oil refineries, etc. Loci-STREAM implements a pressure- based flow-solving algorithm that utilizes unstructured grids. (The benefit of low memory usage by pressure-based algorithms is well recognized by experts in the field.) The algorithm is robust for flows at all speeds from zero to hypersonic. The flexibility of arbitrary polyhedral grids enables accurate, efficient simulation of flows in complex geometries, including those of plume-impingement problems. The present version - Loci-STREAM version 0.9 - includes an interface with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library for access to enhanced linear-equation-solving programs therein that accelerate convergence toward a solution. The name "Loci" reflects the creation of this software within the Loci computational framework, which was developed at Mississippi State University for the primary purpose of simplifying the writing of complex multidisciplinary application programs to run in distributed-memory computing environments including clusters of personal computers. Loci has been designed to relieve application programmers of the details of programming for distributed-memory computers.
Wright-Berryman, Jennifer L; Salyers, Michelle P; O'Halloran, James P; Kemp, Aaron S; Mueser, Kim T; Diazoni, Amanda J
2013-12-01
To explore mental health consumer and provider responses to a computerized version of the Illness Management and Recovery (IMR) program. Semistructured interviews were conducted to gather data from 6 providers and 12 consumers who participated in a computerized prototype of the IMR program. An inductive-consensus-based approach was used to analyze the interview responses. Qualitative analysis revealed consumers perceived various personal benefits and ease of use afforded by the new technology platform. Consumers also highly valued provider assistance and offered several suggestions to improve the program. The largest perceived barriers to future implementation were lack of computer skills and access to computers. Similarly, IMR providers commented on its ease and convenience, and the reduction of time intensive material preparation. Providers also expressed that the use of technology creates more options for the consumer to access treatment. The technology was acceptable, easy to use, and well-liked by consumers and providers. Clinician assistance with technology was viewed as helpful to get clients started with the program, as lack of computer skills and access to computers was a concern. Access to materials between sessions appears to be desired; however, given perceived barriers of computer skills and computer access, additional supports may be needed for consumers to achieve full benefits of a computerized version of IMR. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Simulation of modern climate with the new version of the INM RAS climate model
NASA Astrophysics Data System (ADS)
Volodin, E. M.; Mortikov, E. V.; Kostrykin, S. V.; Galin, V. Ya.; Lykosov, V. N.; Gritsun, A. S.; Diansky, N. A.; Gusev, A. V.; Yakovlev, N. G.
2017-03-01
The INMCM5.0 numerical model of the Earth's climate system is presented, which is an evolution from the previous version, INMCM4.0. A higher vertical resolution for the stratosphere is applied in the atmospheric block. We also raised the upper boundary of the computational domain, added an aerosol block, modified the parameterization of clouds and condensation, and increased the horizontal resolution in the ocean block. The program implementation of the model was also updated. We consider the simulation of the current climate using the new version of the model. Attention is focused on reducing systematic errors as compared to the previous version, reproducing phenomena that could not be simulated correctly in the previous version, and on modeling problems that remain unresolved.
Volumetric CT-images improve testing of radiological image interpretation skills.
Ravesloot, Cécile J; van der Schaaf, Marieke F; van Schaik, Jan P J; ten Cate, Olle Th J; van der Gijp, Anouk; Mol, Christian P; Vincken, Koen L
2015-05-01
Current radiology practice increasingly involves interpretation of volumetric data sets. In contrast, most radiology tests still contain only 2D images. We introduced a new testing tool that allows for stack viewing of volumetric images in our undergraduate radiology program. We hypothesized that tests with volumetric CT-images enhance test quality, in comparison with traditional completely 2D image-based tests, because they might better reflect required skills for clinical practice. Two groups of medical students (n=139; n=143), trained with 2D and volumetric CT-images, took a digital radiology test in two versions (A and B), each containing both 2D and volumetric CT-image questions. In a questionnaire, they were asked to comment on the representativeness for clinical practice, difficulty and user-friendliness of the test questions and testing program. Students' test scores and reliabilities (measured with Cronbach's alpha) of the 2D and volumetric CT-image tests were compared. Estimated reliabilities (Cronbach's alphas) were higher for volumetric CT-image scores (version A: .51 and version B: .54) than for 2D CT-image scores (version A: .24 and version B: .37). Participants found volumetric CT-image tests more representative of clinical practice, and considered them to be less difficult than the 2D image questions. However, in one version (A), volumetric CT-image scores (M 80.9, SD 14.8) were significantly lower than 2D CT-image scores (M 88.4, SD 10.4) (p<.001). The volumetric CT-image testing program was considered user-friendly. This study shows that volumetric image questions can be successfully integrated into students' radiology testing. Results suggest that the inclusion of volumetric CT-images might improve the quality of radiology tests by positively impacting perceived representativeness for clinical practice and increasing reliability of the test. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
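The reliability statistic quoted above, Cronbach's alpha, is computed from the item variances and the variance of the total score. A small sketch with made-up item scores, not the study's data, is shown below.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (examinees x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical 0/1 item scores for 6 examinees on 4 items.
demo = [[1, 1, 0, 1],
        [1, 0, 0, 1],
        [0, 0, 0, 0],
        [1, 1, 1, 1],
        [1, 1, 0, 0],
        [0, 1, 0, 1]]
print(f"alpha = {cronbach_alpha(demo):.2f}")
```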
Translation and validation of the Malay version of the Stroke Knowledge Test.
Sowtali, Siti Noorkhairina; Yusoff, Dariah Mohd; Harith, Sakinah; Mohamed, Monniaty
2016-04-01
To date, there is a lack of published studies on assessment tools to evaluate the effectiveness of stroke education programs. This study developed and validated the Malay language version of the Stroke Knowledge Test research instrument. The study involved translation, validity, and reliability phases. The instrument underwent backward and forward translation of the English version into the Malay language. Nine experts reviewed the content for consistency, clarity, difficulty, and suitability for inclusion. Perceived usefulness and utilization were obtained from experts' opinions. Later, face validity assessment was conducted with 10 stroke patients to determine appropriateness of the sentences and grammar used. A pilot study was conducted with 41 stroke patients to carry out the item analysis and determine the reliability of the translated instrument using the Kuder-Richardson 20 (Cronbach's alpha) coefficient. The final Malay version of the Stroke Knowledge Test included 20 items with good content coverage, acceptable item properties, and positive expert review ratings. Psychometric investigation suggests that the Malay version of the Stroke Knowledge Test had moderate reliability, with a Kuder-Richardson 20 (Cronbach's alpha) value of 0.58. Improvement is required for Stroke Knowledge Test items with unacceptable difficulty indices. Overall, the average ratings of perceived usefulness and perceived utility of the instrument were both 72.7%, suggesting that reviewers were likely to use the instrument in their facilities. The Malay version of the Stroke Knowledge Test was a valid and reliable tool to assess educational needs and to evaluate stroke knowledge among participants of group-based stroke education programs in Malaysia.
Fleming, Charles B; Mason, W Alex; Haggerty, Kevin P; Thompson, Ronald W; Fernandez, Kate; Casey-Goldstein, Mary; Oats, Robert G
2015-04-01
Engaging and retaining participants are crucial to achieving adequate implementation of parenting interventions designed to prevent problem behaviors among children and adolescents. This study examined predictors of engagement and retention in a group-based family intervention across two versions of the program: a standard version requiring only parent attendance for six sessions and an adapted version with two additional sessions that required attendance by the son or daughter. Families included a parent and an eighth grader who attended one of five high-poverty schools in an urban Pacific Northwest school district. The adapted version of the intervention had a higher rate of engagement than the standard version, a difference that was statistically significant after adjusting for other variables assessed at enrollment in the study. Higher household income and parent education, younger student age, and poorer affective quality in the parent-child relationship predicted greater likelihood of initial attendance. In the adapted version of the intervention, parents of boys were more likely to engage with the program than those of girls. The variables considered did not strongly predict retention, although retention was higher among parents of boys. Retention did not significantly differ between conditions. Asking for child attendance at workshops may have increased engagement in the intervention, while findings for other predictors of attendance point to the need for added efforts to recruit families who have less socioeconomic resources, as well as families who perceive they have less need for services.
Fleming, Charles B.; Mason, W. Alex; Haggerty, Kevin P.; Thompson, Ronald W.; Fernandez, Kate; Casey-Goldstein, Mary; Oats, Robert G.
2015-01-01
Engaging and retaining participants are crucial to achieving adequate implementation of parenting interventions designed to prevent problem behaviors among children and adolescents. This study examined predictors of engagement and retention in a group-based family intervention across two versions of the program: a standard version requiring only parent attendance for six sessions and an adapted version with two additional sessions that required attendance by the son or daughter. Families included a parent and an eighth grader who attended one of five high-poverty schools in an urban Pacific Northwest school district. The adapted version of the intervention had a higher rate of engagement than the standard version, a difference that was statistically significant after adjusting for other variables assessed at enrollment in the study. Higher household income and parent education, younger student age, and poorer affective quality in the parent-child relationship predicted greater likelihood of initial attendance. In the adapted version of the intervention, parents of boys were more likely to engage with the program than those of girls. The variables considered did not strongly predict retention, although retention was higher among parents of boys. Retention did not significantly differ between conditions. Asking for child attendance at workshops may have increased engagement in the intervention, while findings for other predictors of attendance point to the need for added efforts to recruit families who have less socioeconomic resources, as well as families who perceive they have less need for services. PMID:25656381
Modal analysis and dynamic stresses for acoustically excited shuttle insulation tiles
NASA Technical Reports Server (NTRS)
Ojalvo, I. U.; Ogilvie, P. L.
1975-01-01
Improvements and extensions to the RESIST computer program developed for determining the normalized modal stress response of shuttle insulation tiles are described. The new version of RESIST can accommodate primary structure panels with closed-cell stringers, in addition to the capability for treating open-cell stringers. In addition, the present version of RESIST numerically solves vibration problems several times faster than its predecessor. A new digital computer program, titled ARREST (Acoustic Response of Reusable Shuttle Tiles), is also described. Starting with modal information contained on output tapes from RESIST computer runs, ARREST determines RMS stresses, deflections and accelerations of shuttle panels with reusable surface insulation tiles. Both programs are applicable to stringer stiffened structural panels with or without reusable surface insulation tiles.
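For context on the RMS response quantities mentioned above, a classic single-mode estimate for the RMS acceleration of a lightly damped mode under broadband random (acoustic) excitation is Miles' equation. The sketch below uses it with illustrative numbers only; it is not the RESIST or ARREST algorithm.

```python
import math

def miles_grms(f_n_hz, zeta, input_psd_g2_per_hz):
    """Miles' equation: RMS acceleration (in g) of a single lightly damped
    mode under broadband random base excitation.

    f_n_hz             : natural frequency [Hz]
    zeta               : critical damping ratio
    input_psd_g2_per_hz: input acceleration PSD at f_n [g^2/Hz]
    """
    Q = 1.0 / (2.0 * zeta)                     # amplification factor
    return math.sqrt(math.pi / 2.0 * f_n_hz * Q * input_psd_g2_per_hz)

# Illustrative numbers only: 120 Hz panel mode, 2% damping, 0.05 g^2/Hz input.
print(f"RMS acceleration ~ {miles_grms(120.0, 0.02, 0.05):.1f} g")
```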
User documentation for the FHWA Carpool Matching Program (second edition)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1975-01-01
This document provides persons interested in computerized carpool/buspool matching programs a complete description of the user documentation for the FHWA Carpool Matching Program. The FHWA program is written in American National Standard COBOL and thus should be readily transferable to environments other than the IBM 360/65 (OS) under which it has been developed and tested. The program has a compile-time core requirement of 110K and a maximum execution-time core requirement of 110K. While considerable effort has been made to test the program in several applications and to achieve accuracy and completeness in the program and supporting documentation, the FHWA cannot guarantee the proper operation of this program by any user nor can it assume liability for any damage, loss, or inconvenience resulting from the operation of this program or the results obtained thereby. This present version of the carpool matching program represents the latest version of the first generation of an ongoing multi-phase process of improvements and refinements. The ultimate goal is an effective carpool and transit information system that will produce individualized information covering not only carpooling opportunities, but also transit routing, scheduling, and other identifying information for the commuter. (MCW)
Word Processors: A Look at Four Popular Programs.
ERIC Educational Resources Information Center
Press, Larry
1980-01-01
Described are types of programs used for processing text (editors, print formatters, and word processors), followed by a comparison of four word-processing packages: Auto Scribe, Electric Pencil, Magic Wand, and Word Star. With the exception of Auto Scribe, all programs reviewed are CP/M versions. (KC)
Text only version of the National Estuary Program Story Map
Since 1987, the EPA National Estuary Program (NEP) has made a unique and lasting contribution to protecting and restoring our nation's estuaries, delivering environmental and public health benefits to the American people.
An Interactive Version of MULR04 With Enhanced Graphic Capability
ERIC Educational Resources Information Center
Burkholder, Joel H.
1978-01-01
An existing computer program for computing multiple regression analyses is made interactive in order to alleviate core storage requirements. Also, some improvements in the graphics aspects of the program are included. (JKS)
User's Guide for the Interactive Scheduling Program : Preliminary Calendar Version
DOT National Transportation Integrated Search
1978-08-01
The Office of Transportation Management of the Urban Mass Transportation Administration (UMTA), in conjunction with the Transportation Systems Center (TSC), designed and developed the Interactive Scheduling Program (ISP) to assist rail-transit operat...
NASA Astrophysics Data System (ADS)
Deslippe, Jack; Samsonidze, Georgy; Strubbe, David A.; Jain, Manish; Cohen, Marvin L.; Louie, Steven G.
2012-06-01
BerkeleyGW is a massively parallel computational package for electron excited-state properties that is based on the many-body perturbation theory employing the ab initio GW and GW plus Bethe-Salpeter equation methodology. It can be used in conjunction with many density-functional theory codes for ground-state properties, including PARATEC, PARSEC, Quantum ESPRESSO, SIESTA, and Octopus. The package can be used to compute the electronic and optical properties of a wide variety of material systems from bulk semiconductors and metals to nanostructured materials and molecules. The package scales to 10 000s of CPUs and can be used to study systems containing up to 100s of atoms. Program summaryProgram title: BerkeleyGW Catalogue identifier: AELG_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Open source BSD License. See code for licensing details. No. of lines in distributed program, including test data, etc.: 576 540 No. of bytes in distributed program, including test data, etc.: 110 608 809 Distribution format: tar.gz Programming language: Fortran 90, C, C++, Python, Perl, BASH Computer: Linux/UNIX workstations or clusters Operating system: Tested on a variety of Linux distributions in parallel and serial as well as AIX and Mac OSX RAM: (50-2000) MB per CPU (Highly dependent on system size) Classification: 7.2, 7.3, 16.2, 18 External routines: BLAS, LAPACK, FFTW, ScaLAPACK (optional), MPI (optional). All available under open-source licenses. Nature of problem: The excited state properties of materials involve the addition or subtraction of electrons as well as the optical excitations of electron-hole pairs. The excited particles interact strongly with other electrons in a material system. This interaction affects the electronic energies, wavefunctions and lifetimes. It is well known that ground-state theories, such as standard methods based on density-functional theory, fail to correctly capture this physics. Solution method: We construct and solve the Dyson's equation for the quasiparticle energies and wavefunctions within the GW approximation for the electron self-energy. We additionally construct and solve the Bethe-Salpeter equation for the correlated electron-hole (exciton) wavefunctions and excitation energies. Restrictions: The material size is limited in practice by the computational resources available. Materials with up to 500 atoms per periodic cell can be studied on large HPCs. Additional comments: The distribution file for this program is approximately 110 Mbytes and therefore is not delivered directly when download or E-mail is requested. Instead a html file giving details of how the program can be obtained is sent. Running time: 1-1000 minutes (depending greatly on system size and processor number).
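Conceptually, the quasiparticle step corrects a mean-field eigenvalue by the difference between the self-energy and the exchange-correlation potential, often in a linearized form with a renormalization factor Z. The sketch below shows only that textbook update with made-up matrix elements; it does not reflect BerkeleyGW's interfaces or numerical details.

```python
def qp_energy(eps_mf, sigma_at_eps, dsigma_dE, vxc):
    """Linearized quasiparticle correction (conceptual sketch only).

    eps_mf       : mean-field (e.g. DFT) eigenvalue [eV]
    sigma_at_eps : diagonal self-energy matrix element Re Sigma(eps_mf) [eV]
    dsigma_dE    : energy derivative of Re Sigma at eps_mf (dimensionless)
    vxc          : exchange-correlation potential matrix element [eV]
    """
    Z = 1.0 / (1.0 - dsigma_dE)                 # renormalization factor
    return eps_mf + Z * (sigma_at_eps - vxc)

# Made-up matrix elements for one state (eV).
e_qp = qp_energy(eps_mf=-5.0, sigma_at_eps=-11.2, dsigma_dE=-0.25, vxc=-10.1)
print(f"E_QP ~ {e_qp:.2f} eV")
```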
TRASYS - THERMAL RADIATION ANALYZER SYSTEM (DEC VAX VERSION WITH NASADIG)
NASA Technical Reports Server (NTRS)
Anderson, G. E.
1994-01-01
The Thermal Radiation Analyzer System, TRASYS, is a computer software system with generalized capability to solve the radiation related aspects of thermal analysis problems. TRASYS computes the total thermal radiation environment for a spacecraft in orbit. The software calculates internode radiation interchange data as well as incident and absorbed heat rate data originating from environmental radiant heat sources. TRASYS provides data of both types in a format directly usable by such thermal analyzer programs as SINDA/FLUINT (available from COSMIC, program number MSC-21528). One primary feature of TRASYS is that it allows users to write their own driver programs to organize and direct the preprocessor and processor library routines in solving specific thermal radiation problems. The preprocessor first reads and converts the user's geometry input data into the form used by the processor library routines. Then, the preprocessor accepts the user's driving logic, written in the TRASYS modified FORTRAN language. In many cases, the user has a choice of routines to solve a given problem. Users may also provide their own routines where desirable. In particular, the user may write output routines to provide for an interface between TRASYS and any thermal analyzer program using the R-C network concept. Input to the TRASYS program consists of Options and Edit data, Model data, and Logic Flow and Operations data. Options and Edit data provide for basic program control and user edit capability. The Model data describe the problem in terms of geometry and other properties. This information includes surface geometry data, documentation data, nodal data, block coordinate system data, form factor data, and flux data. Logic Flow and Operations data house the user's driver logic, including the sequence of subroutine calls and the subroutine library. Output from TRASYS consists of two basic types of data: internode radiation interchange data, and incident and absorbed heat rate data. The flexible structure of TRASYS allows considerable freedom in the definition and choice of solution method for a thermal radiation problem. The program's flexible structure has also allowed TRASYS to retain the same basic input structure as the authors update it in order to keep up with changing requirements. Among its other important features are the following: 1) up to 3200 node problem size capability with shadowing by intervening opaque or semi-transparent surfaces; 2) choice of diffuse, specular, or diffuse/specular radiant interchange solutions; 3) a restart capability that minimizes recomputing; 4) macroinstructions that automatically provide the executive logic for orbit generation that optimizes the use of previously completed computations; 5) a time variable geometry package that provides automatic pointing of the various parts of an articulated spacecraft and an automatic look-back feature that eliminates redundant form factor calculations; 6) capability to specify submodel names to identify sets of surfaces or components as an entity; and 7) subroutines to perform functions which save and recall the internodal and/or space form factors in subsequent steps for nodes with fixed geometry during a variable geometry run. There are two machine versions of TRASYS v27: a DEC VAX version and a Cray UNICOS version. Both versions require installation of the NASADIG library (MSC-21801 for DEC VAX or COS-10049 for CRAY), which is available from COSMIC either separately or bundled with TRASYS. 
The NASADIG (NASA Device Independent Graphics Library) plot package provides a pictorial representation of input geometry, orbital/orientation parameters, and heating rate output as a function of time. NASADIG supports Tektronix terminals. The CRAY version of TRASYS v27 is written in FORTRAN 77 for batch or interactive execution and has been implemented on CRAY X-MP and CRAY Y-MP series computers running UNICOS. The standard distribution medium for MSC-21959 (CRAY version without NASADIG) is a 1600 BPI 9-track magnetic tape in UNIX tar format. The standard distribution medium for COS-10040 (CRAY version with NASADIG) is a set of two 6250 BPI 9-track magnetic tapes in UNIX tar format. Alternate distribution media and formats are available upon request. The DEC VAX version of TRASYS v27 is written in FORTRAN 77 for batch execution (only the plotting driver program is interactive) and has been implemented on a DEC VAX 8650 computer under VMS. Since the source codes for MSC-21030 and COS-10026 are in VAX/VMS text library files and DEC Command Language files, COSMIC will only provide these programs in the following formats: MSC-21030, TRASYS (DEC VAX version without NASADIG) is available on a 1600 BPI 9-track magnetic tape in VAX BACKUP format (standard distribution medium) or in VAX BACKUP format on a TK50 tape cartridge; COS-10026, TRASYS (DEC VAX version with NASADIG), is available in VAX BACKUP format on a set of three 6250 BPI 9-track magnetic tapes (standard distribution medium) or a set of three TK50 tape cartridges in VAX BACKUP format. TRASYS was last updated in 1993.
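The internode radiation interchange data that TRASYS produces can be thought of as gray-body exchange factors built from geometric form factors and surface emissivities. The sketch below uses the generic Gebhart formulation for a small enclosure; it is a textbook illustration, not TRASYS's routines or file formats.

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def gebhart_exchange(F, eps, area, T):
    """Gray-body radiative exchange via Gebhart factors (textbook sketch).

    F    : (n x n) geometric form factor matrix, row sums ~ 1 for an enclosure
    eps  : surface emissivities
    area : surface areas [m^2]
    T    : surface temperatures [K]
    Returns the net radiated power of each surface [W].
    """
    F, eps, area, T = map(np.asarray, (F, eps, area, T))
    n = len(eps)
    # B[i, j]: fraction of energy emitted by i that is absorbed by j
    B = np.linalg.solve(np.eye(n) - F * (1.0 - eps), F * eps)
    emit = eps * area * SIGMA * T**4            # power emitted by each surface
    q_net = emit - B.T @ emit                   # emitted minus absorbed
    return q_net

# Two parallel-plate-like surfaces (illustrative numbers only).
F = [[0.0, 1.0],
     [1.0, 0.0]]
print(gebhart_exchange(F, eps=[0.8, 0.8], area=[1.0, 1.0], T=[350.0, 300.0]))
```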
TRASYS - THERMAL RADIATION ANALYZER SYSTEM (CRAY VERSION WITH NASADIG)
NASA Technical Reports Server (NTRS)
Anderson, G. E.
1994-01-01
The Thermal Radiation Analyzer System, TRASYS, is a computer software system with generalized capability to solve the radiation related aspects of thermal analysis problems. TRASYS computes the total thermal radiation environment for a spacecraft in orbit. The software calculates internode radiation interchange data as well as incident and absorbed heat rate data originating from environmental radiant heat sources. TRASYS provides data of both types in a format directly usable by such thermal analyzer programs as SINDA/FLUINT (available from COSMIC, program number MSC-21528). One primary feature of TRASYS is that it allows users to write their own driver programs to organize and direct the preprocessor and processor library routines in solving specific thermal radiation problems. The preprocessor first reads and converts the user's geometry input data into the form used by the processor library routines. Then, the preprocessor accepts the user's driving logic, written in the TRASYS modified FORTRAN language. In many cases, the user has a choice of routines to solve a given problem. Users may also provide their own routines where desirable. In particular, the user may write output routines to provide for an interface between TRASYS and any thermal analyzer program using the R-C network concept. Input to the TRASYS program consists of Options and Edit data, Model data, and Logic Flow and Operations data. Options and Edit data provide for basic program control and user edit capability. The Model data describe the problem in terms of geometry and other properties. This information includes surface geometry data, documentation data, nodal data, block coordinate system data, form factor data, and flux data. Logic Flow and Operations data house the user's driver logic, including the sequence of subroutine calls and the subroutine library. Output from TRASYS consists of two basic types of data: internode radiation interchange data, and incident and absorbed heat rate data. The flexible structure of TRASYS allows considerable freedom in the definition and choice of solution method for a thermal radiation problem. The program's flexible structure has also allowed TRASYS to retain the same basic input structure as the authors update it in order to keep up with changing requirements. Among its other important features are the following: 1) up to 3200 node problem size capability with shadowing by intervening opaque or semi-transparent surfaces; 2) choice of diffuse, specular, or diffuse/specular radiant interchange solutions; 3) a restart capability that minimizes recomputing; 4) macroinstructions that automatically provide the executive logic for orbit generation that optimizes the use of previously completed computations; 5) a time variable geometry package that provides automatic pointing of the various parts of an articulated spacecraft and an automatic look-back feature that eliminates redundant form factor calculations; 6) capability to specify submodel names to identify sets of surfaces or components as an entity; and 7) subroutines to perform functions which save and recall the internodal and/or space form factors in subsequent steps for nodes with fixed geometry during a variable geometry run. There are two machine versions of TRASYS v27: a DEC VAX version and a Cray UNICOS version. Both versions require installation of the NASADIG library (MSC-21801 for DEC VAX or COS-10049 for CRAY), which is available from COSMIC either separately or bundled with TRASYS. 
The NASADIG (NASA Device Independent Graphics Library) plot package provides a pictorial representation of input geometry, orbital/orientation parameters, and heating rate output as a function of time. NASADIG supports Tektronix terminals. The CRAY version of TRASYS v27 is written in FORTRAN 77 for batch or interactive execution and has been implemented on CRAY X-MP and CRAY Y-MP series computers running UNICOS. The standard distribution medium for MSC-21959 (CRAY version without NASADIG) is a 1600 BPI 9-track magnetic tape in UNIX tar format. The standard distribution medium for COS-10040 (CRAY version with NASADIG) is a set of two 6250 BPI 9-track magnetic tapes in UNIX tar format. Alternate distribution media and formats are available upon request. The DEC VAX version of TRASYS v27 is written in FORTRAN 77 for batch execution (only the plotting driver program is interactive) and has been implemented on a DEC VAX 8650 computer under VMS. Since the source codes for MSC-21030 and COS-10026 are in VAX/VMS text library files and DEC Command Language files, COSMIC will only provide these programs in the following formats: MSC-21030, TRASYS (DEC VAX version without NASADIG) is available on a 1600 BPI 9-track magnetic tape in VAX BACKUP format (standard distribution medium) or in VAX BACKUP format on a TK50 tape cartridge; COS-10026, TRASYS (DEC VAX version with NASADIG), is available in VAX BACKUP format on a set of three 6250 BPI 9-track magnetic tapes (standard distribution medium) or a set of three TK50 tape cartridges in VAX BACKUP format. TRASYS was last updated in 1993.
TRASYS - THERMAL RADIATION ANALYZER SYSTEM (DEC VAX VERSION WITHOUT NASADIG)
NASA Technical Reports Server (NTRS)
Vogt, R. A.
1994-01-01
The Thermal Radiation Analyzer System, TRASYS, is a computer software system with generalized capability to solve the radiation related aspects of thermal analysis problems. TRASYS computes the total thermal radiation environment for a spacecraft in orbit. The software calculates internode radiation interchange data as well as incident and absorbed heat rate data originating from environmental radiant heat sources. TRASYS provides data of both types in a format directly usable by such thermal analyzer programs as SINDA/FLUINT (available from COSMIC, program number MSC-21528). One primary feature of TRASYS is that it allows users to write their own driver programs to organize and direct the preprocessor and processor library routines in solving specific thermal radiation problems. The preprocessor first reads and converts the user's geometry input data into the form used by the processor library routines. Then, the preprocessor accepts the user's driving logic, written in the TRASYS modified FORTRAN language. In many cases, the user has a choice of routines to solve a given problem. Users may also provide their own routines where desirable. In particular, the user may write output routines to provide for an interface between TRASYS and any thermal analyzer program using the R-C network concept. Input to the TRASYS program consists of Options and Edit data, Model data, and Logic Flow and Operations data. Options and Edit data provide for basic program control and user edit capability. The Model data describe the problem in terms of geometry and other properties. This information includes surface geometry data, documentation data, nodal data, block coordinate system data, form factor data, and flux data. Logic Flow and Operations data house the user's driver logic, including the sequence of subroutine calls and the subroutine library. Output from TRASYS consists of two basic types of data: internode radiation interchange data, and incident and absorbed heat rate data. The flexible structure of TRASYS allows considerable freedom in the definition and choice of solution method for a thermal radiation problem. The program's flexible structure has also allowed TRASYS to retain the same basic input structure as the authors update it in order to keep up with changing requirements. Among its other important features are the following: 1) up to 3200 node problem size capability with shadowing by intervening opaque or semi-transparent surfaces; 2) choice of diffuse, specular, or diffuse/specular radiant interchange solutions; 3) a restart capability that minimizes recomputing; 4) macroinstructions that automatically provide the executive logic for orbit generation that optimizes the use of previously completed computations; 5) a time variable geometry package that provides automatic pointing of the various parts of an articulated spacecraft and an automatic look-back feature that eliminates redundant form factor calculations; 6) capability to specify submodel names to identify sets of surfaces or components as an entity; and 7) subroutines to perform functions which save and recall the internodal and/or space form factors in subsequent steps for nodes with fixed geometry during a variable geometry run. There are two machine versions of TRASYS v27: a DEC VAX version and a Cray UNICOS version. Both versions require installation of the NASADIG library (MSC-21801 for DEC VAX or COS-10049 for CRAY), which is available from COSMIC either separately or bundled with TRASYS. 
The NASADIG (NASA Device Independent Graphics Library) plot package provides a pictorial representation of input geometry, orbital/orientation parameters, and heating rate output as a function of time. NASADIG supports Tektronix terminals. The CRAY version of TRASYS v27 is written in FORTRAN 77 for batch or interactive execution and has been implemented on CRAY X-MP and CRAY Y-MP series computers running UNICOS. The standard distribution medium for MSC-21959 (CRAY version without NASADIG) is a 1600 BPI 9-track magnetic tape in UNIX tar format. The standard distribution medium for COS-10040 (CRAY version with NASADIG) is a set of two 6250 BPI 9-track magnetic tapes in UNIX tar format. Alternate distribution media and formats are available upon request. The DEC VAX version of TRASYS v27 is written in FORTRAN 77 for batch execution (only the plotting driver program is interactive) and has been implemented on a DEC VAX 8650 computer under VMS. Since the source codes for MSC-21030 and COS-10026 are in VAX/VMS text library files and DEC Command Language files, COSMIC will only provide these programs in the following formats: MSC-21030, TRASYS (DEC VAX version without NASADIG) is available on a 1600 BPI 9-track magnetic tape in VAX BACKUP format (standard distribution medium) or in VAX BACKUP format on a TK50 tape cartridge; COS-10026, TRASYS (DEC VAX version with NASADIG), is available in VAX BACKUP format on a set of three 6250 BPI 9-track magnetic tapes (standard distribution medium) or a set of three TK50 tape cartridges in VAX BACKUP format. TRASYS was last updated in 1993.
SEAWAT Version 4: A Computer Program for Simulation of Multi-Species Solute and Heat Transport
Langevin, Christian D.; Thorne, Daniel T.; Dausman, Alyssa M.; Sukop, Michael C.; Guo, Weixing
2008-01-01
The SEAWAT program is a coupled version of MODFLOW and MT3DMS designed to simulate three-dimensional, variable-density, saturated ground-water flow. Flexible equations were added to the program to allow fluid density to be calculated as a function of one or more MT3DMS species. Fluid density may also be calculated as a function of fluid pressure. The effect of fluid viscosity variations on ground-water flow was included as an option. Fluid viscosity can be calculated as a function of one or more MT3DMS species, and the program includes additional functions for representing the dependence on temperature. Although MT3DMS and SEAWAT are not explicitly designed to simulate heat transport, temperature can be simulated as one of the species by entering appropriate transport coefficients. For example, the process of heat conduction is mathematically analogous to Fickian diffusion. Heat conduction can be represented in SEAWAT by assigning a thermal diffusivity for the temperature species (instead of a molecular diffusion coefficient for a solute species). Heat exchange with the solid matrix can be treated in a similar manner by using the mathematically equivalent process of solute sorption. By combining flexible equations for fluid density and viscosity with multi-species transport, SEAWAT Version 4 represents variable-density ground-water flow coupled with multi-species solute and heat transport. SEAWAT Version 4 is based on MODFLOW-2000 and MT3DMS and retains all of the functionality of SEAWAT-2000. SEAWAT Version 4 also supports new simulation options for coupling flow and transport, and for representing constant-head boundaries. In previous versions of SEAWAT, the flow equation was solved for every transport timestep, regardless of whether or not there was a large change in fluid density. A new option was implemented in SEAWAT Version 4 that allows users to control how often the flow field is updated. New options were also implemented for representing constant-head boundaries with the Time-Variant Constant-Head (CHD) Package. These options allow for increased flexibility when using CHD flow boundaries with the zero-dispersive flux solute boundaries implemented by MT3DMS at constant-head cells. This report contains revised input instructions for the MT3DMS Dispersion (DSP) Package, Variable-Density Flow (VDF) Package, Viscosity (VSC) Package, and CHD Package. The report concludes with seven cases of an example problem designed to highlight many of the new features.
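The coupling described above reduces, for the density part, to a linear equation of state in the simulated species, while the heat-conduction analogy replaces the molecular diffusion coefficient of the temperature "species" with a bulk thermal diffusivity. The sketch below illustrates both relations; the coefficient values are representative assumptions, not SEAWAT defaults.

```python
def fluid_density(conc, temp, rho0=1000.0, conc0=0.0, temp0=25.0,
                  drho_dc=0.7, drho_dt=-0.375):
    """Linear equation of state: density as a function of solute
    concentration [kg/m^3] and temperature [deg C].
    Slope values here are illustrative, not SEAWAT input defaults."""
    return rho0 + drho_dc * (conc - conc0) + drho_dt * (temp - temp0)

def thermal_diffusivity(k_bulk=2.5, porosity=0.3, rho_w=1000.0, c_w=4186.0):
    """Equivalent 'diffusion coefficient' for temperature treated as a
    transport species: bulk thermal conductivity [W/m/K] divided by
    porosity * fluid density * fluid heat capacity. Illustrative values."""
    return k_bulk / (porosity * rho_w * c_w)

print(f"rho(35 kg/m^3, 20 C) = {fluid_density(35.0, 20.0):.1f} kg/m^3")
print(f"thermal 'diffusion' coefficient = {thermal_diffusivity():.2e} m^2/s")
```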
Utilities for master source code distribution: MAX and Friends
NASA Technical Reports Server (NTRS)
Felippa, Carlos A.
1988-01-01
MAX is a program for the manipulation of FORTRAN master source code (MSC). This is a technique by which one maintains one and only one master copy of a FORTRAN program under a program development system, which for MAX is assumed to be VAX/VMS. The master copy is not intended to be directly compiled. Instead, it must be pre-processed by MAX to produce compilable instances. These instances may correspond to different code versions (for example, double precision versus single precision), different machines (for example, IBM, CDC, Cray) or different operating systems (i.e., VAX/VMS versus VAX/UNIX). The advantage of using a master source is more pronounced in complex application programs that are developed and maintained over many years and are to be transported and executed on several computer environments. The version lag problem that plagues many such programs is avoided by this approach. MAX is complemented by several auxiliary programs that perform nonessential functions. The ensemble is collectively known as MAX and Friends. All of these programs, including MAX, are executed as foreign VAX/VMS commands and can easily be hidden in customized VMS command procedures.
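The master-source idea can be illustrated with a toy preprocessor that keeps or drops tagged lines according to the selected version keys. The *IF/*END directive syntax below is purely hypothetical and is not MAX's actual command set.

```python
def expand_master_source(lines, active_keys):
    """Toy master-source preprocessor (hypothetical *IF/*END directives,
    not MAX's real syntax): emit only lines belonging to active versions."""
    out, keep = [], True
    for line in lines:
        stripped = line.strip()
        if stripped.startswith("*IF "):
            keep = stripped.split()[1] in active_keys   # start conditional block
        elif stripped == "*END":
            keep = True                                 # close conditional block
        elif keep:
            out.append(line)                            # unconditional or active line
    return out

master = [
    "      PROGRAM DEMO",
    "*IF DOUBLE",
    "      DOUBLE PRECISION X",
    "*END",
    "*IF SINGLE",
    "      REAL X",
    "*END",
    "      END",
]
print("\n".join(expand_master_source(master, active_keys={"DOUBLE"})))
```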
COMBIMAN Programs (COMputerized BIomechanical MAN-Model). Version 8 (User’s Guide)
1989-02-01
programs. The guide is based on the program as of January 1989. It deals with the conventions used to develop and analyze crew stations, the generation of the... END ACTION: This is a safety feature to prevent ending the program if PFK 31 is accidentally depressed. Select END COMBIMAN if you intended to end the program.
Books and Pets: Our Friends for Life! Arizona Reading Program Manual.
ERIC Educational Resources Information Center
Arizona State Dept. of Library, Archives and Public Records, Phoenix.
This reading program manual delineates the "Books and Pets" program, a project of Arizona Reads, which is a collaboration between the Arizona Humanities Council and the Arizona State Library, Archives, and Public Records. A CD-ROM version of the program accompanies the manual. The manual is divided into the following parts: Introduction;…
Computer Programs for Chemistry Experiments I and II.
ERIC Educational Resources Information Center
Reynard, Dale C.
This unit of instruction includes nine laboratory experiments. With one exception, all of the experiments are from the D.C. Heath revision of the Chemical Education Materials Study (CHEMS); program six is the lab from the original version of the CHEMS program. Each program consists of three parts: (1) the lab and computer hints, (2) the description…
The U.S. Geological Survey coal quality (COALQUAL) database version 3.0
Palmer, Curtis A.; Oman, Charles L.; Park, Andy J.; Luppens, James A.
2015-12-21
Because of database size limits during the development of COALQUAL Version 1.3, many analyses of individual bench samples were merged into whole coal bed averages. The methodology for making these composite intervals was not consistent. Size limits also restricted the amount of georeferencing information and forced removal of qualifier notations such as "less than detection limit" (<) information, which can cause problems when using the data. A review of the original data sheets revealed that COALQUAL Version 2.0 was missing information that was needed for a complete understanding of a coal section. Another important database issue to resolve was the USGS "remnant moisture" problem. Prior to 1998, tests for remnant moisture (as-determined moisture in the sample at the time of analysis) were not performed on any USGS major, minor, or trace element coal analyses. Without the remnant moisture, it is impossible to convert the analyses to a usable basis (as-received, dry, etc.). Based on remnant moisture analyses of hundreds of samples of different ranks (and known residual moisture) reported after 1998, it was possible to develop a method to provide reasonable estimates of remnant moisture for older data to make it more useful in COALQUAL Version 3.0. In addition, COALQUAL Version 3.0 is improved by (1) adding qualifiers, including statistical programming to deal with the qualifiers; (2) clarifying the sample compositing problems; and (3) adding associated samples. Version 3.0 of COALQUAL also represents the first attempt to incorporate data verification by mathematically crosschecking certain analytical parameters. Finally, a new database system was designed and implemented to replace the outdated DOS program used in earlier versions of the database.
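The role of remnant (as-determined) moisture is easiest to see in the standard basis conversions: an as-determined analytical value is first converted to the dry basis using the remnant moisture and can then be restated on an as-received basis using the total moisture. The helper below is a generic sketch of those conversions, not COALQUAL code.

```python
def to_dry_basis(value_ad, remnant_moisture_pct):
    """Convert an as-determined analytical value to the dry basis."""
    return value_ad / (1.0 - remnant_moisture_pct / 100.0)

def dry_to_as_received(value_dry, total_moisture_pct):
    """Restate a dry-basis value on the as-received basis."""
    return value_dry * (1.0 - total_moisture_pct / 100.0)

# Example: 10.0% ash as determined, 2.5% remnant moisture, 12% total moisture.
ash_dry = to_dry_basis(10.0, 2.5)
print(f"ash, dry basis         = {ash_dry:.2f} %")
print(f"ash, as-received basis = {dry_to_as_received(ash_dry, 12.0):.2f} %")
```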
Second generation experiments in fault tolerant software
NASA Technical Reports Server (NTRS)
Knight, J. C.
1987-01-01
The purpose of the Multi-Version Software (MVS) experiment is to obtain empirical measurements of the performance of multi-version systems. Twenty versions of a program were prepared under reasonably realistic development conditions from the same specifications. The overall structure of the testing environment for the MVS experiment and its status are described. A preliminary version of the control system, implemented for the MVS experiment to give the experimenter control over the details of the testing, is also described. The results of an empirical study of error detection using self-checks are presented as well. The analysis of the checks revealed great differences in the ability of individual programmers to design effective checks.
Program Participants Increase Equitable Attitudes.
ERIC Educational Resources Information Center
New Jersey Research Bulletin, 1996
1996-01-01
In 1996, the effectiveness of New Jersey's Perkins Act-funded single parent/displaced homemaker and gender equity programs was evaluated using a modified version of MacDonald's Sex Role Survey to determine the effects of program participation on participants' attitudes toward four dimensions of sex equity: work, behavior, equity, and home.…
Facilitating Family Group Inquiry at Science Museum Exhibits
ERIC Educational Resources Information Center
Gutwill, Joshua P.; Allen, Sue
2010-01-01
We describe a study of programs to deepen families' scientific inquiry practices in a science museum setting. The programs incorporated research-based learning principles from formal and informal educational environments. In a randomized experimental design, two versions of the programs, called "inquiry games," were compared to two control…
Teaching Adaptability of Object-Oriented Programming Language Curriculum
ERIC Educational Resources Information Center
Zhu, Xiao-dong
2012-01-01
The evolution of object-oriented programming languages includes update of their own versions, update of development environments, and reform of new languages upon old languages. In this paper, the evolution analysis of object-oriented programming languages is presented in term of the characters and development. The notion of adaptive teaching upon…
Foresters' Metric Conversions program (version 1.0). [Computer program]
Jefferson A. Palmer
1999-01-01
The conversion of scientific measurements has become commonplace in the fields of engineering, research, and forestry. Foresters' Metric Conversions is a Windows-based computer program that quickly converts user-defined measurements from English to metric and from metric to English. Foresters' Metric Conversions was derived from the publication "Metric...
Standard interface files and procedures for reactor physics codes, version III
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carmichael, B.M.
Standards and procedures for promoting the exchange of reactor physics codes are updated to Version-III status. Standards covering program structure, interface files, file handling subroutines, and card input format are included. The implementation status of the standards in codes and the extension of the standards to new code areas are summarized. (15 references) (auth)
Comprehensive Monitoring Program. Air Quality Data Assessment for 1989. Version 2.1. Volume 2
1990-06-01
Excerpted fragments from the report: Hg collected on Hopcalite TH media; Table 4.4-5 (Section 4.4.3.2, Basin F Data) shows average and maximum metals values for the Basin F Remedial Monitoring program; cited references include PMRMA, "Chemical Quality Assurance Plan," Version 1.0, July 1989, and Rathje and Marcero, 1976, "Improved Hopcalite Procedure for the Determination of...
The Hydrologic Evaluation of Landfill Performance (HELP) computer program is a quasi-two-dimensional hydrologic model of water movement across, into, through and out of landfills. The model accepts weather, soil and design data. Landfill systems including various combinations o...
ERIC Educational Resources Information Center
Brese, Falk, Ed.
2012-01-01
The Teacher Education Study in Mathematics (TEDS-M) International Database includes data for all questionnaires administered as part of the TEDS-M study. These consisted of questionnaires administered to future teachers, educators, and institutions with teacher preparation programs. This supplement contains the international version of the TEDS-M…
John R. Mills
1989-01-01
The timber resource inventory model (TRIM) has been adapted to run on personal computers. The personal computer version of TRIM (PC-TRIM) is more widely used than its mainframe parent. Errors that existed in previous versions of TRIM have been corrected. Information is presented to help users with program input and output management in the DOS environment, to...
ERIC Educational Resources Information Center
Kulis, Stephen; Marsiglia, Flavio F.; Elek, Elvira; Dustman, Patricia; Wagstaff, David A.; Hecht, Michael L.
2005-01-01
A randomized trial tested the efficacy of three curriculum versions teaching drug resistance strategies, one modeled on Mexican American culture; another modeled on European American and African American culture; and a multicultural version. Self-report data at baseline and 14 months post-intervention were obtained from 3,402 Mexican heritage…
User's manual for the Macintosh version of PASCO
NASA Technical Reports Server (NTRS)
Lucas, S. H.; Davis, Randall C.
1991-01-01
A user's manual for Macintosh PASCO is presented. Macintosh PASCO is an Apple Macintosh version of PASCO, an existing computer code for structural analysis and optimization of longitudinally stiffened composite panels. PASCO combines a rigorous buckling analysis program with a nonlinear mathematical optimization routine to minimize panel mass. Macintosh PASCO accepts the same input as mainframe versions of PASCO. As output, Macintosh PASCO produces a text file and mode shape plots in the form of Apple Macintosh PICT files. Only the user interface for Macintosh is discussed here.
Version 4.0 of code Java for 3D simulation of the CCA model
NASA Astrophysics Data System (ADS)
Fan, Linyu; Liao, Jianwei; Zuo, Junsen; Zhang, Kebo; Li, Chao; Xiong, Hailing
2018-07-01
This paper presents a new version of the Java code for three-dimensional simulation of the Cluster-Cluster Aggregation (CCA) model, replacing the previous version. Many redundant traversals of the cluster list have been eliminated, significantly reducing the simulation time. To show the aggregation process in a more intuitive way, different clusters are labeled with distinct colors. In addition, a new function outputs the particle coordinates of each aggregate to a file, making it easier to couple the model with other models.
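The key optimization described above is avoiding repeated scans of the cluster list. One common way to obtain that effect is to track cluster membership with a disjoint-set (union-find) structure, sketched below in Python with invented names; the paper's Java code may organize this differently.

```python
# Minimal union-find sketch: merging two aggregates without rescanning
# the full cluster list. Names are illustrative, not from the paper's code.
class Clusters:
    def __init__(self, n_particles):
        self.parent = list(range(n_particles))

    def find(self, i):
        # path compression keeps later look-ups near O(1)
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]
            i = self.parent[i]
        return i

    def merge(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri != rj:
            self.parent[rj] = ri  # attach one aggregate to the other

c = Clusters(5)
c.merge(0, 1)
c.merge(1, 2)
print(c.find(2) == c.find(0))  # True: particles 0, 1, 2 form one aggregate
```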
Computing Operating Characteristics Of Bearing/Shaft Systems
NASA Technical Reports Server (NTRS)
Moore, James D.
1996-01-01
SHABERTH computer program predicts operating characteristics of bearings in multibearing load-support system. Lubricated and nonlubricated bearings modeled. Calculates loads, torques, temperatures, and fatigue lives of ball and/or roller bearings on single shaft. Provides for analysis of reaction of system to termination of supply of lubricant to bearings and other lubricated mechanical elements. Valuable in design and analysis of shaft/bearing systems. Two versions of SHABERTH available. Cray version (LEW-14860), "Computing Thermal Performances Of Shafts and Bearings". IBM PC version (MFS-28818), written for IBM PC-series and compatible computers running MS-DOS.
UFO (UnFold Operator) computer program abstract
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kissel, L.; Biggs, F.
UFO (UnFold Operator) is an interactive user-oriented computer program designed to solve a wide range of problems commonly encountered in physical measurements. This document provides a summary of the capabilities of version 3A of UFO.
Stewplan: software for creating forest stewardship plans (Version 1.3)
Peter D. Knopp; Mark J. Twery
2003-01-01
Describes the purpose and function of the Stewplan computer program. Provides instructions for loading Stewplan, a tutorial for getting started, and instructions for use. A copy of the program is included. [User's manual; CD-ROM].
AutoCAD-To-NASTRAN Translator Program
NASA Technical Reports Server (NTRS)
Jones, A.
1989-01-01
Program facilitates creation of finite-element mathematical models from geometric entities. AutoCAD to NASTRAN translator (ACTON) computer program developed to facilitate quick generation of small finite-element mathematical models for use with NASTRAN finite-element modeling program. Reads geometric data of drawing from Data Exchange File (DXF) used in AutoCAD and other PC-based drafting programs. Written in Microsoft Quick-Basic (Version 2.0).
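The translation idea can be restated as a minimal sketch: take line segments extracted from a drawing, de-duplicate their endpoints as grid points, and write NASTRAN free-field GRID and CROD cards. This Python fragment uses invented data and card choices; ACTON itself is written in Quick-Basic, reads the DXF file directly, and handles more entity types.

```python
# Illustrative sketch only: given line segments extracted from a drawing,
# emit NASTRAN free-field GRID and CROD cards with de-duplicated grid points.
segments = [((0.0, 0.0, 0.0), (10.0, 0.0, 0.0)),
            ((10.0, 0.0, 0.0), (10.0, 5.0, 0.0))]

grid_ids, cards = {}, []
for p1, p2 in segments:
    for p in (p1, p2):
        if p not in grid_ids:                       # one GRID card per unique point
            grid_ids[p] = len(grid_ids) + 1
            cards.append("GRID,%d,,%g,%g,%g" % ((grid_ids[p],) + p))

for eid, (p1, p2) in enumerate(segments, start=1):  # one rod element per line segment
    cards.append("CROD,%d,1,%d,%d" % (eid, grid_ids[p1], grid_ids[p2]))

print("\n".join(cards))
```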
A user's guide to Sandia's latin hypercube sampling software : LHS UNIX library/standalone version.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swiler, Laura Painton; Wyss, Gregory Dane
2004-07-01
This document is a reference guide for the UNIX Library/Standalone version of the Latin Hypercube Sampling Software. This software has been developed to generate Latin hypercube multivariate samples. This version runs on Linux or UNIX platforms. This manual covers the use of the LHS code in a UNIX environment, run either as a standalone program or as a callable library. The underlying code in the UNIX Library/Standalone version of LHS is almost identical to the updated Windows version of LHS released in 1998 (SAND98-0210). However, some modifications were made to customize it for a UNIX environment and as a library that is called from the DAKOTA environment. This manual covers the use of the LHS code as a library and in the standalone mode under UNIX.
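For orientation, the sketch below shows the basic Latin hypercube construction (one sample per equal-probability stratum, with the strata permuted independently for each variable) in Python/NumPy. It is illustrative only and is not the Sandia LHS code, which additionally supports many distributions, correlation control, and restricted pairing.

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng=None):
    """Minimal Latin hypercube sample on the unit hypercube.

    Each variable's range is split into n_samples equal strata; one point
    is drawn inside each stratum, and the strata are permuted independently
    per variable so every one-dimensional projection is evenly covered.
    """
    rng = np.random.default_rng(rng)
    u = rng.random((n_samples, n_vars))                    # point within each stratum
    strata = (np.arange(n_samples)[:, None] + u) / n_samples
    for j in range(n_vars):                                # shuffle strata per variable
        strata[:, j] = rng.permutation(strata[:, j])
    return strata

print(latin_hypercube(5, 2, rng=0))
```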
Image retrieval and processing system version 2.0 development work
NASA Technical Reports Server (NTRS)
Slavney, Susan H.; Guinness, Edward A.
1991-01-01
The Image Retrieval and Processing System (IRPS) is a software package developed at Washington University and used by the NASA Regional Planetary Image Facilities (RPIF's). The IRPS combines data base management and image processing components to allow the user to examine catalogs of image data, locate the data of interest, and perform radiometric and geometric calibration of the data in preparation for analysis. Version 1.0 of IRPS was completed in Aug. 1989 and was installed at several RPIF's. Other RPIF's use remote logins via the NASA Science Internet to access IRPS at Washington University. Work was begun on designing and populating a catalog of Magellan image products that will be part of IRPS Version 2.0, planned for release by the end of calendar year 1991. With this catalog, a user will be able to search by orbit and by location for Magellan Basic Image Data Records (BIDR's), Mosaicked Image Data Records (MIDR's), and Altimetry-Radiometry Composite Data Records (ARCDR's). The catalog will include the Magellan CD-ROM volume, directory, and file name for each data product. The image processing component of IRPS is based on the Planetary Image Cartography Software (PICS) developed by the U.S. Geological Survey, Flagstaff, Arizona. To augment PICS capabilities, a set of image processing programs was developed that are compatible with PICS-format images. This software includes general-purpose functions that PICS does not have, analysis and utility programs for specific data sets, and programs from other sources that were modified to work with PICS images. Some of this software will be integrated into the Version 2.0 release of IRPS. A table is presented that lists the programs with a brief functional description of each.
[Violence prevention in secondary schools: the Faustlos-curriculum for middle school].
Schick, Andreas; Cierpka, Manfred
2009-01-01
Schools and kindergartens are particularly suitable settings for the implementation of violence prevention programs. Many German schools and kindergartens have firmly established the violence prevention curriculum Faustlos. The Faustlos programs for kindergartens and elementary schools are now complemented by a version for middle schools. Like the kindergarten and elementary school versions, the middle school program focuses on the theoretically grounded, age-tailored promotion of empathy, impulse control, and anger management. These dimensions are subdivided into the five themes "understanding the problem," "training for empathy," "anger management," "problem solving," and "applying skills," and are taught stepwise, in a highly structured way, in 31 lessons based on several video sequences. US-American evaluation studies demonstrate the effectiveness and violence prevention potential of the program. With the curriculum for middle schools, a comprehensive Faustlos program package is now available to sustainably promote core violence prevention competences of children and adolescents at a developmentally appropriate level and with a consistent didactic approach.
NASA Technical Reports Server (NTRS)
McManus, John W.; Goodrich, Kenneth H.
1989-01-01
A research program investigating the use of Artificial Intelligence (AI) techniques to aid in the development of a Tactical Decision Generator (TDG) for Within-Visual-Range (WVR) air combat engagements is discussed. The application of AI methods for development and implementation of the TDG is presented. The history of the Adaptive Maneuvering Logic (AML) program is traced and current versions of the AML program are compared and contrasted with the TDG system. The Knowledge-Based Systems (KBS) used by the TDG to aid in the decision-making process are outlined in detail and example rules are presented. The results of tests to evaluate the performance of the TDG versus a version of AML and versus human pilots in the Langley Differential Maneuvering Simulator (DMS) are presented. To date, these results have shown significant performance gains in one-versus-one air combat engagements, and the AI-based TDG software has proven to be much easier to modify than the updated FORTRAN AML programs.
Moghadam, Marjan; Jahangiri, Leila
2015-08-01
An electronic quality assurance (eQA) program was developed to replace a paper-based system, to address standards introduced by the Commission on Dental Accreditation (CODA), and to improve educational outcomes. This eQA program provides feedback to predoctoral dental students on prosthodontic laboratory steps at New York University College of Dentistry. The purpose of this study was to compare the eQA program for performing laboratory quality assurance with the former paper-based format. Fourth-year predoctoral dental students (n=334) who experienced both the paper-based and the electronic version of the quality assurance program were surveyed about their experiences. Additionally, data extracted from the eQA program were analyzed to identify areas of weakness in the curriculum. The study findings revealed that 73.8% of the students preferred the eQA program to the paper-based version. The average number of treatments that did not pass quality assurance standards was 119.5 per month, indicating a 6.34% laboratory failure rate. Further analysis of these data revealed that 62.1% of the errors were related to fixed prosthodontic treatment, 27.9% to partial removable dental prostheses, and 10% to complete removable dental prostheses in the first 18 months of program implementation. The eQA program was favored by dental students who experienced both electronic and paper-based versions of the system. Error type analysis makes it possible to create customized faculty standardization sessions and to refine the didactic and clinical teaching of the predoctoral students. This program was also able to link patient care activity with the student's laboratory activities, thus addressing the latest requirements of the CODA regarding the competence of graduates in evaluating laboratory work related to their patient care.
NASA Astrophysics Data System (ADS)
Sandner, Raimar; Vukics, András
2014-09-01
The v2 Milestone 10 release of C++QED is primarily a feature release, which also corrects some problems of the previous release, especially as regards the build system. The adoption of C++11 features has led to many simplifications in the codebase. A full doxygen-based API manual [1] is now provided together with updated user guides. A largely automated, versatile new testsuite directed both towards computational and physics features allows for quickly spotting arising errors. The states of trajectories are now savable and recoverable with full binary precision, allowing for trajectory continuation regardless of evolution method (single/ensemble Monte Carlo wave-function or Master equation trajectory). As the main new feature, the framework now presents Python bindings to the highest-level programming interface, so that actual simulations for given composite quantum systems can now be performed from Python. Catalogue identifier: AELU_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELU_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: yes No. of lines in distributed program, including test data, etc.: 492422 No. of bytes in distributed program, including test data, etc.: 8070987 Distribution format: tar.gz Programming language: C++/Python. Computer: i386-i686, x86 64. Operating system: In principle cross-platform, as yet tested only on UNIX-like systems (including Mac OS X). RAM: The framework itself takes about 60MB, which is fully shared. The additional memory taken by the program which defines the actual physical system (script) is typically less than 1MB. The memory storing the actual data scales with the system dimension for state-vector manipulations, and the square of the dimension for density-operator manipulations. This might easily be GBs, and often the memory of the machine limits the size of the simulated system. Classification: 4.3, 4.13, 6.2. External routines: Boost C++ libraries, GNU Scientific Library, Blitz++, FLENS, NumPy, SciPy Catalogue identifier of previous version: AELU_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 1381 Does the new version supersede the previous version?: Yes Nature of problem: Definition of (open) composite quantum systems out of elementary building blocks [2,3]. Manipulation of such systems, with emphasis on dynamical simulations such as Master-equation evolution [4] and Monte Carlo wave-function simulation [5]. Solution method: Master equation, Monte Carlo wave-function method Reasons for new version: The new version is mainly a feature release, but it does correct some problems of the previous version, especially as regards the build system. Summary of revisions: We give an example for a typical Python script implementing the ring-cavity system presented in Sec. 3.3 of Ref. [2]: Restrictions: Total dimensionality of the system. Master equation-few thousands. Monte Carlo wave-function trajectory-several millions. Unusual features: Because of the heavy use of compile-time algorithms, compilation of programs written in the framework may take a long time and much memory (up to several GBs). Additional comments: The framework is not a program, but provides and implements an application-programming interface for developing simulations in the indicated problem domain. 
We use several C++11 features, which limits the range of supported compilers (g++ 4.7, clang++ 3.1). Documentation: http://cppqed.sourceforge.net/. Running time: Depending on the magnitude of the problem, can vary from a few seconds to weeks. References: [1] Entry point: http://cppqed.sf.net [2] A. Vukics, C++QEDv2: The multi-array concept and compile-time algorithms in the definition of composite quantum systems, Comput. Phys. Comm. 183 (2012) 1381. [3] A. Vukics, H. Ritsch, C++QED: an object-oriented framework for wave-function simulations of cavity QED systems, Eur. Phys. J. D 44 (2007) 585. [4] H. J. Carmichael, An Open Systems Approach to Quantum Optics, Springer, 1993. [5] J. Dalibard, Y. Castin, K. Mølmer, Wave-function approach to dissipative processes in quantum optics, Phys. Rev. Lett. 68 (1992) 580.
How to Keep Your Campus Safe from Infection
ERIC Educational Resources Information Center
Brown, Scott
2005-01-01
In this article, the author explains how antivirus programs work. He also explains how performances of various antivirus programs vary from one to another. He also takes a look at 13 antivirus programs and explains which of these will keep computers protected. These programs include: (1) Sophos Anti-Virus Version 3.86.2; (2) McAfee VirusScan 9.0;…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, D; Danielewicz, P
2002-03-15
This is the manual for a collection of programs that can be used to invert angle-averaged (i.e., one-dimensional) two-particle correlation functions. The package consists of several programs that generate kernel matrices (essentially the squared relative wave function of the pair), programs that generate test correlation functions from test sources of various types, and the program that actually inverts the data using the kernel matrix.
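A minimal sketch of the discretized inversion problem the package addresses: with the kernel tabulated as a matrix K, the correlation data C is modeled as C = K S and the source S is recovered by (here, Tikhonov-regularized) least squares. The kernel, grids, and noise level below are placeholders; the actual programs construct K from the pair's squared relative wave function and offer their own inversion options.

```python
import numpy as np

# Illustrative discretized inversion: C = K @ S, solved for the source S
# from noisy correlation data by regularized least squares.
nq, nr = 40, 20
r = np.linspace(0.5, 20.0, nr)              # source radius grid (placeholder units)
q = np.linspace(5.0, 100.0, nq)             # relative momentum grid (placeholder units)
K = np.exp(-np.outer(q, r)**2 * 1e-5)       # stand-in kernel, not a physical wave function

S_true = np.exp(-(r - 5.0)**2 / 8.0)        # test source
C = K @ S_true + 0.01 * np.random.default_rng(1).normal(size=nq)

alpha = 1e-3                                # Tikhonov regularization weight
A = K.T @ K + alpha * np.eye(nr)
S_fit = np.linalg.solve(A, K.T @ C)         # recovered source on the r grid
print(np.round(S_fit[:5], 3))
```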
ERIC Educational Resources Information Center
Zucker, Marla; Spinazzola, Joseph; Pollack, Amie Alley; Pepe, Lauren; Barry, Stephanie; Zhang, Lynda; van der Kolk, Bessel
2010-01-01
This study replicated and extended our previous evaluation of Urban Improv (UI), a theater-based youth violence prevention (YVP) program developed for urban youth. It assessed the replicability of positive program impacts when implemented by nonprogram originators, as well as the utility of a comprehensive version of the UI program that included a…
Two autowire versions for CDC-3200 and IBM-360
NASA Technical Reports Server (NTRS)
Billingsley, J. B.
1972-01-01
Microelectronics program was initiated to evaluate circuitry, packaging methods, and fabrication approaches necessary to produce completely procured logic system. Two autowire programs were developed for CDC-3200 and IBM-360 computers for use in designing logic systems.
HUMAN--A Comprehensive Physiological Model.
ERIC Educational Resources Information Center
Coleman, Thomas G.; Randall, James E.
1983-01-01
Describes computer program (HUMAN) used to simulate physiological experiments on patient pathology. Program (available from authors, including versions for microcomputers) consists of dynamic interactions of over 150 physiological variables and integrating approximations of cardiovascular, renal, lung, temperature regulation, and some hormone…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weber, Scott; Bixler, Nathan E.; McFadden, Katherine Letizia
In 1973 the U.S. Environmental Protection Agency (EPA) developed SecPop to calculate population estimates to support a study on air quality. The Nuclear Regulatory Commission (NRC) adopted this program to support siting reviews for nuclear power plant construction and license applications. Currently SecPop is used to prepare site data input files for offsite consequence calculations with the MELCOR Accident Consequence Code System (MACCS). SecPop enables the use of site-specific population, land use, and economic data for a polar grid defined by the user. Updated versions of SecPop have been released to use U.S. decennial census population data. SECPOP90 was released in 1997 to use 1990 population and economic data. SECPOP2000 was released in 2003 to use 2000 population data and 1997 economic data. This report describes the current code version, SecPop version 4.3, which uses 2010 population data and both 2007 and 2012 economic data. It is also compatible with 2000 census and 2002 economic data. At the time of this writing, the current version of SecPop is 4.3.0, and that version is described herein. This report contains guidance for the installation and use of the code as well as a description of the theory, models, and algorithms involved. This report contains appendices which describe the development of the 2010 census file, 2007 county file, and 2012 county file. Finally, an appendix is included that describes the validation assessments performed.
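The polar-grid idea can be sketched briefly: point populations around a site are summed into radial rings and angular sectors. The ring edges, sector count, and random data below are invented for illustration; SecPop's own handling of census blocks, land use, and economic data is considerably more involved.

```python
import numpy as np

# Sketch of a user-defined polar grid: sum point populations into radial
# rings and angular sectors around a site at the origin.
def polar_bin(x_km, y_km, pop, ring_edges_km, n_sectors=16):
    r = np.hypot(x_km, y_km)
    theta = np.mod(np.degrees(np.arctan2(y_km, x_km)), 360.0)
    ring = np.digitize(r, ring_edges_km)                    # 0 .. len(edges)
    sector = (theta // (360.0 / n_sectors)).astype(int)
    sector = np.minimum(sector, n_sectors - 1)              # guard against edge rounding
    grid = np.zeros((len(ring_edges_km) + 1, n_sectors))
    np.add.at(grid, (ring, sector), pop)                    # accumulate population
    return grid

rng = np.random.default_rng(0)
x, y = rng.uniform(-80, 80, 1000), rng.uniform(-80, 80, 1000)
pop = rng.integers(0, 500, 1000)
grid = polar_bin(x, y, pop, ring_edges_km=[2, 5, 10, 20, 40, 80])
print(grid.shape, int(grid.sum()))
```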
NASA Technical Reports Server (NTRS)
Alfaro, Victor O.; Casey, Nancy J.
2005-01-01
SQL-RAMS (where "SQL" signifies Structured Query Language and "RAMS" signifies Rocketdyne Automated Management System) is a successor to the legacy version of RAMS, a computer program used to manage all work, nonconformance, corrective action, and configuration management on rocket engines and ground support equipment at Stennis Space Center. The legacy version resided in the FileMaker Pro software system and was constructed in modules that could act as standalone programs. There was little or no integration among modules. Because of limitations on file-management capabilities in FileMaker Pro, and because of the difficulty of integrating FileMaker Pro with other software systems for exchange of data using such industry standards as SQL, the legacy version of RAMS proved to be limited, and working around its limitations too time-consuming. In contrast, SQL-RAMS is an integrated SQL-server-based program that supports all data-exchange software industry standards. Whereas in the legacy version it was necessary to access individual modules to gain insight into a particular work-status document, SQL-RAMS provides access through a single-screen presentation of core modules. In addition, SQL-RAMS enables rapid and efficient filtering of displayed statuses by predefined categories and test numbers. SQL-RAMS is rich in functionality and encompasses significant improvements over the legacy system. It provides users the ability to perform many tasks which in the past required administrator intervention. Additionally, many of the design limitations have been corrected, allowing for a robust application that is user centric.
The NASA/MSFC Global Reference Atmospheric Model: 1999 Version (GRAM-99)
NASA Technical Reports Server (NTRS)
Justus, C. G.; Johnson, D. L.
1999-01-01
The latest version of Global Reference Atmospheric Model (GRAM-99) is presented and discussed. GRAM-99 uses either (binary) Global Upper Air Climatic Atlas (GUACA) or (ASCII) Global Gridded Upper Air Statistics (GGUAS) CD-ROM data sets, for 0-27 km altitudes. As with earlier versions, GRAM-99 provides complete geographical and altitude coverage for each month of the year. GRAM-99 uses a specially-developed data set, based on Middle Atmosphere Program (MAP) data, for 20-120 km altitudes, and NASA's 1999 version Marshall Engineering Thermosphere (MET-99) model for heights above 90 km. Fairing techniques assure smooth transition in overlap height ranges (20-27 km and 90-120 km). GRAM-99 includes water vapor and 11 other atmospheric constituents (O3, N2O, CO, CH4, CO2, N2, O2, O, A, He and H). A variable-scale perturbation model provides both large-scale (wave) and small-scale (stochastic) deviations from mean values for thermodynamic variables and horizontal and vertical wind components. The small-scale perturbation model includes improvements in representing intermittency ("patchiness"). A major new feature is an option to substitute Range Reference Atmosphere (RRA) data for conventional GRAM climatology when a trajectory passes sufficiently near any RRA site. A complete user's guide for running the program, plus sample input and output, is provided. An example is provided for how to incorporate GRAM-99 as subroutines in other programs (e.g., trajectory codes).
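The fairing mentioned above can be pictured as a smooth weighted blend of two models across the overlap band. The sketch below uses a cosine-shaped weight and toy density profiles; GRAM-99's actual fairing functions and overlap treatment are not reproduced here.

```python
import numpy as np

# Sketch of a fairing (smooth blend) between two atmosphere models over an
# overlap altitude band, e.g. 90-120 km. The cosine weight is illustrative.
def faired_value(z_km, lower_model, upper_model, z1=90.0, z2=120.0):
    if z_km <= z1:
        return lower_model(z_km)
    if z_km >= z2:
        return upper_model(z_km)
    w = 0.5 * (1.0 - np.cos(np.pi * (z_km - z1) / (z2 - z1)))  # weight goes 0 -> 1
    return (1.0 - w) * lower_model(z_km) + w * upper_model(z_km)

# toy density profiles (kg/m^3), purely illustrative
lower = lambda z: 1.2 * np.exp(-z / 7.0)
upper = lambda z: 1.0 * np.exp(-z / 8.0)
print(faired_value(105.0, lower, upper))
```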
Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC (version 4.0) technical manual
NASA Technical Reports Server (NTRS)
Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.
1988-01-01
The information contained in the NASARC (Version 4.0) Technical Manual and NASARC (Version 4.0) User's Manual relates to the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbits. Array dimensions within the software were structured to fit within the currently available 12 megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.
ERIC Educational Resources Information Center
Sink, Christopher A.; Spencer, Lisa R.
2007-01-01
This article reports on a psychometric study examining the validity and reliability of the My Class Inventory-Short Form for Teachers, an accountability measure for elementary school counselors to use as they evaluate aspects of their school counseling programs. As a companion inventory to the student version of the My Class Inventory-Short Form…
AEOSS runtime manual for system analysis on Advanced Earth-Orbital Spacecraft Systems
NASA Technical Reports Server (NTRS)
Lee, Hwa-Ping
1990-01-01
The Advanced Earth-Orbital Spacecraft System (AEOSS) program enables users to project the required power, weight, and cost for a generic Earth-orbital spacecraft system. These variables are calculated on the component and subsystem levels, and then at the system level. The six subsystems included are electric power, thermal control, structure, auxiliary propulsion, attitude control, and communication, command, and data handling. The costs are computed using statistically determined models derived from previously flown spacecraft, categorized into classes according to their functions and structural complexity. Selected design and performance analyses for essential components and subsystems are also provided. AEOSS permits a user to enter known values of these parameters, in whole or in part, at all levels. All of this information is of vital importance to project managers of subsystems or of a complete spacecraft system. AEOSS is specially tailored software built on the Macintosh version of Acius' 4th Dimension relational database program. Because of the licensing agreements, two versions of the AEOSS documents were prepared. This version, the AEOSS Runtime Manual, is permitted to be distributed with a finite number of copies of the restricted 4D Runtime version. It can perform all contained applications without any programming alterations.
SHABERTH - ANALYSIS OF A SHAFT BEARING SYSTEM (CRAY VERSION)
NASA Technical Reports Server (NTRS)
Coe, H. H.
1994-01-01
The SHABERTH computer program was developed to predict operating characteristics of bearings in a multibearing load support system. Lubricated and non-lubricated bearings can be modeled. SHABERTH calculates the loads, torques, temperatures, and fatigue life for ball and/or roller bearings on a single shaft. The program also allows for an analysis of the system reaction to the termination of lubricant supply to the bearings and other lubricated mechanical elements. SHABERTH has proven to be a valuable tool in the design and analysis of shaft bearing systems. The SHABERTH program is structured with four nested calculation schemes. The thermal scheme performs steady state and transient temperature calculations which predict system temperatures for a given operating state. The bearing dimensional equilibrium scheme uses the bearing temperatures, predicted by the temperature mapping subprograms, and the rolling element raceway load distribution, predicted by the bearing subprogram, to calculate bearing diametral clearance for a given operating state. The shaft-bearing system load equilibrium scheme calculates bearing inner ring positions relative to the respective outer rings such that the external loading applied to the shaft is brought into equilibrium by the rolling element loads which develop at each bearing inner ring for a given operating state. The bearing rolling element and cage load equilibrium scheme calculates the rolling element and cage equilibrium positions and rotational speeds based on the relative inner-outer ring positions, inertia effects, and friction conditions. The ball bearing subprograms in the current SHABERTH program have several model enhancements over similar programs. These enhancements include an elastohydrodynamic (EHD) film thickness model that accounts for thermal heating in the contact area and lubricant film starvation; a new model for traction combined with an asperity load sharing model; a model for the hydrodynamic rolling and shear forces in the inlet zone of lubricated contacts, which accounts for the degree of lubricant film starvation; modeling normal and friction forces between a ball and a cage pocket, which account for the transition between the hydrodynamic and elastohydrodynamic regimes of lubrication; and a model of the effect on fatigue life of the ratio of the EHD plateau film thickness to the composite surface roughness. SHABERTH is intended to be as general as possible. The models in SHABERTH allow for the complete mathematical simulation of real physical systems. Systems are limited to a maximum of five bearings supporting the shaft, a maximum of thirty rolling elements per bearing, and a maximum of one hundred temperature nodes. The SHABERTH program structure is modular and has been designed to permit refinement and replacement of various component models as the need and opportunities develop. A preprocessor is included in the IBM PC version of SHABERTH to provide a user friendly means of developing SHABERTH models and executing the resulting code. The preprocessor allows the user to create and modify data files with minimal effort and a reduced chance for errors. Data is utilized as it is entered; the preprocessor then decides what additional data is required to complete the model. Only this required information is requested. The preprocessor can accommodate data input for any SHABERTH compatible shaft bearing system model. The system may include ball bearings, roller bearings, and/or tapered roller bearings. 
SHABERTH is written in FORTRAN 77, and two machine versions are available from COSMIC. The CRAY version (LEW-14860) has a RAM requirement of 176K of 64 bit words. The IBM PC version (MFS-28818) is written for IBM PC series and compatible computers running MS-DOS, and includes a sample MS-DOS executable. For execution, the PC version requires at least 1Mb of RAM and an 80386 or 486 processor machine with an 80x87 math co-processor. The standard distribution medium for the IBM PC version is a set of two 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The standard distribution medium for the CRAY version is also a 5.25 inch 360K MS-DOS format diskette, but alternate distribution media and formats are available upon request. The original version of SHABERTH was developed in FORTRAN IV at Lewis Research Center for use on a UNIVAC 1100 series computer. The Cray version was released in 1988, and was updated in 1990 to incorporate fluid rheological data for Rocket Propellant 1 (RP-1), thereby allowing the analysis of bearings lubricated with RP-1. The PC version is a port of the 1990 CRAY version and was developed in 1992 by SRS Technologies under contract to NASA Marshall Space Flight Center.
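The nested calculation schemes described above amount to equilibrium loops run inside one another. The Python sketch below shows only that control flow, with scalar placeholder update functions standing in for SHABERTH's thermal and load models; it is not the program's algorithm, just the nesting pattern.

```python
# Generic sketch of nested equilibrium iteration: an outer thermal loop and
# an inner load-equilibrium loop, each iterated to convergence. The update
# functions are placeholders, not SHABERTH models.
def solve_system(update_temps, update_loads, t0, x0, tol=1e-6, max_iter=100):
    temps = t0
    for _ in range(max_iter):
        # inner loop: shaft/bearing load equilibrium at fixed temperatures
        loads = x0
        for _ in range(max_iter):
            new_loads = update_loads(loads, temps)
            if abs(new_loads - loads) < tol:
                loads = new_loads
                break
            loads = new_loads
        # outer loop: update temperatures from the converged load state
        new_temps = update_temps(temps, loads)
        if abs(new_temps - temps) < tol:
            return new_temps, loads
        temps = new_temps
    return temps, loads

# toy scalar "models" just to show the control flow
temps, loads = solve_system(lambda t, x: 0.5 * t + 0.1 * x + 20.0,
                            lambda x, t: 0.3 * x + 0.01 * t + 1.0,
                            t0=25.0, x0=0.0)
print(round(temps, 3), round(loads, 3))
```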
Second-Year Accountability Report for WorkFirst Training Programs.
ERIC Educational Resources Information Center
Washington State Board for Community and Technical Colleges, Olympia.
In 1998, Washington passed into law WorkFirst, its version of the federal welfare reform program, Temporary Assistance for Needy Families (TANF). Colleges were funded for four training programs: (1) Pre-Employment Training; (2) Tuition Assistance; (3) Workplace Basic Skills; and (4) Families That Work. This paper presents the overall second-year…
FRAGSTATS: spatial pattern analysis program for quantifying landscape structure.
Kevin McGarigal; Barbara J. Marks
1995-01-01
This report describes a program, FRAGSTATS, developed to quantify landscape structure. FRAGSTATS offers a comprehensive choice of landscape metrics and was designed to be as versatile as possible. The program is almost completely automated and thus requires little technical training. Two separate versions of FRAGSTATS exist: one for vector images and one for raster...
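As a pointer to what quantifying landscape structure looks like in the raster case, the sketch below computes one simple class-level metric (patch count and mean patch size) by connected-component labeling. It uses an invented 4x4 cover grid and is in no way a substitute for FRAGSTATS' metric set.

```python
import numpy as np
from scipy import ndimage

# One raster landscape metric per cover class: number of patches and mean
# patch size (in cells), using 4-neighbor connected-component labeling.
cover = np.array([[1, 1, 0, 2],
                  [0, 1, 0, 2],
                  [0, 0, 1, 1],
                  [2, 0, 1, 1]])

for cls in np.unique(cover):
    mask = cover == cls
    labels, n_patches = ndimage.label(mask)          # 4-connectivity by default
    sizes = ndimage.sum(mask, labels, range(1, n_patches + 1))
    print("class", cls, "patches:", n_patches, "mean size:", sizes.mean())
```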
CHINESE GRAMMARS AND THE COMPUTER AT THE OHIO STATE UNIVERSITY. PRELIMINARY REPORT.
ERIC Educational Resources Information Center
MEYERS, L.F.; YANG, J.
Sample output sentences of various COMIT and SNOBOL programs for testing a Chinese generative grammar are presented. The grammar chosen for experimentation is a preliminary version of a transformational grammar. All of the COMIT programs and one of the SNOBOL programs use a linearized representation of tree structures, with additional numerical…
Understanding the Promise: A Typology of State and Local College Promise Programs
ERIC Educational Resources Information Center
Perna, Laura W.; Leigh, Elaine W.
2018-01-01
Over the past decade, but especially in the past few years, programs with a "promise" label have been advanced at the local, state, and federal levels. To advance understanding of the design, implementation, and impact of the many different versions of emerging programs, policymakers, practitioners, and researchers need an organizing…
Social Validity Evaluation of the FRIENDS for Life Program with Mexican Children
ERIC Educational Resources Information Center
Gallegos-Guajardo, Julia; Ruvalcaba-Romero, Norma Alicia; Garza-Tamez, Martha; Villegas-Guinea, Diana
2013-01-01
This study is the first social validity evaluation of the Spanish version of the "FRIENDS for Life" program with Mexican children. "FRIENDS for Life" is a cognitive-behavioral intervention aimed at increasing social and emotional competence and decreasing anxiety and depressive symptoms in children. The program is designed to…
RAWS II: A MULTIPLE REGRESSION ANALYSIS PROGRAM,
This memorandum gives instructions for the use and operation of a revised version of RAWS, a multiple regression analysis program. The program...of preprocessed data, the directed retention of variables, listing of the matrix of the normal equations and its inverse, and the bypassing of the regression analysis to provide the input variable statistics only. (Author)
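For readers unfamiliar with the terminology, the "matrix of the normal equations and its inverse" refers to X'X and (X'X)^-1 in ordinary least squares. The Python sketch below fits a small synthetic multiple regression that way; it is illustrative only and reproduces none of RAWS II's options.

```python
import numpy as np

# Minimal multiple-regression sketch via the normal equations X'X b = X'y.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])  # intercept + 2 predictors
y = X @ np.array([1.0, 2.0, -0.5]) + 0.1 * rng.normal(size=50)

XtX = X.T @ X                      # matrix of the normal equations
XtX_inv = np.linalg.inv(XtX)       # its inverse (as such programs can list)
beta = XtX_inv @ (X.T @ y)         # regression coefficients
print(np.round(beta, 3))
```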
ERIC Educational Resources Information Center
What Works Clearinghouse, 2006
2006-01-01
"Caring School Community[TM]" ("CSC") is a modified version of a program formerly known as the "Child Development Project." The program aims to promote core values, prosocial behavior, and a schoolwide feeling of community. The program consists of four elements originally developed for the "Child Development…