NASA Astrophysics Data System (ADS)
Bytev, Vladimir V.; Kniehl, Bernd A.
2016-09-01
We present a further extension of the HYPERDIRE project, which is devoted to the creation of a set of Mathematica-based program packages for manipulations with Horn-type hypergeometric functions on the basis of differential equations. Specifically, we present the implementation of the differential reduction for the Lauricella function F_C of three variables. Catalogue identifier: AEPP_v4_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEPP_v4_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 243461 No. of bytes in distributed program, including test data, etc.: 61610782 Distribution format: tar.gz Programming language: Mathematica. Computer: All computers running Mathematica. Operating system: Operating systems running Mathematica. Classification: 4.4. Does the new version supersede the previous version?: No, it significantly extends the previous version. Nature of problem: Reduction of the hypergeometric function F_C of three variables to a set of basis functions. Solution method: Differential reduction. Reasons for new version: The extension package allows the user to handle the Lauricella function F_C of three variables. Summary of revisions: The previous version remains unchanged. Running time: Depends on the complexity of the problem.
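The differential-reduction idea can be previewed on Mathematica's built-in functions: differentiation shifts the parameters of a hypergeometric function by integers, so derivatives connect contiguous functions. A minimal illustration of that principle (generic code, not the HYPERDIRE API):
  (* d/dz 2F1(a,b;c;z) = (a b/c) 2F1(a+1,b+1;c+1;z) *)
  Simplify[D[Hypergeometric2F1[a, b, c, z], z] -
    (a b/c) Hypergeometric2F1[a + 1, b + 1, c + 1, z]]   (* -> 0 *)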
QDENSITY—A Mathematica quantum computer simulation
NASA Astrophysics Data System (ADS)
Juliá-Díaz, Bruno; Burdis, Joseph M.; Tabakin, Frank
2009-03-01
This Mathematica 6.0 package is a simulation of a Quantum Computer. The program provides a modular, instructive approach for generating the basic elements that make up a quantum circuit. The main emphasis is on using the density matrix, although an approach using state vectors is also implemented in the package. The package commands are defined in Qdensity.m which contains the tools needed in quantum circuits, e.g., multiqubit kets, projectors, gates, etc. New version program summary Program title: QDENSITY 2.0 Catalogue identifier: ADXH_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADXH_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 26 055 No. of bytes in distributed program, including test data, etc.: 227 540 Distribution format: tar.gz Programming language: Mathematica 6.0 Operating system: Any which supports Mathematica; tested under Microsoft Windows XP, Macintosh OS X, and Linux FC4 Catalogue identifier of previous version: ADXH_v1_0 Journal reference of previous version: Comput. Phys. Comm. 174 (2006) 914 Classification: 4.15 Does the new version supersede the previous version?: Offers an alternative, more up-to-date implementation Nature of problem: Analysis and design of quantum circuits, quantum algorithms and quantum clusters. Solution method: A Mathematica package is provided which contains commands to create and analyze quantum circuits. Several Mathematica notebooks containing relevant examples: Teleportation, Shor's Algorithm and Grover's search are explained in detail. A tutorial, Tutorial.nb, is also enclosed. Reasons for new version: The package has been updated to make it fully compatible with Mathematica 6.0. Summary of revisions: The package has been updated to make it fully compatible with Mathematica 6.0. Running time: Most examples included in the package, e.g., the tutorial, Shor's examples, Teleportation examples and Grover's search, run in less than a minute on a Pentium 4 processor (2.6 GHz). The running time for a quantum computation depends crucially on the number of qubits employed.
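The density-matrix emphasis is easy to sketch in plain Mathematica; Qdensity.m wraps the same operations in higher-level commands for multiqubit kets, projectors and gates:
  ket0 = {{1}, {0}};                           (* |0> as a column vector *)
  had = {{1, 1}, {1, -1}}/Sqrt[2];             (* Hadamard gate *)
  rho = ket0 . ConjugateTranspose[ket0];       (* density matrix |0><0| *)
  rhoOut = had . rho . ConjugateTranspose[had];
  Tr[rhoOut . rhoOut]                          (* purity, = 1 for a pure state *)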
Spinors: A Mathematica package for doing spinor calculus in General Relativity
NASA Astrophysics Data System (ADS)
Gómez-Lobo, Alfonso García-Parrado; Martín-García, José M.
2012-10-01
The Spinors software is a Mathematica package which implements 2-component spinor calculus as devised by Penrose for General Relativity in dimension 3+1. The Spinors software is part of the xAct system, which is a collection of Mathematica packages to do tensor analysis by computer. In this paper we give a thorough description of Spinors and present practical examples of use. Program summary Program title: Spinors Catalogue identifier: AEMQ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 117039 No. of bytes in distributed program, including test data, etc.: 300404 Distribution format: tar.gz Programming language: Mathematica. Computer: Any computer running Mathematica 7.0 or higher. Operating system: Any operating system compatible with Mathematica 7.0 or higher. RAM: 94Mb in Mathematica 8.0. Classification: 1.5. External routines: Mathematica packages xCore, xPerm and xTensor which are part of the xAct system. These can be obtained at http://www.xact.es. Nature of problem: Manipulation and simplification of spinor expressions in General Relativity. Solution method: Adaptation of the tensor functionality of the xAct system for the specific situation of spinor calculus in four dimensional Lorentzian geometry. Restrictions: The software only works on 4-dimensional Lorentzian space-times with metric of signature (1, -1, -1, -1). There is no direct support for Dirac spinors. Unusual features: Easy rules to transform tensor expressions into spinor ones and back. Seamless integration of abstract index manipulation of spinor expressions with component computations. Running time: Under one second to handle and canonicalize standard spinorial expressions with a few dozen indices. (These expressions arise naturally in the transformation of a spinor expression into a tensor one or vice versa.)
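A session follows the usual xAct pattern; the sketch below assumes the spin-structure command described in the paper (DefSpinStructure), and its exact argument list should be checked against the package documentation:
  << xAct`Spinors`
  DefManifold[M, 4, {a, b, c, d}]
  DefMetric[-1, g[-a, -b], CD]    (* Lorentzian metric, signature (1, -1, -1, -1) *)
  DefSpinStructure[g]             (* spinor calculus built on top of g *)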
Lambda: A Mathematica package for operator product expansions in vertex algebras
NASA Astrophysics Data System (ADS)
Ekstrand, Joel
2011-02-01
We give an introduction to the Mathematica package Lambda, designed for calculating λ-brackets in both vertex algebras and SUSY vertex algebras. This is equivalent to calculating operator product expansions in two-dimensional conformal field theory. The syntax of λ-brackets is reviewed, and some simple examples are shown, both in component notation and in N=1 superfield notation. Program summary Program title: Lambda Catalogue identifier: AEHF_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHF_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License No. of lines in distributed program, including test data, etc.: 18 087 No. of bytes in distributed program, including test data, etc.: 131 812 Distribution format: tar.gz Programming language: Mathematica Computer: See specifications for running Mathematica V7 or above. Operating system: See specifications for running Mathematica V7 or above. RAM: Varies greatly depending on calculation to be performed. Classification: 4.2, 5, 11.1. Nature of problem: Calculate operator product expansions (OPEs) of composite fields in 2d conformal field theory. Solution method: Implementation of the algebraic formulation of OPEs given by vertex algebras, and especially by λ-brackets. Running time: Varies greatly depending on calculation requested. The example notebook provided takes about 3 s to run.
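As a concrete instance of what such brackets encode: for the Virasoro field T with central charge c, the λ-bracket relation is [T_λ T] = (∂ + 2λ)T + (c/12)λ³, which is exactly the TT operator product expansion of two-dimensional conformal field theory rewritten in λ-notation; Lambda evaluates brackets of this kind for composite fields.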
MESAFace, a graphical interface to analyze the MESA output
NASA Astrophysics Data System (ADS)
Giannotti, M.; Wise, M.; Mohammed, A.
2013-04-01
MESA (Modules for Experiments in Stellar Astrophysics) has become very popular among astrophysicists as a powerful and reliable code to simulate stellar evolution. Analyzing the output data thoroughly may, however, present some challenges and be rather time-consuming. Here we describe MESAFace, a graphical and dynamical interface which provides an intuitive, efficient and quick way to analyze the MESA output. Catalogue identifier: AEOQ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOQ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 19165 No. of bytes in distributed program, including test data, etc.: 6300592 Distribution format: tar.gz Programming language: Mathematica. Computer: Any computer capable of running Mathematica. Operating system: Any capable of running Mathematica. Tested on Linux, Mac, Windows XP, Windows 7. RAM: Recommended 2 Gigabytes or more. Supplementary material: Additional test data files are available. Classification: 1.7, 14. Nature of problem: Find a way to quickly and thoroughly analyze the output of a MESA run, including all the profiles, and have an efficient method to produce graphical representations of the data. Solution method: We created two scripts (to be run consecutively). The first one downloads all the data from a MESA run and organizes the profiles in order of age. All the files are saved as tables or arrays of tables which can then be accessed very quickly by Mathematica. The second script uses the Manipulate function to create a graphical interface which allows the user to choose what to plot from a set of menus and buttons. The information shown is updated in real time. The user can access very quickly all the data from the run under examination and visualize it with plots and tables. Unusual features: Moving the sliders in certain regions may cause an error message. This happens when Mathematica is asked to read nonexistent data. The error message, however, disappears when the sliders are moved back. This issue does not impair the functioning of the interface. Additional comments: The program uses the dynamical capabilities of Mathematica. When the program is opened, Mathematica prompts the user to “Enable Dynamics”. It is necessary to accept before proceeding. Running time: Depends on the size of the data downloaded, on where the data are stored (hard-drive or web), and on the speed of the computer or network connection. In general, downloading the data may take from a minute to several minutes. Loading directly from the web is slower. For example, downloading a 200 MB data folder (a total of 102 files) with a dual-core Intel laptop, P8700, 2 GB of RAM, at 2.53 GHz took about a minute from the hard-drive and about 23 min from the web (with a basic home wireless connection).
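The interface is built on Mathematica's Manipulate; a toy version of the pattern, with placeholder data standing in for the preloaded MESA profile tables:
  profiles = Table[Table[{x, Sin[k x]/k}, {x, 0., 10., 0.1}], {k, 1, 5}];
  Manipulate[ListLinePlot[profiles[[n]]], {n, 1, 5, 1}]   (* slider selects a profile *)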
Beam-plasma dielectric tensor with Mathematica
NASA Astrophysics Data System (ADS)
Bret, A.
2007-03-01
We present a Mathematica notebook allowing for the symbolic calculation of the 3×3 dielectric tensor of an electron-beam plasma system in the fluid approximation. Calculation is detailed for a cold relativistic electron beam entering a cold magnetized plasma, and for arbitrarily oriented wave vectors. We show how one can elaborate on this example to account for temperatures, arbitrarily oriented magnetic field or a different kind of plasma. Program summary Title of program: Tensor Catalog identifier: ADYT_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADYT_v1_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer for which the program is designed and others on which it has been tested: Computers: Any computer running Mathematica 4.1. Tested on DELL Dimension 5100 and IBM ThinkPad T42. Installations: ETSI Industriales, Universidad Castilla la Mancha, Ciudad Real, Spain Operating system under which the program has been tested: Windows XP Pro Programming language used: Mathematica 4.1 Memory required to execute with typical data: 7.17 Mbytes No. of bytes in distributed program, including test data, etc.: 33 439 No. of lines in distributed program, including test data, etc.: 3169 Distribution format: tar.gz Nature of the physical problem: The dielectric tensor of a relativistic beam plasma system may be quite involved to calculate symbolically when considering a magnetized plasma, kinetic pressure, collisions between species, and so on. The present Mathematica notebook performs the symbolic computation in terms of some usual dimensionless variables. Method of solution: The linearized relativistic fluid equations are directly entered and solved by Mathematica to express the first-order expression of the current. This expression is then introduced into a combination of Faraday and Ampère-Maxwell's equations to give the dielectric tensor. Some additional manipulations are needed to express the result in terms of the dimensionless variables. Restrictions on the complexity of the problem: Temperature effects are limited to small, i.e. non-relativistic, temperatures. The kinetic counterpart of the present Mathematica notebook will usually not compute the required integrals. Typical running time: About 1 minute on an Intel Centrino 1.5 GHz laptop with 512 MB of RAM. Unusual features of the program: None.
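The method can be previewed on the simplest possible case, a single cold unmagnetized species, where the same linearize-solve-substitute steps return the textbook result ε = 1 - ωp²/ω² (a toy sketch, not the distributed notebook):
  sol = Solve[-I w m v1 == q E1, v1];       (* linearized momentum equation *)
  j1 = q n0 v1 /. First[sol];               (* first-order current *)
  Simplify[1 + I j1/(eps0 w E1)]            (* -> 1 - n0 q^2/(eps0 m w^2) *)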
Automated symbolic calculations in nonequilibrium thermodynamics
NASA Astrophysics Data System (ADS)
Kröger, Martin; Hütter, Markus
2010-12-01
We cast the Jacobi identity for continuous fields into a local form which eliminates the need to perform any partial integration at the expense of performing variational derivatives. This allows us to test the Jacobi identity definitely and efficiently and to provide equations between different components defining a potential Poisson bracket. We provide a simple Mathematica™ notebook which allows one to perform this task conveniently, and which offers some additional functionalities of use within the framework of nonequilibrium thermodynamics: reversible equations of change for fields, and the conservation of entropy during the reversible dynamics. Program summary Program title: Poissonbracket.nb Catalogue identifier: AEGW_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 227 952 No. of bytes in distributed program, including test data, etc.: 268 918 Distribution format: tar.gz Programming language: Mathematica™ 7.0 Computer: Any computer running Mathematica™ 6.0 and later versions Operating system: Linux, MacOS, Windows RAM: 100 Mb Classification: 4.2, 5, 23 Nature of problem: Testing the Jacobi identity can be a very complex task depending on the structure of the Poisson bracket. The Mathematica™ notebook provided here solves this problem using a novel symbolic approach based on inherent properties of the variational derivative, highly suitable for the present tasks. As a by-product, calculations performed with the Poisson bracket assume a compact form. Solution method: The problem is first cast into a form which eliminates the need to perform partial integration for arbitrary functionals at the expense of performing variational derivatives. The corresponding equations are conveniently obtained using the symbolic programming environment Mathematica™. Running time: For the test cases and most typical cases in the literature, the running time is of the order of seconds or minutes, respectively.
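The elementary operation everything rests on is the variational derivative, which Mathematica's standard add-on package already provides; a generic example (not the notebook's own implementation):
  Needs["VariationalMethods`"]
  VariationalD[u[x]^2 D[u[x], x]^2/2, u[x], x]   (* δF/δu for a sample functional density *)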
SLAM, a Mathematica interface for SUSY spectrum generators
NASA Astrophysics Data System (ADS)
Marquard, Peter; Zerf, Nikolai
2014-03-01
We present and publish a Mathematica package, which can be used to automatically obtain any numerical MSSM input parameter from SUSY spectrum generators, which follow the SLHA standard, like SPheno, SOFTSUSY, SuSeFLAV or Suspect. The package enables a very convenient way of performing numerical evaluations within the MSSM using Mathematica. It implements easy-to-use predefined high-scale and low-scale scenarios like mSUGRA or m_h^max and, if needed, enables the user to directly specify the input required by the spectrum generators. In addition it supports automatic saving and loading of SUSY spectra to and from an SQL database, avoiding the rerun of a spectrum generator for a known spectrum. Catalogue identifier: AERX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERX_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 4387 No. of bytes in distributed program, including test data, etc.: 37748 Distribution format: tar.gz Programming language: Mathematica. Computer: Any computer running Mathematica version 6 or higher and providing bash and sed. Operating system: Linux. Classification: 11.1. External routines: A SUSY spectrum generator such as SPheno, SOFTSUSY, SuSeFLAV or SUSPECT Nature of problem: Interfacing published spectrum generators for automated creation, saving and loading of SUSY particle spectra. Solution method: SLAM automatically writes/reads SLHA spectrum generator input/output and is able to save/load generated data in/from a database. Restrictions: No general restrictions, specific restrictions are given in the manuscript. Running time: A single spectrum calculation takes much less than one second on a modern PC.
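A session has roughly the following shape; every name below is a hypothetical placeholder meant to convey the workflow, not SLAM's documented interface:
  Get["SLAM`"];
  (* hypothetical: request a spectrum in a predefined high-scale scenario *)
  spec = GetSpectrum["mSUGRA", {"m0" -> 1000, "m12" -> 800, "A0" -> 0, "tanb" -> 10}];
  "MStop1" /. spec   (* hypothetical key: read one mass from the SLHA output *)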
NASA Astrophysics Data System (ADS)
Haxton, Wick; Lunardini, Cecilia
2008-09-01
Semi-leptonic electroweak interactions in nuclei—such as β decay, μ capture, charged- and neutral-current neutrino reactions, and electron scattering—are described by a set of multipole operators carrying definite parity and angular momentum, obtained by projection from the underlying nuclear charge and three-current operators. If these nuclear operators are approximated by their one-body forms and expanded in the nucleon velocity through order |p|/M, where p and M are the nucleon momentum and mass, a set of seven multipole operators is obtained. Nuclear structure calculations are often performed in a basis of Slater determinants formed from harmonic oscillator orbitals, a choice that allows translational invariance to be preserved. Harmonic-oscillator single-particle matrix elements of the multipole operators can be evaluated analytically and expressed in terms of finite polynomials in q, where q is the magnitude of the three-momentum transfer. While results for such matrix elements are available in tabular form, with certain restrictions on quantum numbers, the task of determining the analytic form of a response function can still be quite tedious, requiring the folding of the tabulated matrix elements with the nuclear density matrix, and subsequent algebra to evaluate products of operators. Here we provide a Mathematica script for generating these matrix elements, which will allow users to carry out all such calculations by symbolic manipulation. This will eliminate the errors that may accompany hand calculations and speed the calculation of electroweak nuclear cross sections and rates. We illustrate the use of the new script by calculating the cross sections for charged- and neutral-current neutrino scattering in ¹²C. Program summary Program title: SevenOperators Catalogue identifier: AEAY_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEAY_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2227 No. of bytes in distributed program, including test data, etc.: 19 382 Distribution format: tar.gz Programming language: Mathematica Computer: Any computer running Mathematica; tested on Mac OS X PowerPC (32-bit) running Mathematica 6.0.0 Operating system: Any running Mathematica RAM: Memory requirements determined by Mathematica; 512 MB or greater RAM and hard drive space of at least 3.0 GB recommended Classification: 17.16, 17.19 Nature of problem: Algebraic evaluation of harmonic oscillator nuclear matrix elements for the one-body multipole operators governing semi-leptonic weak interactions, such as charged- or neutral-current neutrino scattering off nuclei. Solution method: Mathematica evaluation of associated angular momentum algebra and spherical Bessel function radial integrals. Running time: Depends on the complexity of the one-body density matrix employed, but times of a few seconds are typical.
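The structure being exploited is easy to reproduce in the lowest case: a harmonic-oscillator matrix element of a multipole operator collapses to a polynomial in q times a Gaussian. A generic check (not the script's own code):
  R0s[r_, b_] := 2/(b^(3/2) Pi^(1/4)) Exp[-r^2/(2 b^2)];   (* 0s radial wave function *)
  Integrate[R0s[r, b]^2 SphericalBesselJ[0, q r] r^2, {r, 0, Infinity},
    Assumptions -> {b > 0, q > 0}]   (* -> E^(-b^2 q^2/4) *)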
FAPT: A Mathematica package for calculations in QCD Fractional Analytic Perturbation Theory
NASA Astrophysics Data System (ADS)
Bakulev, Alexander P.; Khandramai, Vyacheslav L.
2013-01-01
We provide here all the procedures in Mathematica which are needed for the computation of the analytic images of the strong coupling constant powers in the Minkowski (A_ν(s; n_f) and A_ν^glob(s)) and Euclidean (A_ν(Q²; n_f) and A_ν^glob(Q²)) domains at arbitrary energy scales (s and Q², correspondingly) for both schemes — with a fixed number of active flavours n_f = 3, 4, 5, 6 and the global one taking into account all heavy-quark thresholds. These singularity-free couplings are indispensable elements of Analytic Perturbation Theory (APT) in QCD, proposed in [10,69,70], and its generalization — Fractional APT, suggested in [42,46,43], needed to apply the APT imperative for renormalization-group improved hadronic observables. Program summary Program title: FAPT Catalogue identifier: AENJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1985 No. of bytes in distributed program, including test data, etc.: 1895776 Distribution format: tar.gz Programming language: Mathematica. Computer: Any work-station or PC where Mathematica is running. Operating system: Windows XP, Mathematica (versions 5 and 7). Classification: 11.5. Nature of problem: The values of the analytic images A_ν(Q²) and A_ν(s) of the QCD running coupling powers α_s^ν(Q²) in the Euclidean and Minkowski regions, correspondingly, are determined through the spectral representation in the QCD Analytic Perturbation Theory (APT). In the program FAPT we collect all relevant formulas and various procedures which allow for a convenient evaluation of A_ν(Q²) and A_ν(s) using numerical integrations of the relevant spectral densities. Solution method: FAPT uses Mathematica functions to calculate different spectral densities and then performs numerical integration of these spectral integrals to obtain analytic images of different objects. Restrictions: It could be that for an unphysical choice of the input parameters the results are without any meaning. Running time: For all operations the run time does not exceed a few seconds. Usually numerical integration is not fast, so we advise the use of arrays of precalculated data and then applying the routine Interpolate (as shown in the supplied example of the program usage, namely in the notebook FAPT_Interp.nb).
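The precalculate-and-interpolate strategy recommended above looks like this, with a toy integrand standing in for the FAPT spectral densities:
  tab = Table[{q, NIntegrate[Exp[-s]/(s + q^2), {s, 0, Infinity}]}, {q, 0.5, 10., 0.5}];
  aInt = Interpolation[tab];   (* interpolate the precalculated array *)
  aInt[3.3]                    (* fast evaluation at arbitrary points *)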
mr: A C++ library for the matching and running of the Standard Model parameters
NASA Astrophysics Data System (ADS)
Kniehl, Bernd A.; Pikelner, Andrey F.; Veretin, Oleg L.
2016-09-01
We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS-bar renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library. Catalogue identifier: AFAI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AFAI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 517613 No. of bytes in distributed program, including test data, etc.: 2358729 Distribution format: tar.gz Programming language: C++. Computer: IBM PC. Operating system: Linux, Mac OS X. RAM: 1 GB Classification: 11.1. External routines: TSIL [1], OdeInt [2], boost [3] Nature of problem: The running parameters of the Standard Model renormalized in the MS-bar scheme at some high renormalization scale, which is chosen by the user, are evaluated in perturbation theory as precisely as possible in two steps. First, the initial conditions at the electroweak energy scale are evaluated from the Fermi constant GF and the pole masses of the W, Z, and Higgs bosons and the bottom and top quarks including the full two-loop threshold corrections. Second, the evolution to the high energy scale is performed by numerically solving the renormalization group evolution equations through three loops. Pure QCD corrections to the matching and running are included through four loops. Solution method: Numerical integration of analytic expressions. Additional comments: Available for download from URL: http://apik.github.io/mr/. The MathLink interface is tested to work with Mathematica 7-9 and, with an additional flag, also with Mathematica 10 under Linux and with Mathematica 10 under Mac OS X. Running time: Less than 1 second. References: [1] S. P. Martin and D. G. Robertson, Comput. Phys. Commun. 174 (2006) 133-151 [hep-ph/0501132]. [2] K. Ahnert and M. Mulansky, AIP Conf. Proc. 1389 (2011) 1586-1589 [arXiv:1110.3397 [cs.MS]].
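For orientation, the running half of the problem in its simplest form is a one-loop RGE integrated numerically; mr performs the same job with two-loop matching and three- to four-loop running (a toy Mathematica sketch, not the C++ implementation):
  b0 = 11 - (2/3) 5;   (* one-loop QCD beta coefficient for nf = 5 *)
  sol = NDSolve[{a'[t] == -(b0/(4 Pi)) a[t]^2, a[Log[91.19^2]] == 0.118},
     a, {t, Log[91.19^2], Log[10.^16]}];   (* t = Log[mu^2], starting at MZ *)
  a[Log[10.^8]] /. First[sol]              (* alpha_s at mu = 10^4 GeV *)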
Kranc: a Mathematica package to generate numerical codes for tensorial evolution equations
NASA Astrophysics Data System (ADS)
Husa, Sascha; Hinder, Ian; Lechner, Christiane
2006-06-01
We present a suite of Mathematica-based computer-algebra packages, termed "Kranc", which comprise a toolbox to convert certain (tensorial) systems of partial differential evolution equations to parallelized C or Fortran code for solving initial boundary value problems. Kranc can be used as a "rapid prototyping" system for physicists or mathematicians handling very complicated systems of partial differential equations, but through integration into the Cactus computational toolkit we can also produce efficient parallelized production codes. Our work is motivated by the field of numerical relativity, where Kranc is used as a research tool by the authors. In this paper we describe the design and implementation of both the Mathematica packages and the resulting code, we discuss some example applications, and provide results on the performance of an example numerical code for the Einstein equations. Program summary Title of program: Kranc Catalogue identifier: ADXS_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADXS_v1_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computer for which the program is designed and others on which it has been tested: General computers which run Mathematica (for code generation) and Cactus (for numerical simulations), tested under Linux Programming language used: Mathematica, C, Fortran 90 Memory required to execute with typical data: This depends on the number of variables and grid size, the included ADM example requires 4308 KB Has the code been vectorized or parallelized: The code is parallelized based on the Cactus framework. Number of bytes in distributed program, including test data, etc.: 1 578 142 Number of lines in distributed program, including test data, etc.: 11 711 Nature of physical problem: Solution of partial differential equations in three space dimensions, which are formulated as an initial value problem. In particular, the program is geared towards handling very complex tensorial equations as they appear, e.g., in numerical relativity. The worked out examples comprise the Klein-Gordon equations, the Maxwell equations, and the ADM formulation of the Einstein equations. Method of solution: The method of numerical solution is finite differencing and method of lines time integration, the numerical code is generated through a high level Mathematica interface. Restrictions on the complexity of the program: Typical numerical relativity applications will contain up to several dozen evolution variables and thousands of source terms, Cactus applications have shown scaling up to several thousand processors and grid sizes exceeding 500³. Typical running time: This depends on the number of variables and the grid size: the included ADM example takes approximately 100 seconds on a 1600 MHz Intel Pentium M processor. Unusual features of the program: Based on Mathematica and Cactus.
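Kranc's core step, turning symbolic expressions into compilable C, can be seen in miniature with Mathematica's built-in CForm; the real package of course generates complete, parallelized Cactus thorns rather than single expressions:
  fd = (phi[i + 1] - 2 phi[i] + phi[i - 1])/h^2;   (* centered second derivative *)
  CForm[fd]   (* -> (phi(-1 + i) - 2*phi(i) + phi(1 + i))/Power(h,2) *)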
CUGatesDensity—Quantum circuit analyser extended to density matrices
NASA Astrophysics Data System (ADS)
Loke, T.; Wang, J. B.
2013-12-01
CUGatesDensity is an extension of the original quantum circuit analyser CUGates (Loke and Wang, 2011) [7] to provide explicit support for the use of density matrices. The new package enables simulation of quantum circuits involving statistical ensembles of mixed quantum states. Such analysis is of vital importance in dealing with quantum decoherence, measurements, noise and error correction, and fault tolerant computation. Several examples involving mixed state quantum computation are presented to illustrate the use of this package. Catalogue identifier: AEPY_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEPY_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5368 No. of bytes in distributed program, including test data, etc.: 143994 Distribution format: tar.gz Programming language: Mathematica. Computer: Any computer installed with a copy of Mathematica 6.0 or higher. Operating system: Any system with a copy of Mathematica 6.0 or higher installed. Classification: 4.15. Nature of problem: To simulate arbitrarily complex quantum circuits composed of single/multiple qubit and qudit quantum gates with mixed state registers. Solution method: A density matrix representation for mixed states and a state vector representation for pure states are used. The construct is based on an irreducible form of matrix decomposition, which allows a highly efficient implementation of general controlled gates with multiple conditionals. Running time: The examples provided in the notebook CUGatesDensity.nb take approximately 30 s to run on a laptop PC.
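The central operation on mixed states is conjugation of the density matrix by a gate; a plain-Mathematica sketch of what the package's gate constructors perform:
  cnot = {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 0, 1}, {0, 0, 1, 0}};
  rho = KroneckerProduct[IdentityMatrix[2]/2, {{1, 0}, {0, 0}}];   (* mixed qubit ⊗ |0><0| *)
  rhoOut = cnot . rho . ConjugateTranspose[cnot];
  Tr[rhoOut . rhoOut]   (* purity 1/2 is preserved by the unitary gate *)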
The Invar tensor package: Differential invariants of Riemann
NASA Astrophysics Data System (ADS)
Martín-García, J. M.; Yllanes, D.; Portugal, R.
2008-10-01
The long standing problem of the relations among the scalar invariants of the Riemann tensor is computationally solved for all 6·10^5 objects with up to 12 derivatives of the metric. This covers cases ranging from products of up to 6 undifferentiated Riemann tensors to cases with up to 10 covariant derivatives of a single Riemann. We extend our computer algebra system Invar to produce within seconds a canonical form for any of those objects in terms of a basis. The process is as follows: (1) an invariant is converted in real time into a canonical form with respect to the permutation symmetries of the Riemann tensor; (2) Invar reads a database of more than 6·10^5 relations and applies those coming from the cyclic symmetry of the Riemann tensor; (3) then applies the relations coming from the Bianchi identity, (4) the relations coming from commutations of covariant derivatives, (5) the dimensionally-dependent identities for dimension 4, and finally (6) simplifies invariants that can be expressed as products of dual invariants. Invar runs on top of the tensor computer algebra systems xTensor (for Mathematica) and Canon (for Maple). Program summary Program title: Invar Tensor Package v2.0 Catalogue identifier: ADZK_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZK_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3 243 249 No. of bytes in distributed program, including test data, etc.: 939 Distribution format: tar.gz Programming language: Mathematica and Maple Computer: Any computer running Mathematica versions 5.0 to 6.0 or Maple versions 9 and 11 Operating system: Linux, Unix, Windows XP, MacOS RAM: 100 Mb Word size: 64 or 32 bits Supplementary material: The new database of relations is much larger than that for the previous version and therefore has not been included in the distribution. The Mathematica and Maple database files can be obtained from the program summary page. Classification: 1.5, 5 Does the new version supersede the previous version?: Yes. The previous version (1.0) only handled algebraic invariants. The current version (2.0) has been extended to cover differential invariants as well. Nature of problem: Manipulation and simplification of scalar polynomial expressions formed from the Riemann tensor and its covariant derivatives. Solution method: Algorithms of computational group theory to simplify expressions with tensors that obey permutation symmetries. Tables of syzygies of the scalar invariants of the Riemann tensor. Reasons for new version: With this new version, the user can manipulate differential invariants of the Riemann tensor. Differential invariants are required in many physical problems in classical and quantum gravity. Summary of revisions: The database of syzygies has been expanded by a factor of 30. New commands were added in order to deal with the enlarged database and to manipulate the covariant derivative. Restrictions: The present version only handles scalars, and not expressions with free indices. Additional comments: The distribution file for this program is over 53 Mbytes and therefore is not delivered directly when download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. Running time: One second to fully reduce any monomial of the Riemann tensor up to degree 7 or order 10 in terms of independent invariants.
The Mathematica notebook included in the distribution takes approximately 5 minutes to run.
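A session runs on top of xTensor; the sketch below assumes the canonicalization command RiemannSimplify named in the Invar papers, and the exact argument conventions should be checked against the documentation:
  << xAct`Invar`
  DefManifold[M, 4, {a, b, c, d, e, f}]
  DefMetric[-1, g[-a, -b], CD]
  RiemannSimplify[RiemannCD[-a, -b, -c, -d] RiemannCD[a, b, c, d]]   (* canonical form *)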
Generating and using truly random quantum states in Mathematica
NASA Astrophysics Data System (ADS)
Miszczak, Jarosław Adam
2012-01-01
The problem of generating random quantum states is of great interest from the quantum information theory point of view. In this paper we present a package for the Mathematica computing system harnessing a specific piece of hardware, namely the Quantis quantum random number generator (QRNG), for investigating statistical properties of quantum states. The described package implements a number of functions for generating random states, which use the Quantis QRNG as a source of randomness. It also provides procedures which can be used in simulations not related directly to quantum information processing. Program summary Program title: TRQS Catalogue identifier: AEKA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKA_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 7924 No. of bytes in distributed program, including test data, etc.: 88 651 Distribution format: tar.gz Programming language: Mathematica, C Computer: Any computer supporting a recent version of Mathematica and equipped with a Quantis quantum random number generator (QRNG, http://www.idquantique.com/true-random-number-generator/products-overview.html) Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit) RAM: Case dependent Classification: 4.15 Nature of problem: Generation of random density matrices. Solution method: Use of a physical quantum random number generator. Running time: Generating 100 random numbers takes about 1 second; generating 1000 random density matrices takes more than a minute.
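For comparison, the standard software-only route to a Hilbert-Schmidt random density matrix normalizes a Ginibre product; TRQS implements constructions of this kind with Quantis hardware randomness in place of Mathematica's pseudo-random numbers:
  n = 4;
  g = RandomVariate[NormalDistribution[], {n, n}] + I RandomVariate[NormalDistribution[], {n, n}];
  rho = g . ConjugateTranspose[g]/Tr[g . ConjugateTranspose[g]];
  {Tr[rho], Chop[Eigenvalues[rho]]}   (* unit trace, nonnegative spectrum *)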
Symbolic computation of the Hartree-Fock energy from a chiral EFT three-nucleon interaction at N²LO
NASA Astrophysics Data System (ADS)
Gebremariam, B.; Bogner, S. K.; Duguet, T.
2010-06-01
We present the first of a two-part Mathematica notebook collection that implements a symbolic approach for the application of the density matrix expansion (DME) to the Hartree-Fock (HF) energy from a chiral effective field theory (EFT) three-nucleon interaction at N²LO. The final output from the notebooks is a Skyrme-like energy density functional that provides a quasi-local approximation to the non-local HF energy. In this paper, we discuss the derivation of the HF energy and its simplification in terms of the scalar/vector-isoscalar/isovector parts of the one-body density matrix. Furthermore, a set of steps is described and illustrated on how to extend the approach to other three-nucleon interactions. Program summary Program title: SymbHFNNN Catalogue identifier: AEGC_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 96 666 No. of bytes in distributed program, including test data, etc.: 378 083 Distribution format: tar.gz Programming language: Mathematica 7.1 Computer: Any computer running Mathematica 6.0 and later versions Operating system: Windows XP, Linux/Unix RAM: 256 Mb Classification: 5, 17.16, 17.22 Nature of problem: The calculation of the HF energy from the chiral EFT three-nucleon interaction at N²LO involves tremendous spin-isospin algebra. The problem is compounded by the need to eventually obtain a quasi-local approximation to the HF energy, which requires the HF energy to be expressed in terms of the scalar/vector-isoscalar/isovector parts of the one-body density matrix. The Mathematica notebooks discussed in this paper solve the latter issue. Solution method: The HF energy from the chiral EFT three-nucleon interaction at N²LO is cast into a form suitable for an automatic simplification of the spin-isospin traces. Several Mathematica functions and symbolic manipulation techniques are used to obtain the result in terms of the scalar/vector-isoscalar/isovector parts of the one-body density matrix. Running time: Several hours
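The flavor of the spin-isospin algebra being automated: everything ultimately reduces to Pauli-matrix trace identities, as in this generic check (not the notebooks' code):
  pauli = PauliMatrix /@ {1, 2, 3};
  Table[Tr[pauli[[i]] . pauli[[j]]], {i, 3}, {j, 3}]   (* = 2 delta_ij *)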
Calculating the renormalisation group equations of a SUSY model with Susyno
NASA Astrophysics Data System (ADS)
Fonseca, Renato M.
2012-10-01
Susyno is a Mathematica package dedicated to the computation of the 2-loop renormalisation group equations of a supersymmetric model based on any gauge group (the only exception being multiple U(1) groups) and for any field content. Program summary Program title: Susyno Catalogue identifier: AEMX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 30829 No. of bytes in distributed program, including test data, etc.: 650170 Distribution format: tar.gz Programming language: Mathematica 7 or higher. Computer: All systems that Mathematica 7+ is available for (PC, Mac). Operating system: Any platform supporting Mathematica 7+ (Windows, Linux, Mac OS). Classification: 4.2, 5, 11.1. Nature of problem: Calculating the renormalisation group equations of a supersymmetric model involves using long and complicated general formulae [1, 2]. In addition, to apply them it is necessary to know the Lagrangian in its full form. Building the complete Lagrangian of models with small representations of SU(2) and SU(3) might be easy but in the general case of arbitrary representations of an arbitrary gauge group, this task can be hard, lengthy and error prone. Solution method: The Susyno package uses group theoretical functions to calculate the super-potential and the soft-SUSY-breaking Lagrangian of a supersymmetric model, and calculates the two-loop RGEs of the model using the general equations of [1, 2]. Susyno works for models based on any representation(s) of any gauge group (the only exception being multiple U(1) groups). Restrictions: As the program is based on the formalism of [1, 2], it shares its limitations. Running time can also be a significant restriction, in particular for models with many fields. Unusual features: Susyno contains functions that (a) calculate the Lagrangian of supersymmetric models and (b) calculate some group theoretical quantities. Some of these functions are available to the user and can be freely used. A built-in help system provides detailed information. Running time: Tests were made using a computer with an Intel Core i5 760 CPU, running under Ubuntu 11.04 and with Mathematica 8.0.1 installed. Using the option to suppress printing, the one- and two-loop beta functions of the MSSM were obtained in 2.5 s (NMSSM: 5.4 s). Note that the running time scales up very quickly with the total number of fields in the model. References: [1] S.P. Martin and M.T. Vaughn, Phys. Rev. D 50 (1994) 2282. [Erratum-ibid D 78 (2008) 039903] [arXiv:hep-ph/9311340]. [2] Y. Yamada, Phys. Rev. D 50 (1994) 3537 [arXiv:hep-ph/9401241].
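The group-theory bookkeeping Susyno automates starts from invariants such as Dynkin indices; for the SU(2) fundamental the defining check is one line of generic Mathematica (not Susyno's interface):
  t = PauliMatrix[#]/2 & /@ {1, 2, 3};   (* SU(2) fundamental generators *)
  Tr[t[[1]] . t[[1]]]                    (* Dynkin index T(fund) = 1/2 *)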
xPerm: fast index canonicalization for tensor computer algebra
NASA Astrophysics Data System (ADS)
Martín-García, José M.
2008-10-01
We present a very fast implementation of the Butler-Portugal algorithm for index canonicalization with respect to permutation symmetries. It is called xPerm, and has been written as a combination of a Mathematica package and a C subroutine. The latter performs the most demanding parts of the computations and can be linked from any other program or computer algebra system. We demonstrate with tests and timings the effectively polynomial performance of the Butler-Portugal algorithm with respect to the number of indices, though we also show a case in which it is exponential. Our implementation handles generic tensorial expressions with several dozen indices in hundredths of a second, or one hundred indices in a few seconds, clearly outperforming all other current canonicalizers. The code has already been under intensive testing for several years and has been essential in recent investigations in large-scale tensor computer algebra. Program summary Program title: xPerm Catalogue identifier: AEBH_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEBH_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 93 582 No. of bytes in distributed program, including test data, etc.: 1 537 832 Distribution format: tar.gz Programming language: C and Mathematica (version 5.0 or higher) Computer: Any computer running C and Mathematica (version 5.0 or higher) Operating system: Linux, Unix, Windows XP, MacOS RAM: 20 Mbyte Word size: 64 or 32 bits Classification: 1.5, 5 Nature of problem: Canonicalization of indexed expressions with respect to permutation symmetries. Solution method: The Butler-Portugal algorithm. Restrictions: Multiterm symmetries are not considered. Running time: A few seconds with generic expressions of up to 100 indices. The xPermDoc.nb notebook supplied with the distribution takes approximately one and a half hours to execute in full.
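What canonicalization with respect to permutation symmetries means in the simplest case, in generic code rather than xPerm's API: for a totally symmetric tensor, sorting the index slots already yields the canonical representative; xPerm handles arbitrary slot-symmetry groups, where naive sorting fails:
  canon[s_[inds__]] := s @@ Sort[{inds}]   (* valid only for a totally symmetric s *)
  canon[t[c, a, b]]                        (* -> t[a, b, c] *)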
NASA Astrophysics Data System (ADS)
Angeli, C.; Cimiraglia, R.
2013-02-01
A symbolic program performing the Formal Reduction of Density Operators (FRODO), formerly developed in the MuPAD computer algebra system with the purpose of evaluating the matrix elements of the electronic Hamiltonian between internally contracted functions in a complete active space (CAS) scheme, has been rewritten in Mathematica. New version program summary Program title: FRODO Catalogue identifier: ADVY_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADVY_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3878 No. of bytes in distributed program, including test data, etc.: 170729 Distribution format: tar.gz Programming language: Mathematica Computer: Any computer on which the Mathematica computer algebra system can be installed Operating system: Linux Classification: 5 Catalogue identifier of previous version: ADVY_v1_0 Journal reference of previous version: Comput. Phys. Comm. 171 (2005) 63 Does the new version supersede the previous version?: No Nature of problem: In order to improve on the CAS-SCF wavefunction one can resort to multireference perturbation theory or configuration interaction based on internally contracted functions (ICFs), which are obtained by application of the excitation operators to the reference CAS-SCF wavefunction. The previous formulation of such matrix elements in the MuPAD computer algebra system has been rewritten using Mathematica. Solution method: The method adopted consists in successively eliminating all occurrences of inactive orbital indices (core and virtual) from the products of excitation operators which appear in the definition of the ICFs and in the electronic Hamiltonian expressed in the second quantization formalism. Reasons for new version: Some years ago we published in this journal a couple of papers [1, 2], hereafter referred to as papers I and II, respectively, dedicated to the automated evaluation of the matrix elements of the molecular electronic Hamiltonian between internally contracted functions [3] (ICFs). In paper II the program FRODO (after Formal Reduction Of Density Operators) was presented with the purpose of providing working formulas for each occurrence of the ICFs. The original FRODO program was written in the MuPAD computer algebra system [4] and was actively used in our group for the generation of the matrix elements to be employed in the third-order n-electron valence state perturbation theory (NEVPT) [5-8] as well as in the internally contracted configuration interaction (IC-CI) [9]. We present a new version of the program FRODO written in the Mathematica system [10]. The reason for the rewriting of the program lies in the fact that, on the one hand, MuPAD no longer seems to be available as a stand-alone system and, on the other hand, Mathematica, due to its ubiquity, appears to be the computer algebra system most widely used nowadays. Restrictions: The program is limited to no more than doubly excited ICFs. Running time: The examples described in the Readme file take a few seconds to run. References: [1] C. Angeli, R. Cimiraglia, Comput. Phys. Comm. 166 (2005) 53. [2] C. Angeli, R. Cimiraglia, Comput. Phys. Comm. 171 (2005) 63. [3] H.-J. Werner, P. J. Knowles, Adv. Chem. Phys. 89 (1988) 5803. [4] B. Fuchssteiner, W. Oevel: http://www.mupad.de, MuPAD Research Group, University of Paderborn. MuPAD version 2.5.3 for Linux. [5] C. Angeli, R. Cimiraglia, S. Evangelisti, T. Leininger, J.-P. Malrieu, J. Chem. Phys. 114 (2001) 10252. [6] C. Angeli, R. Cimiraglia, J.-P. Malrieu, J. Chem. Phys. 117 (2002) 9138. [7] C. Angeli, B. Bories, A. Cavallini, R. Cimiraglia, J. Chem. Phys. 124 (2006) 054108. [8] C. Angeli, M. Pastore, R. Cimiraglia, Theor. Chem. Acc. 117 (2007) 743. [9] C. Angeli, R. Cimiraglia, Mol. Phys., in press, DOI:10.1080/00268976.2012.689872. [10] http://www.wolfram.com/Mathematica. Mathematica version 8 for Linux.
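The elimination step rests on moving annihilators past creators with the canonical anticommutation relations; a toy rewriting rule shows the shape of the reduction (not FRODO's actual code):
  reduce = op[ann[p_], cre[q_], rest___] :>
     del[p, q] op[rest] - op[cre[q], ann[p], rest];   (* {a_p, a_q^+} = delta_pq *)
  op[ann[p], cre[q]] /. reduce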
Strongdeco: Expansion of analytical, strongly correlated quantum states into a many-body basis
NASA Astrophysics Data System (ADS)
Juliá-Díaz, Bruno; Graß, Tobias
2012-03-01
We provide a Mathematica code for decomposing strongly correlated quantum states described by a first-quantized, analytical wave function into many-body Fock states. Within them, the single-particle occupations refer to the subset of Fock-Darwin functions with no nodes. Such states, commonly appearing in two-dimensional systems subjected to gauge fields, were first discussed in the context of quantum Hall physics and are nowadays very relevant in the field of ultracold quantum gases. As important examples, we explicitly apply our decomposition scheme to the prominent Laughlin and Pfaffian states. This allows for easily calculating the overlap between arbitrary states with these highly correlated test states, and thus provides a useful tool to classify correlated quantum systems. Furthermore, we can directly read off the angular momentum distribution of a state from its decomposition. Finally we make use of our code to calculate the normalization factors for Laughlin's famous quasi-particle/quasi-hole excitations, from which we gain insight into the intriguing fractional behavior of these excitations. Program summary Program title: Strongdeco Catalogue identifier: AELA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELA_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5475 No. of bytes in distributed program, including test data, etc.: 31 071 Distribution format: tar.gz Programming language: Mathematica Computer: Any computer on which Mathematica can be installed Operating system: Linux, Windows, Mac Classification: 2.9 Nature of problem: Analysis of strongly correlated quantum states. Solution method: The program makes use of the tools developed in Mathematica to deal with multivariate polynomials to decompose analytical strongly correlated states of bosons and fermions into a standard many-body basis. Operations with polynomials, determinants and permanents are the basic tools. Running time: The distributed notebook takes a couple of minutes to run.
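The polynomial machinery in miniature: expanding a first-quantized wave function in monomials is precisely the decomposition into Fock-Darwin occupations. For the two-boson ν = 1/2 Laughlin state (generic code, not the distributed notebook):
  psi = (z1 - z2)^2;
  CoefficientRules[Expand[psi], {z1, z2}]
  (* {{2,0} -> 1, {1,1} -> -2, {0,2} -> 1}: amplitudes on angular-momentum orbitals *)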
Construction of SO(5)⊃SO(3) spherical harmonics and Clebsch-Gordan coefficients
NASA Astrophysics Data System (ADS)
Caprio, M. A.; Rowe, D. J.; Welsh, T. A.
2009-07-01
The SO(5)⊃SO(3) spherical harmonics form a natural basis for expansion of nuclear collective model angular wave functions. They underlie the recently-proposed algebraic method for diagonalization of the nuclear collective model Hamiltonian in an SU(1,1)×SO(5) basis. We present a computer code for explicit construction of the SO(5)⊃SO(3) spherical harmonics and use them to compute the Clebsch-Gordan coefficients needed for collective model calculations in an SO(3)-coupled basis. With these Clebsch-Gordan coefficients it becomes possible to compute the matrix elements of collective model observables by purely algebraic methods. Program summary Program title: GammaHarmonic Catalogue identifier: AECY_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AECY_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 346 421 No. of bytes in distributed program, including test data, etc.: 16 037 234 Distribution format: tar.gz Programming language: Mathematica 6 Computer: Any which supports Mathematica Operating system: Any which supports Mathematica; tested under Microsoft Windows XP and Linux Classification: 4.2 Nature of problem: Explicit construction of SO(5) ⊃ SO(3) spherical harmonics on S⁴. Evaluation of SO(3)-reduced matrix elements and SO(5) ⊃ SO(3) Clebsch-Gordan coefficients (isoscalar factors). Solution method: Construction of SO(5) ⊃ SO(3) spherical harmonics by orthonormalization, obtained from a generating set of functions, according to the method of Rowe, Turner, and Repka [1]. Matrix elements and Clebsch-Gordan coefficients follow by construction and integration of SO(3) scalar products. Running time: Depends strongly on the maximum SO(5) and SO(3) representation labels involved. A few minutes for the calculation in the Mathematica notebook. References: [1] D.J. Rowe, P.S. Turner, J. Repka, J. Math. Phys. 45 (2004) 2761.
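The orthonormalization method can be previewed in a simpler setting: Mathematica's Orthogonalize accepts an arbitrary inner product, here an integral over an interval standing in for the SO(3)-invariant product on S⁴:
  Orthogonalize[{1, x, x^2}, Integrate[#1 #2, {x, -1, 1}] &]   (* Legendre-like basis *)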
ERIC Educational Resources Information Center
Matsumoto, Paul S.
2014-01-01
The article describes the use of Mathematica, a computer algebra system (CAS), in a high school chemistry course. Mathematica was used to generate a graph, where a slider controls the value of parameter(s) in the equation; thus, students can visualize the effect of the parameter(s) on the behavior of the system. Also, Mathematica can show the…
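A minimal Manipulate of the kind described, with a slider controlling the rate constant of a first-order decay (an illustrative guess at the sort of graph meant, not the article's own code):
  Manipulate[
   Plot[Exp[-k t], {t, 0, 10}, PlotRange -> {0, 1}, AxesLabel -> {"t", "concentration"}],
   {k, 0.1, 2}]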
NASA Astrophysics Data System (ADS)
Heusler, Stefan
2006-12-01
The main focus of the second, enlarged edition of the book Mathematica for Theoretical Physics is on computational examples using the computer program Mathematica in various areas in physics. It is a notebook rather than a textbook. Indeed, the book is just a printout of the Mathematica notebooks included on the CD. The second edition is divided into two volumes, the first covering classical mechanics and nonlinear dynamics, the second dealing with examples in electrodynamics, quantum mechanics, general relativity and fractal geometry. The second volume is not suited for newcomers because basic and simple physical ideas which lead to complex formulas are not explained in detail. Instead, the computer technology makes it possible to write down and manipulate formulas of practically any length. For researchers with experience in computing, the book contains a lot of interesting and non-trivial examples. Most of the examples discussed are standard textbook problems, but the power of Mathematica opens the path to more sophisticated solutions. For example, the exact solution for the perihelion shift of Mercury within general relativity is worked out in detail using elliptic functions. The virial equation of state for molecules interacting through Lennard-Jones-like potentials is discussed, including both classical and quantum corrections to the second virial coefficient. Interestingly, closed solutions become available using sophisticated computing methods within Mathematica. In my opinion, the textbook should not show formulas in detail which cover three or more pages—these technical data should just be contained on the CD. Instead, the textbook should focus on more detailed explanation of the physical concepts behind the technicalities. The discussion of the virial equation would benefit much from replacing 15 pages of Mathematica output with 15 pages of further explanation and motivation. In this combination, the power of computing merged with physical intuition would be of benefit even for newcomers. In summary, this book shows in a convincing manner how classical problems in physics can be attacked with modern computing technology. The second volume is interesting for experienced users of Mathematica. For students, the textbook can be very useful in combination with a seminar.
FeynArts model file for MSSM transition counterterms from DREG to DRED
NASA Astrophysics Data System (ADS)
Stöckinger, Dominik; Varšo, Philipp
2012-02-01
The FeynArts model file MSSMdreg2dred implements MSSM transition counterterms which can convert one-loop Green functions from dimensional regularization to dimensional reduction. They correspond to a slight extension of the well-known Martin/Vaughn counterterms, specialized to the MSSM, and can serve also as supersymmetry-restoring counterterms. The paper provides full analytic results for the counterterms and gives one- and two-loop usage examples. The model file can simplify combining MS-bar parton distribution functions with supersymmetric renormalization or avoiding the renormalization of ɛ-scalars in dimensional reduction. Program summary Program title: MSSMdreg2dred.mod Catalogue identifier: AEKR_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKR_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: LGPL-License [1] No. of lines in distributed program, including test data, etc.: 7600 No. of bytes in distributed program, including test data, etc.: 197 629 Distribution format: tar.gz Programming language: Mathematica, FeynArts Computer: Any, capable of running Mathematica and FeynArts Operating system: Any with a running Mathematica and FeynArts installation Classification: 4.4, 5, 11.1 Subprograms used: FeynArts (catalogue id. ADOW_v1_0, CPC 140 (2001) 418) Nature of problem: The computation of one-loop Feynman diagrams in the minimal supersymmetric standard model (MSSM) requires regularization. Two schemes, dimensional regularization and dimensional reduction, are both common but have different pros and cons. In order to combine the advantages of both schemes one would like to easily convert existing results from one scheme into the other. Solution method: Finite counterterms are constructed which correspond precisely to the one-loop scheme differences for the MSSM. They are provided as a FeynArts [2] model file. Using this model file together with FeynArts, the (ultra-violet) regularization of any MSSM one-loop Green function is switched automatically from dimensional regularization to dimensional reduction. In particular the counterterms serve as supersymmetry-restoring counterterms for dimensional regularization. Restrictions: The counterterms are restricted to the one-loop level and the MSSM. Running time: A few seconds to generate typical Feynman graphs with FeynArts.
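Usage follows the standard FeynArts pipeline with the model selected by name; the external-field specification below is illustrative:
  << FeynArts`
  tops = CreateTopologies[1, 1 -> 2];              (* one-loop 1 -> 2 topologies *)
  ins = InsertFields[tops, F[1] -> {F[1], V[1]},   (* illustrative process *)
     Model -> "MSSMdreg2dred"];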
HEPMath 1.4: A Mathematica package for semi-automatic computations in high energy physics
NASA Astrophysics Data System (ADS)
Wiebusch, Martin
2015-10-01
This article introduces the Mathematica package HEPMath which provides a number of utilities and algorithms for High Energy Physics computations in Mathematica. Its functionality is similar to packages like FormCalc or FeynCalc, but it takes a more complete and extensible approach to implementing common High Energy Physics notations in the Mathematica language, in particular those related to tensors and index contractions. It also provides a more flexible method for the generation of numerical code which is based on new features for C code generation in Mathematica. In particular it can automatically generate Python extension modules which make the compiled functions callable from Python, thus eliminating the need to write any code in a low-level language like C or Fortran. It also contains seamless interfaces to LHAPDF, FeynArts, and LoopTools.
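A generic illustration of the underlying Mathematica code-generation capability (using only the built-in CCodeGenerator` package, not HEPMath's own interface; the compiled expression is a made-up example):

    Needs["CCodeGenerator`"]
    (* compile a toy expression and emit standalone C source as a string *)
    cf = Compile[{{s, _Real}, {m, _Real}}, Sqrt[s] (1 - 4 m^2/s)];
    src = CCodeStringGenerate[cf, "amplitude"];
    StringTake[src, 200]   (* peek at the generated C code *)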
NASA Astrophysics Data System (ADS)
Nazarov, Anton
2012-11-01
In this paper we present Affine.m, a program for computations in the representation theory of finite-dimensional and affine Lie algebras, and describe the implemented algorithms. The algorithms are based on the properties of weights and Weyl symmetry. The computation of weight multiplicities in irreducible and Verma modules, the branching of representations, and tensor product decomposition are the most important problems for us. These problems have numerous applications in physics and we provide some examples of these applications. The program is implemented in the popular computer algebra system Mathematica and works with finite-dimensional and affine Lie algebras. Catalogue identifier: AENA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENB_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, UK Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 24 844 No. of bytes in distributed program, including test data, etc.: 1 045 908 Distribution format: tar.gz Programming language: Mathematica. Computer: i386-i686, x86_64. Operating system: Linux, Windows, Mac OS, Solaris. RAM: 5-500 MB Classification: 4.2, 5. Nature of problem: Representation theory of finite-dimensional Lie algebras has many applications in different branches of physics, including elementary particle physics, molecular physics, nuclear physics. Representations of affine Lie algebras appear in string theories and two-dimensional conformal field theory used for the description of critical phenomena in two-dimensional systems. Lie symmetries also play a major role in the study of quantum integrable systems. Solution method: We work with weights and roots of finite-dimensional and affine Lie algebras and use Weyl symmetry extensively. The central problems, which are the computation of weight multiplicities and of branching and fusion coefficients, are solved using one general recurrent algorithm based on a generalization of the Weyl character formula. We also offer an alternative implementation based on the Freudenthal multiplicity formula, which can be faster in some cases. Restrictions: Computational complexity grows fast with the rank of an algebra, so computations for algebras of ranks greater than 8 are not practical. Unusual features: We offer the possibility of using traditional mathematical notation for the objects of representation theory in computations if Affine.m is used in the Mathematica notebook interface. Running time: From seconds to days depending on the rank of the algebra and the complexity of the representation.
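Affine.m's own commands are not quoted in this abstract, so the following minimal sketch only illustrates the simplest instance of tensor product decomposition, the su(2) Clebsch-Gordan series, with plain Mathematica:

    (* spin-j1 x spin-j2 decomposes into spins |j1 - j2|, ..., j1 + j2 *)
    decompose[j1_, j2_] := Range[Abs[j1 - j2], j1 + j2];
    decompose[1, 3/2]   (* {1/2, 3/2, 5/2} *)
    (* dimension check: (2 j1 + 1)(2 j2 + 1) equals the sum of (2 j + 1) *)
    (2*1 + 1) (2*(3/2) + 1) == Total[(2 # + 1) & /@ decompose[1, 3/2]]   (* True *)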
Gas Permeation Computations with Mathematica
ERIC Educational Resources Information Center
Binous, Housam
2006-01-01
We show a new approach, based on the utilization of Mathematica, to solve gas permeation problems using membranes. We start with the design of a membrane unit for the separation of a multicomponent mixture. The built-in Mathematica function, FindRoot, allows one to solve seven simultaneous equations instead of using the iterative approach of…
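A generic FindRoot call for a coupled system (unrelated to the membrane model's seven equations, whose variables are not given in the snippet) has the shape:

    FindRoot[{x^2 + y^2 == 4, y == Exp[x]}, {{x, 1}, {y, 1}}]
    (* {x -> 0.639..., y -> 1.89...}: a simultaneous numerical root *)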
Projectile Motion with Mathematica.
ERIC Educational Resources Information Center
de Alwis, Tilak
2000-01-01
Describes how to use the computer algebra system (CAS) Mathematica to analyze projectile motion with and without air resistance. These experiments result in several conjectures leading to theorems. (Contains 17 references.) (Author/ASK)
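A minimal sketch of the kind of comparison described (the launch velocity and linear drag coefficient below are hypothetical, not taken from the article):

    g = 9.8; b = 0.2; vx0 = 10; vy0 = 10;
    (* linear air resistance: x'' = -b x', y'' = -g - b y' *)
    drag = First[NDSolve[{x''[t] == -b x'[t], y''[t] == -g - b y'[t],
        x[0] == 0, y[0] == 0, x'[0] == vx0, y'[0] == vy0}, {x, y}, {t, 0, 2}]];
    ParametricPlot[Evaluate[{{vx0 t, vy0 t - g t^2/2},   (* no air resistance *)
        {x[t], y[t]} /. drag}], {t, 0, 2}]               (* with drag *)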
ERIC Educational Resources Information Center
Ardiç, Mehmet Alper; Isleyen, Tevfik
2017-01-01
This study aimed at determining the secondary school mathematics teachers' and students' views on computer-assisted mathematics instruction (CAMI) conducted via Mathematica. Accordingly, three mathematics teachers in Adiyaman and nine 10th-grade students participated in the research. Firstly, the researchers trained the mathematics teachers in the…
Computing Properties of Hadrons, Nuclei and Nuclear Matter from Quantum Chromodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savage, Martin J.
This project was part of a coordinated software development effort which the nuclear physics lattice QCD community pursues in order to ensure that lattice calculations can make optimal use of present and forthcoming leadership-class and dedicated hardware, including that of the national laboratories, and to prepare for the exploitation of future computational resources in the exascale era. The UW team improved and extended software libraries used in lattice QCD calculations related to multi-nucleon systems, enhanced production running codes related to load balancing multi-nucleon production on large-scale computing platforms, developed SQLite (addressable database) interfaces to efficiently archive and analyze multi-nucleon data, and developed a Mathematica interface for the SQLite databases.
On One Unusual Method of Computation of Limits of Rational Functions in the Program Mathematica[R]
ERIC Educational Resources Information Center
Hora, Jaroslav; Pech, Pavel
2005-01-01
Computing limits of functions is a traditional part of mathematical analysis which many students find very difficult. An algorithm for the elimination of quantifiers in the field of real numbers is now implemented in the program Mathematica. This offers a non-traditional view of this classical theme. (Contains 1 table.)
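Mathematica's built-in real quantifier elimination can check a limit of a rational function straight from the epsilon-delta definition; a small sketch (this may be slow for more complicated functions):

    f[x_] := (x^2 - 1)/(x - 1);
    Limit[f[x], x -> 1]   (* 2 *)
    (* the same fact as a quantified statement over the reals *)
    Resolve[ForAll[eps, eps > 0, Exists[del, del > 0,
        ForAll[x, 0 < Abs[x - 1] < del, Abs[f[x] - 2] < eps]]], Reals]   (* True *)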
Sequences, Series, and Mathematica.
ERIC Educational Resources Information Center
Mathews, John H.
1992-01-01
Describes how the computer algebra system Mathematica can be used to enhance the teaching of the topics of sequences and series. Examines its capabilities to find exact, approximate, and graphically generated approximate solutions to problems from these topics and to understand proofs about sequences. (MDH)
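The exact, approximate, and qualitative modes of working mentioned correspond to built-in commands; for example:

    Sum[1/n^2, {n, 1, Infinity}]             (* exact: Pi^2/6 *)
    NSum[(-1)^(n + 1)/n, {n, 1, Infinity}]   (* approximate: 0.693147 = Log[2] *)
    SumConvergence[1/n^p, n]                 (* condition on p for convergence *)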
CrasyDSE: A framework for solving Dyson–Schwinger equations
Huber, Markus Q.; Mitter, Mario
2012-01-01
Dyson–Schwinger equations are important tools for non-perturbative analyses of quantum field theories. For example, they are very useful for investigations in quantum chromodynamics and related theories. However, sometimes progress is impeded by the complexity of the equations. Thus automating parts of the calculations will certainly be helpful in future investigations. In this article we present a framework for such an automation based on a C++ code that can deal with a large number of Green functions. Since the creation of the expressions for the integrals of the Dyson–Schwinger equations also needs to be automated, we defer this task to a Mathematica notebook. We illustrate the complete workflow with an example from Yang–Mills theory coupled to a fundamental scalar field that has been investigated recently. As a second example we calculate the propagators of pure Yang–Mills theory. Our code can serve as a basis for many further investigations where the equations are too complicated to tackle by hand. It also can easily be combined with DoFun, a program for the derivation of Dyson–Schwinger equations. Program summary Program title: CrasyDSE Catalogue identifier: AEMY_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMY_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 49030 No. of bytes in distributed program, including test data, etc.: 303958 Distribution format: tar.gz Programming language: Mathematica 8 and higher, C++. Computer: All on which Mathematica and C++ are available. Operating system: All on which Mathematica and C++ are available (Windows, Unix, Mac OS). Classification: 11.1, 11.4, 11.5, 11.6. Nature of problem: Solve (large) systems of Dyson–Schwinger equations numerically. Solution method: Create C++ functions in Mathematica to be used for the numeric code in C++. This code uses structures to handle large numbers of Green functions. Unusual features: Provides a tool to convert Mathematica expressions into C++ expressions including conversion of function names. Running time: Depending on the complexity of the investigated system solving the equations numerically can take seconds on a desktop PC to hours on a cluster. PMID:25540463
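The Mathematica-to-C++ conversion that CrasyDSE automates can be illustrated, in much-reduced form, with the built-in CForm (the package's converter additionally maps function names, as noted under Unusual features):

    (* a closed-form kernel of the kind that might enter an integrand *)
    expr = Integrate[x^2 Exp[-a x], {x, 0, Infinity}, Assumptions -> a > 0];
    ToString[CForm[expr]]   (* "2/Power(a,3)", ready to paste into C++ *)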
QDENSITY—A Mathematica Quantum Computer simulation
NASA Astrophysics Data System (ADS)
Juliá-Díaz, Bruno; Burdis, Joseph M.; Tabakin, Frank
2006-06-01
This Mathematica 5.2 package is a simulation of a Quantum Computer. The program provides a modular, instructive approach for generating the basic elements that make up a quantum circuit. The main emphasis is on using the density matrix, although an approach using state vectors is also implemented in the package. The package commands are defined in Qdensity.m which contains the tools needed in quantum circuits, e.g., multiqubit kets, projectors, gates, etc. Selected examples of the basic commands are presented here and a tutorial notebook, Tutorial.nb, is provided with the package (available on our website) that serves as a full guide to the package. Application is then made to a variety of relevant cases, including Teleportation, Quantum Fourier transform, Grover's search and Shor's algorithm, in separate notebooks: QFT.nb, Teleportation.nb, Grover.nb and Shor.nb, where each algorithm is explained in detail. Finally, two examples of the construction and manipulation of cluster states, which are part of "one way computing" ideas, are included as an additional tool in the notebook Cluster.nb. A Mathematica palette containing most commands in QDENSITY is also included: QDENSpalette.nb. Program summary Title of program: QDENSITY Catalogue identifier: ADXH_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADXH_v1_0 Program available from: CPC Program Library, Queen's University of Belfast, N. Ireland Operating systems: Any which supports Mathematica; tested under Microsoft Windows XP, Macintosh OS X, and Linux FC4 Programming language used: Mathematica 5.2 No. of bytes in distributed program, including test data, etc.: 180 581 No. of lines in distributed program, including test data, etc.: 19 382 Distribution format: tar.gz Method of solution: A Mathematica package is provided which contains commands to create and analyze quantum circuits. Several Mathematica notebooks containing relevant examples: Teleportation, Shor's Algorithm and Grover's search are explained in detail. A tutorial, Tutorial.nb is also enclosed. QDENSITY is available at http://www.pitt.edu/~tabakin/QDENSITY.
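As a generic sketch of the density-matrix formalism the package emphasizes (QDENSITY's own multiqubit kets, projectors and gates are documented in Tutorial.nb), a Bell state can be built from Mathematica primitives:

    ket0 = {1, 0}; ket1 = {0, 1};
    (* |00> + |11>, normalized, as a length-4 vector *)
    bell = (Flatten[KroneckerProduct[ket0, ket0]] +
            Flatten[KroneckerProduct[ket1, ket1]])/Sqrt[2];
    rho = Outer[Times, bell, Conjugate[bell]];   (* pure-state density matrix *)
    Tr[rho]   (* 1 *)
    (* tracing out the second qubit leaves the maximally mixed state *)
    TensorContract[ArrayReshape[rho, {2, 2, 2, 2}], {{2, 4}}]   (* {{1/2, 0}, {0, 1/2}} *)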
Creating Three-Dimensional Scenes
ERIC Educational Resources Information Center
Krumpe, Norm
2005-01-01
Persistence of Vision Raytracer (POV-Ray), a free computer program for creating photo-realistic three-dimensional scenes, is discussed, along with a link for Mathematica users interested in generating POV-Ray files from within Mathematica. POV-Ray has great potential in secondary mathematics classrooms and helps in strengthening students' visualization…
MultivariateResidues: A Mathematica package for computing multivariate residues
NASA Astrophysics Data System (ADS)
Larsen, Kasper J.; Rietkerk, Robbert
2018-01-01
Multivariate residues appear in many different contexts in theoretical physics and algebraic geometry. In theoretical physics, for example, they give the proper definition of generalized-unitarity cuts, and they play a central role in the Grassmannian formulation of the S-matrix by Arkani-Hamed et al. In realistic cases their evaluation can be non-trivial. In this paper we provide a Mathematica package for the efficient evaluation of multivariate residues based on methods from computational algebraic geometry.
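When the denominator factors are aligned with the variables, a multivariate residue reduces to iterated univariate residues, which the built-in Residue already handles; the package's contribution is the general (e.g. non-factorizable) case:

    f = 1/(z1 z2 (z1 + z2 - 1));
    (* residue at the pole z1 = z2 = 0 via iterated residues *)
    Residue[Residue[f, {z2, 0}], {z1, 0}]   (* -1 *)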
Using Mathematica to Teach Process Units: A Distillation Case Study
ERIC Educational Resources Information Center
Rasteiro, Maria G.; Bernardo, Fernando P.; Saraiva, Pedro M.
2005-01-01
The question addressed here is how to integrate computational tools, namely interactive general-purpose platforms, in the teaching of process units. Mathematica has been selected as a complementary tool to teach distillation processes, with the main objective of leading students to achieve a better understanding of the physical phenomena involved…
A Mathematica program for the calculation of five-body Moshinsky brackets
NASA Astrophysics Data System (ADS)
Xiao, Shuyuan; Mu, Xueli; Liu, Tingting; Chen, Hong
2016-06-01
Five-body Moshinsky brackets that relate harmonic oscillator wavefunctions in two different sets of Jacobi coordinates make it straightforward to calculate some matrix elements in the variational calculations of five-body systems. The analytical expression of these transformation coefficients and the computer code written in the Mathematica language are presented here for accurate calculations.
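The transformed objects are standard harmonic oscillator wavefunctions; as a one-dimensional consistency check of conventions (units with hbar = m = omega = 1), orthonormality follows from built-ins alone:

    psi[n_, x_] := HermiteH[n, x] Exp[-x^2/2]/Sqrt[2^n n! Sqrt[Pi]];
    (* the overlap matrix <m|n> should be the identity *)
    Table[Integrate[psi[m, x] psi[n, x], {x, -Infinity, Infinity}], {m, 0, 2}, {n, 0, 2}]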
ERIC Educational Resources Information Center
Binous, Housam
2007-01-01
We study four non-Newtonian fluid mechanics problems using Mathematica[R]. Constitutive equations describing the behavior of power-law, Bingham and Carreau models are recalled. The velocity profile is obtained for the horizontal flow of power-law fluids in pipes and annuli. For the vertical laminar film flow of a Bingham fluid we determine the…
ERIC Educational Resources Information Center
Yaacob, Yuzita; Wester, Michael; Steinberg, Stanly
2010-01-01
This paper presents a prototype of a computer learning assistant ILMEV (Interactive Learning-Mathematica Enhanced Vector calculus) package with the purpose of helping students to understand the theory and applications of integration in vector calculus. The main problem for students using Mathematica is to convert a textbook description of a…
FormTracer. A mathematica tracing package using FORM
NASA Astrophysics Data System (ADS)
Cyrol, Anton K.; Mitter, Mario; Strodthoff, Nils
2017-10-01
We present FormTracer, a high-performance, general-purpose, easy-to-use Mathematica tracing package which uses FORM. It supports arbitrary space and spinor dimensions as well as an arbitrary number of simple compact Lie groups. While keeping the usability of the Mathematica interface, it relies on the efficiency of FORM. An additional performance gain is achieved by a decomposition algorithm that avoids redundant traces in the product tensor spaces. FormTracer supports a wide range of syntaxes, which makes it highly flexible. Mathematica notebooks that automatically install the package and guide the user through performing standard traces in space-time, spinor and gauge-group spaces are provided. Program Files doi:http://dx.doi.org/10.17632/7rd29h4p3m.1 Licensing provisions: GPLv3 Programming language: Mathematica and FORM Nature of problem: Efficiently compute traces of large expressions Solution method: The expression to be traced is decomposed into its subspaces by a recursive Mathematica expansion algorithm. The result is subsequently translated to a FORM script that takes the traces. After FORM is executed, the final result is either imported into Mathematica or exported as optimized C/C++/Fortran code. Unusual features: The outstanding features of FormTracer are the simple interface, the capability to efficiently handle an arbitrary number of Lie groups in addition to Dirac and Lorentz tensors, and a customizable input-syntax.
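For orientation, the kind of identity a tracing package evaluates symbolically can be checked by brute force in four dimensions with explicit Dirac matrices assembled from Pauli matrices (Dirac representation, signature (+,-,-,-)):

    id2 = IdentityMatrix[2]; sig = PauliMatrix /@ {1, 2, 3};
    g0 = ArrayFlatten[{{id2, 0}, {0, -id2}}];                (* gamma^0 *)
    g[i_] := ArrayFlatten[{{0, sig[[i]]}, {-sig[[i]], 0}}];  (* gamma^i *)
    {Tr[g0.g0], Tr[g[1].g[1]], Tr[g0.g[1]]}   (* {4, -4, 0} = 4 eta^{mu nu} *)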
NASA Astrophysics Data System (ADS)
McConnell, Sean; Fritzsche, Stephan; Surzhykov, Andrey
2010-03-01
During recent years, the DIRAC package has proved to be an efficient tool for studying the structural properties and dynamic behavior of hydrogen-like ions. Originally designed as a set of MAPLE procedures, this package provides interactive access to the wave and Green's functions in the non-relativistic and relativistic frameworks and supports analytical evaluation of a large number of radial integrals that are required for the construction of transition amplitudes and interaction cross sections. We provide here a new version of the DIRAC program which is developed within the framework of MATHEMATICA (version 6.0). This new version aims to cater to a wider community of researchers that use the MATHEMATICA platform and to take advantage of the generally faster processing times therein. Moreover, the addition of new procedures, a more convenient and detailed help system, as well as source code revisions to overcome identified shortcomings should ensure expanded use of the new DIRAC program over its predecessor. New version program summary Program title: DIRAC Catalogue identifier: ADUQ_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADUQ_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 45 073 No. of bytes in distributed program, including test data, etc.: 285 828 Distribution format: tar.gz Programming language: Mathematica 6.0 or higher Computer: All computers with a license for the computer algebra package Mathematica (version 6.0 or higher) Operating system: Mathematica is O/S independent Classification: 2.1 Catalogue identifier of previous version: ADUQ_v1_0 Journal reference of previous version: Comput. Phys. Comm. 165 (2005) 139 Does the new version supersede the previous version?: Yes Nature of problem: Since the early days of quantum mechanics, the "hydrogen atom" has served as one of the key models for studying the structure and dynamics of various quantum systems. Its analytic solutions are frequently used in case studies in atomic and molecular physics, quantum optics, plasma physics, or even in the field of quantum information and computation. Fast and reliable access to functions and properties of the hydrogenic systems is frequently required, in both the non-relativistic and relativistic frameworks. Despite all the knowledge about one-electron ions, providing such access is not a simple task, owing to the rather complicated mathematical structure of the Schrödinger and especially Dirac equations. Moreover, for analyzing experimental results as well as for performing advanced theoretical studies one often needs (apart from the detailed information on atomic wave- and Green's functions) to be able to calculate a number of integrals involving these functions. Although for many types of transition operators these integrals can be evaluated analytically in terms of special mathematical functions, such an evaluation is usually rather involved and prone to mistakes. Solution method: A set of Mathematica procedures is developed which provides both the non-relativistic and relativistic solutions of the "hydrogen atom model". It facilitates, moreover, the symbolic evaluation of integrals involved in the calculations of cross sections and transition amplitudes.
These procedures are based on a large number of relations among special mathematical functions, information about their integral representations, recurrence formulae and series expansions. Based on this knowledge, the DIRAC tools provide a fast and reliable algebraic (and if necessary, numeric) manipulation of functions and properties of one-electron systems, thus helping to obtain further insight into the behavior of quantum physical systems. Reasons for new version: The original version of the DIRAC program was developed as a toolbox of Maple procedures and was submitted to the CPC library in 2004 (cf. Ref. [1]). Since then DIRAC has found its niche in advanced theoretical studies carried out in the realm of heavy-ion physics. With the help of this program, detailed analyses have been performed, in particular, for the various excitation and ionization processes occurring in relativistic ion-atom collisions [2], the polarization of the characteristic X-ray radiation following radiative electron capture [3], the correlation properties of the two-photon emission from few-electron heavy ions [4], the spin entanglement phenomena in atomic photoionization [5] and even for exploring the vibrational excitations of heavy nuclei [6]. Although these studies have conclusively proven the potential of the program, they have also illuminated routes for its further enhancement. Apart from certain source code revisions, demand has grown for a new version of DIRAC compatible with the Mathematica platform. The version presented here includes a wider-ranging and more user-friendly interactive help system, a number of new procedures and reprogramming for greater computational efficiency. Summary of revisions: The most important new capabilities of the DIRAC program since the previous version are: The utilization of the Mathematica (version 6.0) platform. The addition of a number of new procedures. Since the complete list of the new (and updated) procedures can be found in the interactive help library of the program, we mention here only the most important ones: DiracGlobal[] - Displays a list of the current global settings which specify the framework, nuclear charge and the units which are to be used by the DIRAC program. DiracRadialOrbitalMomentum[] - Returns a non-relativistic radial orbital in momentum space for both the bound and free electron states. DiracSlaterRadial[] - Evaluates the radial Slater integral with both the non-relativistic and relativistic wavefunctions. In the previous version of the program this procedure was restricted to the non-relativistic framework only. DiracGreensIntegralRadial[] - Evaluates the two-dimensional radial integrals with the wave- and Green's functions in both the non-relativistic and relativistic frameworks. DiracAngularMatrixElement[] - Calculates the angular matrix elements for various irreducible tensor operators. The elimination of some redundant procedures. In particular, the previous version supported evaluation of the spherical Bessel functions, Wigner 3j symbols, Clebsch-Gordan coefficients and spherical harmonics functions. These tools are now superseded by built-in procedures of Mathematica. The development of a full-featured interactive help system which follows the style of the Mathematica Help Pages. Extensive revision of the source code in order to correct a number of bugs and inconsistencies that have been identified during use of the previous version of DIRAC.
The DIRAC package is distributed as a compressed tar file from which the DIRAC root directory can be (re-)generated. The root directory contains the source code and help libraries, a "Readme" file, Dirac_Installation_Instructions, as well as the notebook DemonstrationNotebook.nb that includes a number of test cases to illustrate the use of the program. These test cases, which concern the theoretical analysis of wavefunctions and the fine-structure of hydrogen-like ions, have already been discussed in detail in Ref. [1] and are provided here in order to underline the continuity between the previous (Maple) and new (Mathematica) versions of the DIRAC program. Unusual features: Even though all basic features of the previous Maple version have been retained in as close to the original form as possible, some small syntax changes became necessary in the new version of DIRAC in order to follow Mathematica standards. First of all, these changes concern naming conventions for DIRAC's procedures. As was discussed in Ref. [1], previously rather long names were employed in which each word was separated by an underscore. For example, when running the Maple version of the program one had to call the procedure Dirac_Slater_radial() in order to evaluate the Slater integral. Such a naming convention, however, cannot be used in the Mathematica framework, where the underscore character is reserved to represent Blank, a built-in symbol. In the new version of DIRAC we therefore follow the Mathematica convention of delimiting each word in a procedure's name by capitalization. Evaluation of the Slater integral can now be accomplished simply by entering DiracSlaterRadial[]. Besides procedure names, a new convention is introduced to represent fundamental physical constants. In this version of DIRAC the group of (preset) global variables has changed to resemble their conventional symbols, specifically α, a, e, m, c and ℏ, being the fine structure constant, Bohr radius, electron charge, electron mass, speed of light and the reduced Planck constant, respectively. If the numerical evaluator N is wrapped around any of these constants, their numerical values are returned. Running time: Although the program replies promptly upon most requests, the running time also depends on the particular task. For example, computation of (radial) matrix elements involving components of relativistic wavefunctions might require a few seconds of runtime. A number of test calculations performed regarding this and other tasks clearly indicate that the new version of DIRAC requires up to 90% less evaluation time compared to its predecessor. References: A. Surzhykov, P. Koval, S. Fritzsche, Comput. Phys. Comm. 165 (2005) 139. H. Ogawa, et al., Phys. Rev. A 75 (2007) 1. A.V. Maiorova, et al., J. Phys. B: At. Mol. Opt. Phys. 42 (2009) 125003. L. Borowska, A. Surzhykov, Th. Stöhlker, S. Fritzsche, Phys. Rev. A 74 (2006) 062516. T. Radtke, S. Fritzsche, A. Surzhykov, Phys. Rev. A 74 (2006) 032709. A. Pálffy, Z. Harman, A. Surzhykov, U.D. Jentschura, Phys. Rev. A 75 (2007) 012712.
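As a point of reference for the kind of objects DIRAC manipulates, the non-relativistic hydrogenic radial orbital (atomic units, Z = 1) can be written down and checked with built-ins alone:

    R[n_, l_, r_] := Sqrt[(2/n)^3 (n - l - 1)!/(2 n (n + l)!)] *
        Exp[-r/n] (2 r/n)^l LaguerreL[n - l - 1, 2 l + 1, 2 r/n];
    Integrate[R[2, 1, r]^2 r^2, {r, 0, Infinity}]            (* 1: normalized *)
    Integrate[R[1, 0, r] R[2, 0, r] r^2, {r, 0, Infinity}]   (* 0: orthogonal *)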
The two-electron atomic systems. S-states
NASA Astrophysics Data System (ADS)
Liverts, Evgeny Z.; Barnea, Nir
2010-01-01
A simple Mathematica program for computing the S-state energies and wave functions of two-electron (helium-like) atoms (ions) is presented. The well-known method of projecting the Schrödinger equation onto a finite subspace of basis functions was applied. The basis functions are composed of exponentials combined with integer powers of the simplest perimetric coordinates. No special subroutines were used, only built-in objects supported by Mathematica. The accuracy of results and the computation time depend on the basis size. Precise energy values with 7-8 significant figures, along with the corresponding wave functions, can be computed on a single processor within a few minutes. The resultant wave functions have a simple analytical form consisting of elementary functions, which enables one to calculate the expectation values of arbitrary physical operators without any difficulties. Program summary Program title: TwoElAtom-S Catalogue identifier: AEFK_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFK_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 10 185 No. of bytes in distributed program, including test data, etc.: 495 164 Distribution format: tar.gz Programming language: Mathematica 6.0; 7.0 Computer: Any PC Operating system: Any which supports Mathematica; tested under Microsoft Windows XP and Linux SUSE 11.0 RAM: ⩾10 bytes Classification: 2.1, 2.2, 2.7, 2.9 Nature of problem: The Schrödinger equation for atoms (ions) with more than one electron has not been solved analytically. Approximate methods must be applied in order to obtain the wave functions or other physical attributes from quantum mechanical calculations. Solution method: The S-wave function is expanded into a triple basis set in three perimetric coordinates. Projecting the two-electron Schrödinger equation (for atoms/ions) onto a subspace of the basis functions yields a set of homogeneous linear equations F·C=0 for the coefficients C of the above expansion. The roots of the equation det(F)=0 yield the bound energies. Restrictions: First, too large an expansion length (basis size) leads to excessive computation time while giving no perceptible improvement in accuracy. Second, the order of the polynomial Ω (input parameter) in the wave function expansion enables one to calculate the excited nS-states up to n=Ω+1 inclusive. Additional comments: The CPC Program Library includes "A program to calculate the eigenfunctions of the random phase approximation for two electron systems" (AAJD). It should be emphasized that this Fortran code realizes a very rough approximation describing only the averaged electron density of the two-electron systems. It does not characterize the properties of the individual electrons and has a number of input parameters including the Roothaan orbitals. Running time: ~10 minutes (depends on basis size and computer speed)
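The projection scheme is generic; a minimal one-electron analogue (hydrogen s-states in a small exponential basis with hypothetical exponents) reproduces the same F·C=0 / generalized-eigenvalue structure:

    a = {1/2, 1, 2};   (* hypothetical basis exponents *)
    phi[i_, r_] := Exp[-a[[i]] r];
    S = Table[Integrate[phi[i, r] phi[j, r] r^2, {r, 0, Infinity}], {i, 3}, {j, 3}];
    T = Table[Integrate[phi[i, r] (-(1/2)/r^2) D[r^2 D[phi[j, r], r], r] r^2,
        {r, 0, Infinity}], {i, 3}, {j, 3}];
    V = Table[Integrate[-phi[i, r] phi[j, r] r, {r, 0, Infinity}], {i, 3}, {j, 3}];
    Min[Eigenvalues[{N[T + V], N[S]}]]   (* -0.5 hartree: exact, since Exp[-r] is in the basis *)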
Improved Load Alleviation Capability for the KC-135
1997-09-01
software, such as Matlab, Mathematica, Simulink, and Robotica Front End for Mathematica, available in the simulation laboratory... Overview: This thesis report is... outlined in Spong's text in order to utilize the Robotica system development software, which automates the process of calculating the kinematic and... kinematic and dynamic equations can be accomplished using a computer tool called Robotica Front End (RFE) [15], developed by Dr. Spong.
Raja, Muhammad Asif Zahoor; Khan, Junaid Ali; Ahmad, Siraj-ul-Islam; Qureshi, Ijaz Mansoor
2012-01-01
A methodology for the solution of the Painlevé equation-I is presented, using a computational intelligence technique based on neural networks and particle swarm optimization hybridized with an active set algorithm. The mathematical model of the equation is developed with the help of linear combinations of feed-forward artificial neural networks that define the unsupervised error of the model. This error is minimized subject to the availability of appropriate weights of the networks. The learning of the weights is carried out using the particle swarm optimization algorithm as a viable global search method, hybridized with the active set algorithm for rapid local convergence. The accuracy, convergence rate, and computational complexity of the scheme are analyzed based on a large number of independent runs and their comprehensive statistical analysis. Comparative studies of the results obtained are made with MATHEMATICA solutions, as well as with the variational iteration and homotopy perturbation methods. PMID:22919371
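The MATHEMATICA reference solutions mentioned can be sketched with NDSolve (the initial conditions below are hypothetical, and the movable poles of the Painlevé transcendents limit the integration interval):

    (* Painleve I: y''(x) = 6 y(x)^2 + x *)
    sol = NDSolve[{y''[x] == 6 y[x]^2 + x, y[0] == 0, y'[0] == 0},
        y, {x, -2, 1/2}];
    Plot[Evaluate[y[x] /. sol], {x, -2, 1/2}]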
NASA Astrophysics Data System (ADS)
Miszczak, Jarosław Adam
2013-01-01
The presented package for the Mathematica computing system allows the harnessing of quantum random number generators (QRNG) for investigating the statistical properties of quantum states. The described package implements a number of functions for generating random states. The new version of the package adds the ability to use the on-line quantum random number generator service and implements new functions for retrieving lists of random numbers. Thanks to the introduced improvements, the new version provides faster access to high-quality sources of random numbers and can be used in simulations requiring large amounts of random data. New version program summary Program title: TRQS Catalogue identifier: AEKA_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEKA_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 18 134 No. of bytes in distributed program, including test data, etc.: 252 049 Distribution format: tar.gz Programming language: Mathematica, C. Computer: Any supporting Mathematica in version 7 or higher. Operating system: Any platform supporting Mathematica; tested with GNU/Linux (32 and 64 bit). RAM: Case-dependent Supplementary material: Fig. 1 mentioned below can be downloaded. Classification: 4.15. External routines: Quantis software library (http://www.idquantique.com/support/quantis-trng.html) Catalogue identifier of previous version: AEKA_v1_0 Journal reference of previous version: Comput. Phys. Comm. 183(2012)118 Does the new version supersede the previous version?: Yes Nature of problem: Generation of random density matrices and utilization of high-quality random numbers for the purpose of computer simulation. Solution method: Use of a physical quantum random number generator and an on-line service providing access to a source of true random numbers generated by a quantum random number generator. Reasons for new version: Added support for the high-speed on-line quantum random number generator and improved methods for retrieving lists of random numbers. Summary of revisions: The presented version provides two significant improvements. The first one is the ability to use the on-line Quantum Random Number Generation service developed by PicoQuant GmbH and the Nano-Optics groups at the Department of Physics of Humboldt University. The on-line service supported in version 2.0 of the TRQS package provides faster access to true randomness sources constructed using the laws of quantum physics. The service is freely available at https://qrng.physik.hu-berlin.de/. The use of this service allows using the presented package without the need for a physical quantum random number generator. The second improvement introduced in this version is the ability to retrieve arrays of random data directly from the used source. This increases the speed of the random number generation, especially in the case of an on-line service, where it reduces the time necessary to establish the connection. Thanks to the speed improvement of the presented version, the package can now be used in simulations requiring larger amounts of random data. Moreover, the functions for generating random numbers provided by the current version of the package more closely follow the pattern of functions for generating pseudo-random numbers provided in Mathematica.
Additional comments: Speed comparison: The implementation of the support for the QRNG on-line service provides a noticeable improvement in the speed of random number generation. For samples of real numbers of size 10^1, 10^2, …, 10^7, the times required to generate these samples using the Quantis USB device and the QRNG service are compared in Fig. 1. The presented results show that the use of the on-line service provides faster access to random numbers. One should note, however, that the speed gain can increase or decrease depending on the connection speed between the computer and the server providing random numbers. Running time: Depends on the used source of randomness and the amount of random data used in the experiment. References: [1] M. Wahl, M. Leifgen, M. Berlin, T. Röhlicke, H.-J. Rahn, O. Benson, An ultrafast quantum random number generator with provably bounded output bias based on photon arrival time measurements, Applied Physics Letters, Vol. 98, 171105 (2011). http://dx.doi.org/10.1063/1.3578456.
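Independently of the randomness source, the structure of the generation step can be sketched with pseudo-random draws (which TRQS replaces by quantum randomness); the following samples a density matrix from the Hilbert-Schmidt ensemble via a complex Ginibre matrix:

    n = 4;
    g = RandomVariate[NormalDistribution[], {n, n}] +
        I RandomVariate[NormalDistribution[], {n, n}];   (* Ginibre matrix *)
    rho = g.ConjugateTranspose[g];
    rho = rho/Tr[rho];                                   (* unit trace *)
    {Chop[Tr[rho]], PositiveSemidefiniteMatrixQ[rho]}    (* {1, True} *)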
Spin wave Feynman diagram vertex computation package
NASA Astrophysics Data System (ADS)
Price, Alexander; Javernick, Philip; Datta, Trinanjan
Spin wave theory is a well-established theoretical technique that can correctly predict the physical behavior of ordered magnetic states. However, computing the effects of an interacting spin wave theory incorporating magnons involves a laborious by-hand derivation of Feynman diagram vertices. The process is tedious and time consuming. Hence, to improve productivity and have another means to check the analytical calculations, we have devised a Feynman Diagram Vertex Computation package. In this talk, we will describe our research group's effort to implement a Mathematica-based symbolic Feynman diagram vertex computation package that computes spin wave vertices. Utilizing the non-commutative algebra package NCAlgebra as an add-on to Mathematica, symbolic expressions for the Feynman diagram vertices of a Heisenberg quantum antiferromagnet are obtained. Our existing code reproduces the well-known expressions of a nearest neighbor square lattice Heisenberg model. We also discuss the case of a triangular lattice Heisenberg model where non-collinear terms contribute to the vertex interactions.
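The bookkeeping such a package automates is, at its core, normal ordering of boson (magnon) operators; a toy version with a bare-bones noncommutative product nc (illustrative only, not the NCAlgebra syntax):

    ClearAll[nc];
    nc[] = 1;                                             (* empty product *)
    nc[x___, a, ad, z___] := nc[x, ad, a, z] + nc[x, z];  (* commutator [a, ad] = 1 *)
    nc[a, a, ad]       (* nc[ad, a, a] + 2 nc[a] : a a ad = ad a a + 2 a *)
    nc[a, ad, a, ad]   (* nc[ad, ad, a, a] + 3 nc[ad, a] + 1 *)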
NASA Astrophysics Data System (ADS)
Azadegan, B.
2013-03-01
The presented Mathematica code is an efficient tool for the simulation of planar channeling radiation spectra of relativistic electrons channeled along major crystallographic planes of a diamond-structure single crystal. The program is based on the quantum theory of channeling radiation which has been successfully applied to study planar channeling at electron energies between 10 and 100 MeV. Continuum potentials for different planes of diamond, silicon and germanium single crystals are calculated using the Doyle-Turner approximation to the atomic scattering factor and taking thermal vibrations of the crystal atoms into account. Numerical methods are applied to solve the one-dimensional Schrödinger equation. The code is designed to calculate the electron wave functions, transverse electron states in the planar continuum potential, transition energies, line widths of channeling radiation and depth dependencies of the population of quantum states. Finally the spectral distribution of spontaneously emitted channeling radiation is obtained. The simulation of radiation spectra considerably facilitates the interpretation of experimental data. Catalogue identifier: AEOH_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOH_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 446 No. of bytes in distributed program, including test data, etc.: 209805 Distribution format: tar.gz Programming language: Mathematica. Computer: Platforms on which Mathematica is available. Operating system: Operating systems on which Mathematica is available. RAM: 1 MB Classification: 7.10. Nature of problem: Planar channeling radiation is emitted by relativistic charged particles while traversing a single crystal in a direction parallel to a crystallographic plane. Channeling is modeled as the motion of charged particles in a continuous planar potential which is formed by the spatially and thermally averaged action of the individual electrostatic potentials of the crystal atoms of the corresponding plane. Classically, the motion of channeled particles through the crystal resembles transverse oscillations, which are the source of radiation emission. For electrons of energy less than 100 MeV, as considered here, planar channeling has to be treated quantum mechanically by a one-dimensional Schrödinger equation for the transverse motion. Hence, this motion of the channeled electrons is restricted to a number of discrete (bound) channeling states in the planar continuum potential, and the emission of channeling radiation is caused by spontaneous electron transitions between these eigenstates. Due to relativistic and Doppler effects, the energy of the emitted photons directed into a narrow forward cone is typically shifted up by about three to five orders of magnitude. Consequently, the observed energy spectrum of channeling radiation is characterized by a number of radiation lines in the energy domain of hard X-rays. Channeling radiation may, therefore, be applied as an intense, tunable, quasi-monochromatic X-ray source. Solution method: The problem consists of finding the electron wave function for the planar continuum potential. Both the wave functions and the corresponding energies of channeling states solve the Schrödinger equation of transverse electron motion.
In the framework of the so-called many-beam formalism, solving the Schrödinger equation reduces to an eigenvector-eigenvalue problem for a Hermitian matrix. For this, the program employs the mathematical tools provided by the commercial computation software Mathematica. The electric field of the atomic planes in the crystal forces dipole oscillations of the channeled charged particles. In the quantum mechanical approach, the dipole approximation is also valid for spontaneous transitions between bound states. The transition strength for a given pair of states depends on the magnitude of the corresponding dipole matrix element. The photon energy correlates with the particle energy, and the spectral width of the radiation lines is a function of the lifetimes of the channeling states. Running time: The program has been tested on a PC AMD Athlon X2 245 processor 2.9 GHz with 2 GB RAM. Depending on electron energy and crystal thickness, the running time of the program amounts to 5-10 min.
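A stripped-down, dimensionless version of the many-beam step (a cosine potential with a hypothetical depth, instead of the Doyle-Turner planar potential) exposes the Hermitian-matrix structure:

    (* plane-wave basis for -(1/2) u'' + V0 Cos[G x] u = E u *)
    nmax = 10; G = 2 Pi; V0 = -5;
    H = Table[KroneckerDelta[n, m] (n G)^2/2 +
        (V0/2) KroneckerDelta[Abs[n - m], 1], {n, -nmax, nmax}, {m, -nmax, nmax}];
    Sort[Eigenvalues[N[H]]][[1 ;; 4]]   (* lowest transverse levels *)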
Development of CCSDS DCT to Support Spacecraft Dynamic Events
NASA Technical Reports Server (NTRS)
Sidhwa, Anahita F
2011-01-01
This report discusses the development of the Consultative Committee for Space Data Systems (CCSDS) Design Control Table (DCT) to support spacecraft dynamic events. The CCSDS DCT is a versatile link calculation tool for analyzing different kinds of radio frequency links. It started out as an Excel-based program and is now evolving into a Mathematica-based link analysis tool. The Mathematica platform offers a rich set of advanced analysis capabilities and can be easily extended to a web-based architecture. Last year, the CCSDS DCTs for the uplink, downlink, two-way, and ranging models were developed, as well as the corresponding input and output interfaces. Another significant accomplishment is the integration of the NAIF SPICE library into the Mathematica computation platform.
Insertion device calculations with Mathematica
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carr, R.; Lidia, S.
1995-02-01
The design of accelerator insertion devices such as wigglers and undulators has usually been aided by numerical modeling on digital computers, using code in high level languages like Fortran. In the present era, there are higher level programming environments like IDL®, MatLab®, and Mathematica® in which these calculations may be performed by writing much less code, and in which standard mathematical techniques are very easily used. The authors present a suite of standard insertion device modeling routines in Mathematica to illustrate the new techniques. These routines include a simple way to generate magnetic fields using blocks of CSEM materials, trajectory solutions from the Lorentz force equations for given magnetic fields, Bessel function calculations of radiation for wigglers and undulators, and general radiation calculations for undulators.
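The trajectory step, for instance, is a few lines with NDSolve; a minimal nonrelativistic, dimensionless model (hypothetical field amplitude) for motion in a sinusoidal vertical field B = {0, B0 Cos[k z], 0}:

    B0 = 1/10; k = 2 Pi; q = -1; m = 1;
    eqs = {m x''[t] == -q z'[t] B0 Cos[k z[t]],    (* F = q v x B *)
           m z''[t] ==  q x'[t] B0 Cos[k z[t]]};
    sol = First[NDSolve[Join[eqs, {x[0] == 0, x'[0] == 0, z[0] == 0, z'[0] == 1}],
        {x, z}, {t, 0, 10}]];
    ParametricPlot[Evaluate[{z[t], x[t]} /. sol], {t, 0, 10}]   (* wiggling orbit *)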
Bringing education to your virtual doorstep
NASA Astrophysics Data System (ADS)
Kaurov, Vitaliy
2013-03-01
We currently witness significant migration of academic resources towards online CMS, social networking, and high-end computerized education. This happens for traditional academic programs as well as for outreach initiatives. The talk will go over a set of innovative integrated technologies, many of which are free. These were developed by Wolfram Research in order to facilitate and enhance the learning process in mathematical and physical sciences. Topics include: cloud computing with Mathematica Online; natural language programming; interactive educational resources and web publishing at the Wolfram Demonstrations Project; the computational knowledge engine Wolfram Alpha; Computable Document Format (CDF) and self-publishing with interactive e-books; course assistant apps for mobile platforms. We will also discuss outreach programs where such technologies are extensively used, such as the Wolfram Science Summer School and the Mathematica Summer Camp.
Symbolic Computational Approach to the Marangoni Convection Problem With Soret Diffusion
NASA Technical Reports Server (NTRS)
Skarda, J. Raymond
1998-01-01
A recently reported solution for stationary stability of a thermosolutal system with Soret diffusion is re-derived and examined using a symbolic computational package. Symbolic computational languages are well suited for such an analysis and facilitate a pragmatic approach that is adaptable to similar problems. Linearization of the equations, normal mode analysis, and extraction of the final solution are performed in a Mathematica notebook format. An exact solution is obtained for stationary stability in the limit of zero gravity. A closed-form expression is also obtained for the location of asymptotes in the relevant parameter space, (Sm_c, Ma_c). The stationary stability behavior is conveniently examined within the symbolic language environment. An abbreviated version of the Mathematica notebook is given in the Appendix.
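As a template for the normal-mode step (the Marangoni problem couples temperature, solute and flow fields; the bare diffusion equation is used here only to show the pattern):

    (* perturbation ansatz u ~ Exp[I k x + s t]; stability requires Re[s] <= 0 *)
    u[x_, t_] := amp Exp[I k x + s t];
    eq = Simplify[D[u[x, t], t] == D[u[x, t], {x, 2}]];
    Solve[eq, s]   (* {{s -> -k^2}}: every mode decays *)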
VEST: Abstract Vector Calculus Simplification in Mathematica
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Squire, J. Burby and H. Qin
2013-03-12
We present a new package, VEST (Vector Einstein Summation Tools), that performs abstract vector calculus computations in Mathematica. Through the use of index notation, VEST is able to reduce scalar and vector expressions of a very general type using a systematic canonicalization procedure. In addition, utilizing properties of the Levi-Civita symbol, the program can derive types of multi-term vector identities that are not recognized by canonicalization, subsequently applying these to simplify large expressions. In a companion paper [1], we employ VEST in the automation of the calculation of Lagrangians for the single particle guiding center system in plasma physics, a computation which illustrates its ability to handle very large expressions. VEST has been designed to be simple and intuitive to use, both for basic checking of work and more involved computations.
VEST: Abstract vector calculus simplification in Mathematica
NASA Astrophysics Data System (ADS)
Squire, J.; Burby, J.; Qin, H.
2014-01-01
We present a new package, VEST (Vector Einstein Summation Tools), that performs abstract vector calculus computations in Mathematica. Through the use of index notation, VEST is able to reduce three-dimensional scalar and vector expressions of a very general type to a well defined standard form. In addition, utilizing properties of the Levi-Civita symbol, the program can derive types of multi-term vector identities that are not recognized by reduction, subsequently applying these to simplify large expressions. In a companion paper Burby et al. (2013) [12], we employ VEST in the automation of the calculation of high-order Lagrangians for the single particle guiding center system in plasma physics, a computation which illustrates its ability to handle very large expressions. VEST has been designed to be simple and intuitive to use, both for basic checking of work and more involved computations.
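The kind of multi-term identity derived from properties of the Levi-Civita symbol can be verified by explicit contraction with built-in tensors:

    eps = LeviCivitaTensor[3];
    lhs = TensorContract[TensorProduct[eps, eps], {{1, 4}}];   (* Sum_i eps_ijk eps_ilm *)
    d = IdentityMatrix[3];
    rhs = Table[d[[j, l]] d[[k, m]] - d[[j, m]] d[[k, l]], {j, 3}, {k, 3}, {l, 3}, {m, 3}];
    Normal[lhs] == rhs   (* True: eps_ijk eps_ilm = d_jl d_km - d_jm d_kl *)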
EARL: Exoplanet Analytic Reflected Lightcurves package
NASA Astrophysics Data System (ADS)
Haggard, Hal M.; Cowan, Nicolas B.
2018-05-01
EARL (Exoplanet Analytic Reflected Lightcurves) computes the analytic form of a reflected lightcurve, given a spherical harmonic decomposition of the planet albedo map and the viewing and orbital geometries. The EARL Mathematica notebook allows rapid computation of reflected lightcurves, thus making lightcurve numerical experiments accessible.
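The input side, a spherical harmonic decomposition of an albedo map, can be produced with built-ins (the toy map below is hypothetical; EARL then supplies the analytic lightcurve for the given geometry):

    albedo[th_, ph_] := 3/10 + 1/10 Sin[th] Cos[ph];   (* toy albedo map *)
    c[l_, m_] := NIntegrate[albedo[th, ph] Conjugate[SphericalHarmonicY[l, m, th, ph]]
        Sin[th], {th, 0, Pi}, {ph, 0, 2 Pi}];
    {c[0, 0], c[1, 1]}   (* monopole plus one dipole component *)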
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conover, W.J.; Cox, D.D.; Martz, H.F.
1997-12-01
When using parametric empirical Bayes estimation methods for estimating the binomial or Poisson parameter, the validity of the assumed beta or gamma conjugate prior distribution is an important diagnostic consideration. Chi-square goodness-of-fit tests of the beta or gamma prior hypothesis are developed for use when the binomial sample sizes or Poisson exposure times vary. Nine examples illustrate the application of the methods, using real data from such diverse applications as the loss of feedwater flow rates in nuclear power plants, the probability of failure to run on demand and the failure rates of the high pressure coolant injection systems at US commercial boiling water reactors, the probability of failure to run on demand of emergency diesel generators in US commercial nuclear power plants, the rate of failure of aircraft air conditioners, baseball batting averages, the probability of testing positive for toxoplasmosis, and the probability of tumors in rats. The tests are easily applied in practice by means of corresponding Mathematica® computer programs which are provided.
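A much-simplified sketch of the diagnostic idea (a moment-matched beta prior plus a chi-square-type test; unlike the paper's tests, this ignores the binomial sampling noise and the varying sample sizes):

    SeedRandom[1];
    ns = RandomInteger[{50, 500}, 40];                           (* varying sample sizes *)
    ps = RandomVariate[BetaDistribution[2, 8], 40];              (* true prior *)
    xs = MapThread[RandomVariate[BinomialDistribution[#1, #2]] &, {ns, ps}];
    phat = N[xs/ns];
    {mu, v} = {Mean[phat], Variance[phat]};
    {a, b} = {mu, 1 - mu} (mu (1 - mu)/v - 1);                   (* moment estimates *)
    PearsonChiSquareTest[phat, BetaDistribution[a, b]]           (* p-value *)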
PyR@TE. Renormalization group equations for general gauge theories
NASA Astrophysics Data System (ADS)
Lyonnet, F.; Schienbein, I.; Staub, F.; Wingerter, A.
2014-03-01
Although the two-loop renormalization group equations for a general gauge field theory have been known for quite some time, deriving them for specific models has often been difficult in practice. This is mainly due to the fact that, albeit straightforward, the involved calculations are quite long, tedious and prone to error. The present work is an attempt to facilitate the practical use of the renormalization group equations in model building. To that end, we have developed two completely independent sets of programs written in Python and Mathematica, respectively. The Mathematica scripts will be part of an upcoming release of SARAH 4. The present article describes the collection of Python routines that we dubbed PyR@TE which is an acronym for “Python Renormalization group equations At Two-loop for Everyone”. In PyR@TE, once the user specifies the gauge group and the particle content of the model, the routines automatically generate the full two-loop renormalization group equations for all (dimensionless and dimensionful) parameters. The results can optionally be exported to LaTeX and Mathematica, or stored in a Python data structure for further processing by other programs. For ease of use, we have implemented an interactive mode for PyR@TE in form of an IPython Notebook. As a first application, we have generated with PyR@TE the renormalization group equations for several non-supersymmetric extensions of the Standard Model and found some discrepancies with the existing literature. Catalogue identifier: AERV_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERV_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 924959 No. of bytes in distributed program, including test data, etc.: 495197 Distribution format: tar.gz Programming language: Python. Computer: Personal computer. Operating system: Tested on Fedora 15, MacOS 10 and 11, Ubuntu 12. Classification: 11.1. External routines: SymPy, PyYAML, NumPy, IPython, SciPy Nature of problem: Deriving the renormalization group equations for a general quantum field theory. Solution method: Group theory, tensor algebra Running time: Tens of seconds per model (one-loop), tens of minutes (two-loop)
WiLE: A Mathematica package for weak coupling expansion of Wilson loops in ABJ(M) theory
NASA Astrophysics Data System (ADS)
Preti, M.
2018-06-01
We present WiLE, a Mathematica® package designed to perform the weak coupling expansion of any Wilson loop in ABJ(M) theory at arbitrary perturbative order. For a given set of fields on the loop and internal vertices, the package displays all the possible Feynman diagrams and their integral representations. The user can also choose to exclude non-planar diagrams, tadpoles and self-energies. Through the use of interactive input windows, the package should be easily accessible to users with little or no previous experience. The package manual provides some pedagogical examples and the computation of all ladder diagrams at three loops relevant for the cusp anomalous dimension in ABJ(M). The latter application also supports some recent results computed in different contexts.
Student's Lab Assignments in PDE Course with MAPLE.
ERIC Educational Resources Information Center
Ponidi, B. Alhadi
Computer-aided software has been used intensively in many mathematics courses, especially in computational subjects, to solve initial value and boundary value problems in Partial Differential Equations (PDE). Many software packages were used in student lab assignments such as FORTRAN, PASCAL, MATLAB, MATHEMATICA, and MAPLE in order to accelerate…
Discrete Mathematics Course Supported by CAS MATHEMATICA
ERIC Educational Resources Information Center
Ivanov, O. A.; Ivanova, V. V.; Saltan, A. A.
2017-01-01
In this paper, we discuss examples of assignments for a course in discrete mathematics for undergraduate students majoring in business informatics. We consider several problems with computer-based solutions and discuss general strategies for using computers in teaching mathematics and its applications. In order to evaluate the effectiveness of our…
Some Unexpected Results Using Computer Algebra Systems.
ERIC Educational Resources Information Center
Alonso, Felix; Garcia, Alfonsa; Garcia, Francisco; Hoya, Sara; Rodriguez, Gerardo; de la Villa, Agustin
2001-01-01
Shows how teachers can often use unexpected outputs from Computer Algebra Systems (CAS) to reinforce concepts and to show students the importance of thinking about how they use the software and reflecting on their results. Presents different examples where DERIVE, MAPLE, or Mathematica does not work as expected and suggests how to use them as a…
Developing a TI-92 Manual Generator Based on Computer Algebra Systems
ERIC Educational Resources Information Center
Jun, Youngcook
2004-01-01
The electronic medium suitable for mathematics learning and teaching is often designed with a notebook interface provided in a computer algebra system. Such a notebook interface facilitates a workspace for mathematical activities along with an online help system. In this paper, the proposed feature is implemented in Mathematica's notebook…
A Survey of Quantum Programming Languages: History, Methods, and Tools
2008-01-01
and entanglement, to achieve computational solutions to certain problems in less time (fewer computational cycles) than is possible using classical...superposition of quantum bits, entanglement, destructive measurement, and the no-cloning theorem. These differences must be thoroughly understood and even...computers using well-known languages such as C, C++, Java, and rapid prototyping languages such as Maple, Mathematica, and Matlab. A good on-line
ERIC Educational Resources Information Center
Savelsbergh, Elwin R.; Ferguson-Hessler, Monica G. M.; de Jong, Ton
An approach to teaching problem-solving based on using the computer software Mathematica is applied to the study of electrostatics and is compared with the normal approach to the module. Learning outcomes for both approaches were not significantly different. The experimental course successfully addressed a number of misconceptions. Students in the…
ERIC Educational Resources Information Center
Cahalan, Margaret; Goodwin, David
2014-01-01
In January 2009, in the last week of the Bush Administration, the U.S. Department of Education (ED), upon orders from the departing political appointee staff, published the final report in a long running National Evaluation of Upward Bound (UB). The study was conducted by the contractor, Mathematica Policy Research. After more than a year in…
ALC: automated reduction of rule-based models
Koschorreck, Markus; Gilles, Ernst Dieter
2008-01-01
Background: Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction since the association of a few proteins can give rise to an enormous amount of feasible protein complexes. The layer-based approach is an approximative, but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results: ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion: ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705
Parallel Curves: Getting There and Getting Back
ERIC Educational Resources Information Center
Agnew, A. F.; Mathews, J. H.
2006-01-01
This note takes up the issue of parallel curves while illustrating the utility of "Mathematica" in computations. This work complements results presented earlier. The presented treatment, considering the more general case of parametric curves, provides an analysis of the appearance of cusp singularities, and emphasizes the utility of symbolic…
ERIC Educational Resources Information Center
Fuchs, Karl Josef; Simonovits, Reinhard; Thaller, Bernd
2008-01-01
This paper describes a high school project where the mathematics teaching and learning software M@th Desktop (MD) based on the Computer Algebra System Mathematica was used for symbolical and numerical calculations and for visualisation. The mathematics teaching and learning software M@th Desktop 2.0 (MD) contains the modules Basics including tools…
Computerized Proof Techniques for Undergraduates
ERIC Educational Resources Information Center
Smith, Christopher J.; Tefera, Akalu; Zeleke, Aklilu
2012-01-01
The use of computer algebra systems such as Maple and Mathematica is becoming increasingly important and widespread in mathematics learning, teaching and research. In this article, we present computerized proof techniques of Gosper, Wilf-Zeilberger and Zeilberger that can be used for enhancing the teaching and learning of topics in discrete…
Titration Calculations with Computer Algebra Software
ERIC Educational Resources Information Center
Lachance, Russ; Biaglow, Andrew
2012-01-01
This article examines the symbolic algebraic solution of the titration equations for a diprotic acid, as obtained using "Mathematica," "Maple," and "Mathcad." The equilibrium and conservation equations are solved symbolically by the programs to eliminate the approximations that normally would be performed by the student. Of the three programs,…
Methods in Symbolic Computation and p-Adic Valuations of Polynomials
NASA Astrophysics Data System (ADS)
Guan, Xiao
Symbolic computation appears widely in many mathematical fields such as combinatorics, number theory and stochastic processes. The techniques created in the area of experimental mathematics provide us with efficient ways of computing symbolically and of verifying complicated relations. Part I consists of three problems. The first one focuses on a unimodal sequence derived from a quartic integral. Many of its properties are explored with the help of hypergeometric representations and automatic proofs. The second problem tackles the generating function of the reciprocals of the Catalan numbers. It springs from the closed form given by Mathematica. Furthermore, three methods in special functions are used to justify this result. The third issue addresses closed form solutions for the moments of products of generalized elliptic integrals, which combines experimental mathematics and classical analysis. Part II concentrates on the p-adic valuations of polynomials from the perspective of trees. For a given polynomial f(n) indexed in the positive integers, the package developed in Mathematica will create a certain tree structure following a couple of rules. The evolution of such trees is studied both rigorously and experimentally from the viewpoint of field extensions, nonparametric statistics and random matrix theory.
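The closed form mentioned above comes straight out of Mathematica's built-in CatalanNumber; the following minimal sketch (ours, not the thesis package) reproduces that starting point and checks it numerically:

  (* generating function of the reciprocal Catalan numbers; Mathematica returns a closed form *)
  g[x_] = Sum[x^n/CatalanNumber[n], {n, 0, Infinity}];
  (* sanity check: closed form vs. truncated series at x = 1/2 (series converges for |x| < 4) *)
  N[g[1/2] - Sum[(1/2)^n/CatalanNumber[n], {n, 0, 40}]]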
Interactive Mathematica Simulations in Chemical Engineering Courses
ERIC Educational Resources Information Center
Falconer, John L.; Nicodemus, Garret D.
2014-01-01
Interactive Mathematica simulations with graphical displays of system behavior are an excellent addition to chemical engineering courses. The Manipulate command in Mathematica creates on-screen controls that allow users to change system variables and see the graphical output almost instantaneously. They can be used both in and outside class. More…
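For readers unfamiliar with the mechanism described, a minimal Manipulate sketch (a generic first-order conversion curve, our example rather than the authors' coursework) shows how the on-screen control is wired to the plot:

  (* slider for the rate constant k; the plot redraws almost instantaneously as k changes *)
  Manipulate[
    Plot[1 - Exp[-k t], {t, 0, 10}, PlotRange -> {0, 1},
      AxesLabel -> {"t", "conversion"}],
    {{k, 0.5, "rate constant k"}, 0.1, 2}]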
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yorkey, T.J.
This note describes how to get PostScript files into TeX documents on a Sun computer using psfig. Several applications generating PostScript files are used as examples. These applications are: Adobe Illustrator, Mathematica, View, Cricket Graph and MacDraw, and a scanned image. I assume the reader knows nothing about PostScript, and does not want to learn anything about it.
The Mathlet Toolkit: Creating Dynamic Applets for Differential Equations and Dynamical Systems
ERIC Educational Resources Information Center
Decker, Robert
2011-01-01
Dynamic/interactive graphing applets can be used to supplement standard computer algebra systems such as Maple, Mathematica, Derive, or TI calculators, in courses such as Calculus, Differential Equations, and Dynamical Systems. The addition of this type of software can lead to discovery learning, with students developing their own conjectures, and…
Introduction to Mathematica® for Physicists
NASA Astrophysics Data System (ADS)
Grozin, Andrey
We were taught in calculus classes that integration is an art, not a science (in contrast to differentiation: even a monkey can be trained to take derivatives). And we were taught wrong. The Risch algorithm (which has been known for decades) allows one to find, in a finite number of steps, whether a given indefinite integral can be taken in elementary functions, and if so, to calculate it. This algorithm was constructed in works by the American mathematician Risch around 1970; many cases were not analyzed completely in these works and were later considered by other mathematicians. The algorithm is very complicated, and no computer algebra system implements it fully. Its implementation in Mathematica is rather complete, even with extensions to some classes of special functions, but the details are not publicly known. Strictly speaking, it is not quite an algorithm, because it contains algorithmically unsolvable subproblems, such as finding out whether a given combination of elementary functions vanishes. But in practice computer algebra systems are quite good at solving such problems. Here we shall consider, at a very elementary level, the main ideas of the Risch algorithm; see [16] for more details.
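The dividing line the author describes is easy to probe from a notebook; a minimal sketch using only the built-in Integrate:

  Integrate[x Exp[-x^2], x]   (* elementary: -Exp[-x^2]/2 *)
  Integrate[Exp[-x^2], x]     (* not elementary: (Sqrt[Pi]/2) Erf[x] *)
  Integrate[Exp[x]/x, x]      (* not elementary: ExpIntegralEi[x] *)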
Using Combinatorica/Mathematica for Student Projects in Random Graph Theory
ERIC Educational Resources Information Center
Pfaff, Thomas J.; Zaret, Michele
2006-01-01
We give an example of a student project that experimentally explores a topic in random graph theory. We use the "Combinatorica" package in "Mathematica" to estimate the minimum number of edges needed in a random graph to have a 50 percent chance that the graph is connected. We provide the "Mathematica" code and compare it to the known theoretical…
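A sketch of the experiment (ours; it assumes the legacy Combinatorica package, whose RandomGraph[n, p] and ConnectedQ are documented functions) might read:

  Needs["Combinatorica`"]
  (* fraction of 500 random graphs on 20 vertices with edge probability p that are connected *)
  connectedFraction[p_] :=
    Count[Table[ConnectedQ[RandomGraph[20, p]], {500}], True]/500.
  connectedFraction /@ {0.10, 0.15, 0.20, 0.25}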
Discrete mathematics course supported by CAS MATHEMATICA
NASA Astrophysics Data System (ADS)
Ivanov, O. A.; Ivanova, V. V.; Saltan, A. A.
2017-08-01
In this paper, we discuss examples of assignments for a course in discrete mathematics for undergraduate students majoring in business informatics. We consider several problems with computer-based solutions and discuss general strategies for using computers in teaching mathematics and its applications. In order to evaluate the effectiveness of our approach, we conducted an anonymous survey. The results of the survey provide evidence that our approach contributes to high outcomes and aligns with the course aims and objectives.
Multi-loop Integrand Reduction with Computational Algebraic Geometry
NASA Astrophysics Data System (ADS)
Badger, Simon; Frellesvig, Hjalte; Zhang, Yang
2014-06-01
We discuss recent progress in multi-loop integrand reduction methods. Motivated by the possibility of an automated construction of multi-loop amplitudes via generalized unitarity cuts we describe a procedure to obtain a general parameterisation of any multi-loop integrand in a renormalizable gauge theory. The method relies on computational algebraic geometry techniques such as Gröbner bases and primary decomposition of ideals. We present some results for two and three loop amplitudes obtained with the help of the MACAULAY2 computer algebra system and the Mathematica package BASISDET.
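BASISDET itself is specialized, but the algebraic step it rests on can be reproduced with the built-in GroebnerBasis; the toy ideal below is purely illustrative and has nothing to do with a physical integrand:

  (* triangularize a toy polynomial system, the basic operation behind integrand bases *)
  GroebnerBasis[{x^2 + y^2 - 1, x - y^2}, {x, y}]
  (* the inequivalent solutions, found here by direct solving rather than primary decomposition *)
  Solve[{x^2 + y^2 - 1 == 0, x - y^2 == 0}, {x, y}]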
ERIC Educational Resources Information Center
Asensio, Daniela A.; Barassi, Francisca J.; Zambon, Mariana T.; Mazza, Germán D.
2010-01-01
This paper describes the results of a pedagogical experience carried out at the University of Comahue, Argentina, with an interactive text (IT) concerning Homogeneous Chemical Reactors Analysis. The IT was built on the frame of the "Mathematica" software with the aim of providing students with a robust computational tool. Students'…
A Comparison Study between a Traditional and Experimental Program.
ERIC Educational Resources Information Center
Dogan, Hamide
This paper is part of a dissertation defended in January 2001 as part of the author's Ph.D. requirement. The study investigated the effects of using Mathematica, a computer algebra system, in learning basic linear algebra concepts. This was done by comparing two first-year linear algebra classes, one traditional and one Mathematica…
RSA cryptography and multi prime RSA cryptography
NASA Astrophysics Data System (ADS)
Sani, Nur Atiqah Abdul; Kamarulhaili, Hailiza
2017-08-01
RSA cryptography is one of the most powerful and popular cryptosystems and is still being applied today. One variant of RSA cryptography is Multi Prime RSA (MPRSA) cryptography, an improved version of RSA cryptography. We only need to modify a few steps in the key generation part and apply the Chinese Remainder Theorem (CRT) in the decryption part to get the MPRSA algorithm. The focus of this research is to compare the standard RSA cryptography and MPRSA cryptography in a few aspects. The research shows that MPRSA cryptography is more efficient than RSA cryptography. A timing comparison using Mathematica software was also conducted, and it confirms that MPRSA cryptography takes less computational time than RSA cryptography. Mathematica software version 9.0 and an HP ProBook 4331s laptop were used to check the timing and to implement both algorithms.
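The CRT speed-up being measured can be sketched in a few lines of textbook RSA (our sketch, not the authors' code; PowerMod, ChineseRemainder and NextPrime are Mathematica built-ins):

  p = NextPrime[2^512]; q = NextPrime[p]; r = NextPrime[q];
  n = p q r; e = 65537;
  d = PowerMod[e, -1, LCM[p - 1, q - 1, r - 1]];
  m = RandomInteger[n - 1]; c = PowerMod[m, e, n];
  (* plain decryption: one big modular exponentiation *)
  First[Timing[PowerMod[c, d, n]]]
  (* CRT decryption: three small exponentiations recombined, the MPRSA trick *)
  First[Timing[ChineseRemainder[
     {PowerMod[c, Mod[d, p - 1], p],
      PowerMod[c, Mod[d, q - 1], q],
      PowerMod[c, Mod[d, r - 1], r]}, {p, q, r}]]]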
LevelScheme: A level scheme drawing and scientific figure preparation system for Mathematica
NASA Astrophysics Data System (ADS)
Caprio, M. A.
2005-09-01
LevelScheme is a scientific figure preparation system for Mathematica. The main emphasis is upon the construction of level schemes, or level energy diagrams, as used in nuclear, atomic, molecular, and hadronic physics. LevelScheme also provides a general infrastructure for the preparation of publication-quality figures, including support for multipanel and inset plotting, customizable tick mark generation, and various drawing and labeling tasks. Coupled with Mathematica's plotting functions and powerful programming language, LevelScheme provides a flexible system for the creation of figures combining diagrams, mathematical plots, and data plots. Program summaryTitle of program:LevelScheme Catalogue identifier:ADVZ Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADVZ Operating systems:Any which supports Mathematica; tested under Microsoft Windows XP, Macintosh OS X, and Linux Programming language used:Mathematica 4 Number of bytes in distributed program, including test and documentation:3 051 807 Distribution format:tar.gz Nature of problem:Creation of level scheme diagrams. Creation of publication-quality multipart figures incorporating diagrams and plots. Method of solution:A set of Mathematica packages has been developed, providing a library of level scheme drawing objects, tools for figure construction and labeling, and control code for producing the graphics.
Interactive Supercomputing’s Star-P Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edelman, Alan; Husbands, Parry; Leibman, Steve
2006-09-19
The thesis of this extended abstract is simple. High productivity comes from high level infrastructures. To measure this, we introduce a methodology that goes beyond the tradition of timing software in serial and tuned parallel modes. We perform a classroom productivity study involving 29 students who have written a homework exercise in a low level language (MPI message passing) and a high level language (Star-P with MATLAB client). Our conclusions indicate what perhaps should be of little surprise: (1) the high level language is always far easier on the students than the low level language. (2) The early versions of the high level language perform inadequately compared to the tuned low level language, but later versions substantially catch up. Asymptotically, the analogy must hold that message passing is to high level language parallel programming as assembler is to high level environments such as MATLAB, Mathematica, Maple, or even Python. We follow the Kepner method that correctly realizes that traditional speedup numbers without some discussion of the human cost of reaching these numbers can fail to reflect the true human productivity cost of high performance computing. Traditional data compares low level message passing with serial computation. With the benefit of a high level language system in place, in our case Star-P running with MATLAB client, and with the benefit of a large data pool: 29 students, each running the same code ten times on three evolutions of the same platform, we can methodically demonstrate the productivity gains. To date we are not aware of any high level system as extensive and interoperable as Star-P, nor are we aware of an experiment of this kind performed with this volume of data.
ERIC Educational Resources Information Center
Ardiç, Mehmet Alper; Isleyen, Tevfik
2017-01-01
The purpose of this study is to determine how well high school mathematics teachers achieve mathematics instruction via computer algebra systems and how these practices are reflected in the classroom. Three high school mathematics teachers employed at different types of schools participated in the study. In the beginning of this…
Automatic calculation of supersymmetric renormalization group equations and loop corrections
NASA Astrophysics Data System (ADS)
Staub, Florian
2011-03-01
SARAH is a Mathematica package for studying supersymmetric models. It calculates for a given model the masses, tadpole equations and all vertices at tree-level. This information can be used by SARAH to write model files for CalcHep/CompHep or FeynArts/FormCalc. In addition, the second version of SARAH can derive the renormalization group equations for the gauge couplings, parameters of the superpotential and soft-breaking parameters at one- and two-loop level. Furthermore, it calculates the one-loop self-energies and the one-loop corrections to the tadpoles. SARAH can handle all N=1 SUSY models whose gauge sector is a direct product of SU(N) and U(1) gauge groups. The particle content of the model can be an arbitrary number of chiral superfields transforming as any irreducible representation with respect to the gauge groups. To implement a new model, the user just has to define the gauge sector, the particle content, the superpotential and the field rotations to mass eigenstates. Program summary Program title: SARAH Catalogue identifier: AEIB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 97 577 No. of bytes in distributed program, including test data, etc.: 2 009 769 Distribution format: tar.gz Programming language: Mathematica Computer: All systems that Mathematica is available for Operating system: All systems that Mathematica is available for Classification: 11.1, 11.6 Nature of problem: A supersymmetric model is usually characterized by the particle content, the gauge sector and the superpotential. It is a time-consuming process to obtain all necessary information for phenomenological studies from these basic ingredients. Solution method: SARAH calculates the complete Lagrangian for a given model whose gauge sector can be any direct product of SU(N) gauge groups. The chiral superfields can transform as any irreducible representation with respect to these gauge groups and it is possible to handle an arbitrary number of symmetry breakings or particle rotations. Also the gauge fixing terms can be specified. Using this information, SARAH derives the mass matrices and Feynman rules at tree-level and generates model files for CalcHep/CompHep and FeynArts/FormCalc. In addition, it can calculate the renormalization group equations at one- and two-loop level and the one-loop corrections to the one- and two-point functions. Unusual features: SARAH just needs the superpotential and gauge sector as input and not the complete Lagrangian. Therefore, the complete implementation of new models is done in minutes. Running time: Measured CPU time for the evaluation of the MSSM on an Intel Q8200 with 2.33 GHz. Calculating the complete Lagrangian: 12 seconds. Calculating all vertices: 75 seconds. Calculating the one- and two-loop RGEs: 50 seconds. Calculating the one-loop corrections: 7 seconds. Writing a FeynArts file: 1 second. Writing a CalcHep/CompHep file: 6 seconds. Writing the LaTeX output: 1 second.
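Based on the workflow in the summary, an MSSM session collapses to a few commands (names as given in the SARAH manual; treat the exact calls as indicative rather than authoritative):

  << SARAH`
  Start["MSSM"];  (* load the predefined model: gauge sector, superfields, superpotential *)
  CalcRGEs[];     (* derive the one- and two-loop renormalization group equations *)
  MakeCHep[];     (* write CalcHep/CompHep model files *)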
Package-X 2.0: A Mathematica package for the analytic calculation of one-loop integrals
NASA Astrophysics Data System (ADS)
Patel, Hiren H.
2017-09-01
This article summarizes new features and enhancements of the first major update of Package-X. Package-X 2.0 can now generate analytic expressions for arbitrarily high rank dimensionally regulated tensor integrals with up to four distinct propagators, each with arbitrary integer weight, near an arbitrary even number of spacetime dimensions, giving UV divergent, IR divergent, and finite parts at (almost) any real-valued kinematic point. Additionally, it can generate multivariable Taylor series expansions of these integrals around any non-singular kinematic point to arbitrary order. All special functions and abbreviations output by Package-X 2.0 support Mathematica's arbitrary precision evaluation capabilities to deal with issues of numerical stability. Finally, tensor algebraic routines of Package-X have been polished and extended to support open fermion chains both on and off shell. The documentation (equivalent to over 100 printed pages) is accessed through Mathematica's Wolfram Documentation Center and contains information on all Package-X symbols, with over 300 basic usage examples, 3 project-scale tutorials, and instructions on linking to FEYNCALC and LOOPTOOLS. Program files doi:http://dx.doi.org/10.17632/yfkwrd4d5t.1 Licensing provisions: CC by 4.0 Programming language: Mathematica (Wolfram Language) Journal reference of previous version: H. H. Patel, Comput. Phys. Commun 197, 276 (2015) Does the new version supersede the previous version?: Yes Summary of revisions: Extension to four point one-loop integrals with higher powers of denominator factors, separate extraction of UV and IR divergent parts, testing for power IR divergences, construction of Taylor series expansions of one-loop integrals, numerical evaluation with arbitrary precision arithmetic, manipulation of fermion chains, improved tensor algebraic routines, and much expanded documentation. Nature of problem: Analytic calculation of one-loop integrals in relativistic quantum field theory. Solution method: Passarino-Veltman reduction formula, Denner-Dittmaier reduction formulae, and additional algorithms described in the manuscript. Restrictions: One-loop integrals are limited to those involving no more than four denominator factors.
The high-energy physicistʼs guide to MathLink
NASA Astrophysics Data System (ADS)
Hahn, T.
2012-03-01
MathLink is Wolfram Research's protocol for communicating with the Mathematica Kernel and is used extensively in their own Notebook Frontends. The Mathematica Book insinuates that linking C programs with MathLink is straightforward but in practice there are quite a number of stumbling blocks, in particular in cross-language and cross-platform usage. This write-up tries to clarify the main issues and hopefully makes it easier for software authors to set up Mathematica interfacing in a portable way.
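From the Mathematica side the mechanics reduce to Install/Uninstall; the classic 'addtwo' example shipped with the MathLink developer kit looks like this (assuming the compiled binary is on the system path):

  link = Install["addtwo"];   (* launch the MathLink-compiled C program *)
  AddTwo[2, 3]                (* the C function is called through the link; returns 5 *)
  Uninstall[link]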
Remote detection of carbon monoxide by FTIR for simulating field detection in industrial process
NASA Astrophysics Data System (ADS)
Gao, Qiankun; Liu, Wenqing; Zhang, Yujun; Gao, Mingguang; Xu, Liang; Li, Xiangxian; Jin, Ling
2016-10-01
In order to monitor carbon monoxide in industrial production, we developed a passive gas radiation measurement system based on Fourier transform infrared spectroscopy and carried out an infrared radiation measurement experiment for carbon monoxide detection in a simulated industrial production environment with this system. The principle, conditions, device and data processing method of the experiment are introduced in this paper. In order to address the problem of light path jitter in an actual industrial field, we simulated the noise of the industrial environment. We exploit the strengths of the MATHEMATICA software in graphics processing and symbolic computation for the data processing, improving the signal-to-noise ratio and suppressing noise. Based on the HITRAN database, the nonlinear least squares fitting method was used to calculate the CO concentration from the spectra before and after the data processing. Comparison of the calculated concentrations shows that the data processing with MATHEMATICA is reliable and necessary in an industrial production environment.
NASA Technical Reports Server (NTRS)
Watson, A. B.; Solomon, J. A.
1997-01-01
Psychophysica is a set of software tools for psychophysical research. Functions are provided for calibrated visual displays, for fitting and plotting of psychometric functions, and for the QUEST adaptive staircase procedure. The functions are written in the Mathematica programming language.
Symbolic algebra approach to the calculation of intraocular lens power following cataract surgery
NASA Astrophysics Data System (ADS)
Hjelmstad, David P.; Sayegh, Samir I.
2013-03-01
We present a symbolic approach based on matrix methods that allows for the analysis and computation of intraocular lens power following cataract surgery. We extend the basic matrix approach corresponding to paraxial optics to include astigmatism and other aberrations. The symbolic approach allows for a refined analysis of the potential sources of errors ("refractive surprises"). We demonstrate the computation of lens powers, including toric lenses that correct for both defocus (myopia, hyperopia) and astigmatism. A specific implementation in Mathematica provides an elegant and powerful method for the design and analysis of these intraocular lenses.
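The paraxial backbone of such a computation fits in a few symbolic lines. The toy model below (thin cornea of power pc, implant of power P at reduced distance dist behind it, retina at reduced axial length axial; all symbols are ours, not the authors') solves the emmetropia condition for P:

  refract[P_] := {{1, 0}, {-P, 1}};    (* thin-element refraction matrix *)
  translate[t_] := {{1, t}, {0, 1}};   (* propagation over a reduced distance t *)
  sys[P_] := translate[axial - dist].refract[P].translate[dist].refract[pc];
  (* rays from infinity focus on the retina when the (1,1) entry of the system matrix vanishes *)
  Solve[sys[P][[1, 1]] == 0, P] // Simplify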
Agent-based modeling and systems dynamics model reproduction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, M. J.; Macal, C. M.
2009-01-01
Reproducibility is a pillar of the scientific endeavour. We view computer simulations as laboratories for electronic experimentation and therefore as tools for science. Recent studies have addressed model reproduction and found it to be surprisingly difficult to replicate published findings. There have been enough failed simulation replications to raise the question, 'can computer models be fully replicated?' This paper answers in the affirmative by reporting on a successful reproduction study using Mathematica, Repast and Swarm for the Beer Game supply chain model. The reproduction process was valuable because it demonstrated the original result's robustness across modelling methodologies and implementation environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dias, Mafalda; Seery, David; Frazer, Jonathan, E-mail: m.dias@sussex.ac.uk, E-mail: j.frazer@sussex.ac.uk, E-mail: a.liddle@sussex.ac.uk
We describe how to apply the transport method to compute inflationary observables in a broad range of multiple-field models. The method is efficient and encompasses scenarios with curved field-space metrics, violations of slow-roll conditions and turns of the trajectory in field space. It can be used for an arbitrary mass spectrum, including massive modes and models with quasi-single-field dynamics. In this note we focus on practical issues. It is accompanied by a Mathematica code which can be used to explore suitable models, or as a basis for further development.
Specialized Color Function for Display of Signed Data
NASA Technical Reports Server (NTRS)
Kalb, Virginia
2008-01-01
This Mathematica script defines a color function to be used with Mathematica's plotting modules for differentiating data attaining both positive and negative values. Positive values are shown as shades of blue, and negative values are shown in red. The intensity of the color reflects the absolute value of the data value.
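The behavior described can be reproduced in a couple of lines (a sketch of the idea, not the NASA script itself): blue for positive, red for negative, intensity following the absolute value, handed to a plotting module via ColorFunction:

  (* blend from white toward blue (positive) or red (negative) as |v| grows; expects v in [-1, 1] *)
  signedColor[v_] := If[v >= 0, Blend[{White, Blue}, Abs[v]], Blend[{White, Red}, Abs[v]]];
  DensityPlot[Sin[x] Sin[y], {x, -Pi, Pi}, {y, -Pi, Pi},
    ColorFunction -> signedColor, ColorFunctionScaling -> False]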
Equilibrium-Staged Separations Using Matlab and Mathematica
ERIC Educational Resources Information Center
Binous, Housam
2008-01-01
We show a new approach, based on the utilization of Matlab and Mathematica, for solving liquid-liquid extraction and binary distillation problems. In addition, the author shares his experience using these two software packages to teach equilibrium-staged separations at the National Institute of Applied Sciences and Technology. (Contains 7 figures.)
Evidence Scan of Work Experience Programs. Mathematica Reference Number: 06747-100
ERIC Educational Resources Information Center
Sattar, Samina
2010-01-01
This study, being conducted through the Center for Improving Research Evidence (CIRE) at Mathematica Policy Research for the venture philanthropy organization REDF (formerly The Roberts Enterprise Development Fund), presents the evidence on the effectiveness of interventions that include work experience as a strategy to improve employment outcomes…
Advanced Chemistry Collection, 2nd Edition
NASA Astrophysics Data System (ADS)
2001-11-01
Software requirements are given in Table 3. Some programs have additional special requirements. Please see the individual program abstracts at JCE Online or the documentation included on the CD-ROM for more specific information. Table 3. General software requirements for the Advanced Chemistry Collection.
| Computer | System | Other Software (Required by one or more programs) |
| Mac OS compatible | System 7.6.1 or higher | Acrobat Reader (included); Mathcad; Mathematica; MacMolecule2; QuickTime 4; HyperCard Player |
| Windows compatible | Windows 2000, 98, 95, NT 4 | Acrobat Reader (included); Mathcad; Mathematica; PCMolecule2; QuickTime 4; HyperChem; Excel |
New Double-Periodic Soliton Solutions for the (2+1)-Dimensional Breaking Soliton Equation
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Tian, Yu
2018-05-01
Under investigation is the (2+1)-dimensional breaking soliton equation. Based on special ansatz functions and the bilinear form, some entirely new double-periodic soliton solutions for the (2+1)-dimensional breaking soliton equation are presented. With the help of the symbolic computation software Mathematica, many important and interesting properties of these obtained solutions are revealed with some figures. Supported by National Natural Science Foundation of China under Grant No. 61377067
Computerized proof techniques for undergraduates
NASA Astrophysics Data System (ADS)
Smith, Christopher J.; Tefera, Akalu; Zeleke, Aklilu
2012-12-01
The use of computer algebra systems such as Maple and Mathematica is becoming increasingly important and widespread in mathematics learning, teaching and research. In this article, we present computerized proof techniques of Gosper, Wilf-Zeilberger and Zeilberger that can be used for enhancing the teaching and learning of topics in discrete mathematics. We demonstrate by examples how one can use these computerized proof techniques to raise students' interests in the discovery and proof of mathematical identities and enhance their problem-solving skills.
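Mathematica applies Gosper-style hypergeometric summation inside Sum, so the flavor of these techniques can be shown without extra packages; a classroom-scale sketch:

  (* Gosper's algorithm finds the antidifference of this hypergeometric term *)
  Sum[k k!, {k, 1, n}]   (* returns (n + 1)! - 1 *)
  (* a Zeilberger-flavored identity check for symbolic integer n *)
  FullSimplify[Sum[Binomial[n, k], {k, 0, n}] == 2^n,
    Assumptions -> n >= 0 && Element[n, Integers]]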
Exploring Fourier Series and Gibbs Phenomenon Using Mathematica
ERIC Educational Resources Information Center
Ghosh, Jonaki B.
2011-01-01
This article describes a laboratory module on Fourier series and Gibbs phenomenon which was undertaken by 32 Year 12 students. It shows how the use of CAS played the role of an "amplifier" by making higher level mathematical concepts accessible to students of year 12. Using Mathematica students were able to visualise Fourier series of…
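A minimal version of the exploration (our sketch): partial sums of the square-wave Fourier series, whose persistent overshoot of roughly 9% near each jump is the Gibbs phenomenon:

  (* m-term Fourier partial sum of a unit square wave *)
  s[x_, m_] := (4/Pi) Sum[Sin[(2 k - 1) x]/(2 k - 1), {k, 1, m}];
  Plot[{s[x, 5], s[x, 25], Sign[Sin[x]]}, {x, 0, 2 Pi},
    PlotLegends -> {"5 terms", "25 terms", "square wave"}]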
Computer-Based Mathematics Instructions for Engineering Students
NASA Technical Reports Server (NTRS)
Khan, Mustaq A.; Wall, Curtiss E.
1996-01-01
Almost every engineering course involves mathematics in one form or another. The analytical process of developing mathematical models is very important for engineering students. However, the computational process involved in the solution of some mathematical problems may be very tedious and time consuming. There is a significant amount of mathematical software such as Mathematica, Mathcad, and Maple designed to aid in the solution of these instructional problems. The use of these packages in classroom teaching can greatly enhance understanding, and save time. Integration of computer technology in mathematics classes, without de-emphasizing the traditional analytical aspects of teaching, has proven very successful and is becoming almost essential. Sample computer laboratory modules are developed for presentation in the classroom setting. This is accomplished through the use of overhead projectors linked to graphing calculators and computers. Model problems are carefully selected from different areas.
BCM-2.0 - The new version of the computer code "Basic Channeling with Mathematica©"
NASA Astrophysics Data System (ADS)
Abdrashitov, S. V.; Bogdanov, O. V.; Korotchenko, K. B.; Pivovarov, Yu. L.; Rozhkova, E. I.; Tukhfatullin, T. A.; Eikhorn, Yu. L.
2017-07-01
A new symbolic-numerical code devoted to the investigation of channeling phenomena in the periodic potential of a crystal has been developed. The code has been written in the Wolfram Language, taking advantage of the analytical programming method. The newly developed packages were successfully applied to simulate scattering, radiation, electron-positron pair production and other effects connected with the channeling of relativistic particles in aligned crystals. The results of the simulation have been validated against data from channeling experiments carried out at SAGA LS.
ERIC Educational Resources Information Center
Hutem, Artit; Kerdmee, Supoj
2013-01-01
The propose of this study is to study Physics Learning Achievement, projectile motion, using the Mathematica program of Faculty of Science and Technology Phetchabun Rajabhat University students, comparing with Faculty of Science and Technology Phetchabun Rajabhat University students who study the projectile motion experiment set. The samples are…
Robust blood-glucose control using Mathematica.
Kovács, Levente; Paláncz, Béla; Benyó, Balázs; Török, László; Benyó, Zoltán
2006-01-01
A robust control design in the frequency domain using Mathematica is presented for the regulation of the glucose level in type I diabetes patients under intensive care. The method originally proposed under Mathematica by Helton and Merino, now with an improved disturbance rejection constraint inequality, is employed, using a three-state minimal patient model. The robustness of the resulting high-order linear controller is demonstrated by nonlinear closed-loop simulation in state space for standard meal disturbances, and is compared with an H-infinity design implemented with the mu-toolbox of Matlab. The controller, designed with the model parameters representing the most favorable plant dynamics for control purposes, operates properly even for the parameter values of the worst-case scenario.
Pycellerator: an arrow-based reaction-like modelling language for biological simulations.
Shapiro, Bruce E; Mjolsness, Eric
2016-02-15
We introduce Pycellerator, a Python library for reading Cellerator arrow notation from standard text files, converting it to differential equations, generating stand-alone Python solvers, and optionally running and plotting the solutions. All of the original Cellerator arrows, which represent reactions ranging from mass action, Michaelis-Menten-Henri (MMH) and Gene-Regulation (GRN) to Monod-Wyman-Changeux (MWC), user-defined reactions and enzymatic expansions (KMech), were previously represented with the Mathematica extended character set. These are now typed as reaction-like commands in ASCII text files that are read by Pycellerator, which includes a Python command line interface (CLI), a Python application programming interface (API) and an iPython notebook interface. Cellerator reaction arrows are now input in text files. The arrows are parsed by Pycellerator and translated into differential equations in Python, and Python code is automatically generated to solve the system. Time courses are produced by executing the auto-generated Python code. Users have full freedom to modify the solver and utilize the complete set of standard Python tools. The new libraries are completely independent of the old Cellerator software and do not require Mathematica. All software is available (GPL) from the github repository at https://github.com/biomathman/pycellerator/releases. Details, including installation instructions and a glossary of acronyms and terms, are given in the Supplementary information.
Promoting Active Learning: The Use of Computational Software Programs
NASA Astrophysics Data System (ADS)
Dickinson, Tom
The increased emphasis on active learning in essentially all disciplines is proving beneficial in terms of a student's depth of learning, retention, and completion of challenging courses. Formats labeled flipped, hybrid and blended facilitate face-to-face active learning. To be effective, students need to absorb a significant fraction of the course material prior to class, e.g., using online lectures and reading assignments. Getting students to assimilate and at least partially understand this material prior to class can be extremely difficult. As an aid to achieving this preparation as well as enhancing depth of understanding, we find the use of software programs such as Mathematica® or MATLAB® very helpful. We have written several Mathematica® applications and student exercises for use in a blended-format, two-semester E&M course. Formats include tutorials, simulations, graded and non-graded quizzes, walk-through problems, exploration and interpretation exercises, and numerical solutions of complex problems. A good portion of this activity involves student-written code. We will discuss the efficacy of these applications, their role in promoting active learning, and the range of possible uses of this basic scheme in other classes.
Satellite Orbit Under Influence of a Drag - Analytical Approach
NASA Astrophysics Data System (ADS)
Martinović, M. M.; Šegan, S. D.
2017-12-01
The report studies changes in the orbital elements of artificial Earth satellites under the influence of atmospheric drag. In order to make the results applicable to many future cases, an analytical interpretation of the orbital element perturbations is given via useful, but very long, expressions. The development is based on the TD88 air density model, recently upgraded with some additional terms. Some expressions and formulae were developed with the computer algebra system Mathematica and tested in some hypothetical cases. The results are in good agreement with the iterative (numerical) approach.
Interaction phenomenon to dimensionally reduced p-gBKP equation
NASA Astrophysics Data System (ADS)
Zhang, Runfa; Bilige, Sudao; Bai, Yuexing; Lü, Jianqing; Gao, Xiaoqing
2018-02-01
Based on a search for combinations of a quadratic function with an exponential (or hyperbolic cosine) function in the Hirota bilinear form of the dimensionally reduced p-gBKP equation, eight classes of interaction solutions are derived via symbolic computation with Mathematica. The submergence phenomenon, presented to illustrate the dynamical features of these obtained solutions, is observed in three-dimensional plots and density plots for particular choices of the parameters involved in the exponential (or hyperbolic cosine) function and the quadratic function. It is proved that the interaction between the two solitary waves is inelastic.
An efficient quantum circuit analyser on qubits and qudits
NASA Astrophysics Data System (ADS)
Loke, T.; Wang, J. B.
2011-10-01
This paper presents a highly efficient decomposition scheme and its associated Mathematica notebook for the analysis of complicated quantum circuits comprised of single/multiple qubit and qudit quantum gates. In particular, this scheme reduces the evaluation of multiple unitary gate operations with many conditionals to just two matrix additions, regardless of the number of conditionals or gate dimensions. This improves significantly the capability of a quantum circuit analyser implemented in a classical computer. This is also the first efficient quantum circuit analyser to include qudit quantum logic gates.
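The 'two matrix additions' idea can be seen on the familiar CNOT gate, which is exactly a projector-weighted sum of two Kronecker products (a generic sketch, not the authors' notebook):

  p0 = {{1, 0}, {0, 0}}; p1 = {{0, 0}, {0, 1}};   (* projectors |0><0| and |1><1| *)
  id = IdentityMatrix[2]; x = {{0, 1}, {1, 0}};   (* identity and Pauli-X *)
  (* conditional gate: control-|0> branch tensor identity, plus control-|1> branch tensor X *)
  cnot = KroneckerProduct[p0, id] + KroneckerProduct[p1, x];
  MatrixForm[cnot]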
Astronomy Education using the Web and a Computer Algebra System
NASA Astrophysics Data System (ADS)
Flurchick, K. M.; Culver, Roger B.; Griego, Ben
2013-04-01
The combination of a web server and a Computer Algebra System to provide students with the ability to explore and investigate astronomical concepts presented in a class can help student understanding. This combination of technologies provides a framework to extend the classroom experience with independent student exploration. In this presentation we report on the development of this web-based material and some initial results from students making use of the computational tools via webMathematica™. The material developed allows the student to analyze and investigate a variety of astronomical phenomena, including topics such as the Runge-Lenz vector, descriptions of the orbits of some of the exoplanets, Bode's law and other topics related to celestial mechanics. The server-based Computer Algebra System allows for computations without installing software on the student's computer and provides a powerful environment to explore the various concepts. The current system is installed at North Carolina A&T State University and has been used in several undergraduate classes.
Lattice gas methods for computational aeroacoustics
NASA Technical Reports Server (NTRS)
Sparrow, Victor W.
1995-01-01
This paper presents the lattice gas solution to the category 1 problems of the ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics. The first and second problems were solved for Delta t = Delta x = 1, and additionally the second problem was solved for Delta t = 1/4 and Delta x = 1/2. The results are striking: even for these large time and space grids the lattice gas numerical solutions are almost indistinguishable from the analytical solutions. A simple bug in the Mathematica code was found in the solutions submitted for comparison, and the comparison plots shown at the end of this volume show the bug. An Appendix to the present paper shows an example lattice gas solution with and without the bug.
Full thermomechanical coupling in modelling of micropolar thermoelasticity
NASA Astrophysics Data System (ADS)
Murashkin, E. V.; Radayev, Y. N.
2018-04-01
The present paper is devoted to plane harmonic waves of displacements and microrotations propagating in fully coupled thermoelastic continua. The analysis is carried out in the framework of the linear conventional thermoelastic micropolar continuum model. The reduced energy balance equation and the special form of the Helmholtz free energy are discussed. The constitutive constants providing full coupling of the equations of motion and heat conduction are considered. The dispersion equation is derived and analysed in the form of a product of bi-cubic and bi-quadratic polynomials. The equation is analyzed with the computer algebra system Mathematica. Algebraic forms expressed by complex multivalued square and cubic radicals are obtained for the wavenumbers of transverse and longitudinal waves. The exact forms of the wavenumbers of plane harmonic coupled thermoelastic waves are computed.
SYMBMAT: Symbolic computation of quantum transition matrix elements
NASA Astrophysics Data System (ADS)
Ciappina, M. F.; Kirchner, T.
2012-08-01
We have developed a set of Mathematica notebooks to compute symbolically quantum transition matrices relevant for atomic ionization processes. The utilization of a symbolic language allows us to obtain analytical expressions for the transition matrix elements required in charged-particle and laser induced ionization of atoms. Additionally, by using a few simple commands, it is possible to export these symbolic expressions to standard programming languages, such as Fortran or C, for the subsequent computation of differential cross sections or other observables. One of the main drawbacks in the calculation of transition matrices is the tedious algebraic work required when initial states other than the simple hydrogenic 1s state need to be considered. Using these notebooks the work is dramatically reduced and it is possible to generate exact expressions for a large set of bound states. We present explicit examples of atomic collisions (in First Born Approximation and Distorted Wave Theory) and laser-matter interactions (within the Dipole and Strong Field Approximations and different gauges) using both hydrogenic wavefunctions and Slater-Type Orbitals with arbitrary nlm quantum numbers as initial states. Catalogue identifier: AEMI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 71 628 No. of bytes in distributed program, including test data, etc.: 444 195 Distribution format: tar.gz Programming language: Mathematica Computer: Single machines using Linux or Windows (with cores with any clock speed, cache memory and bits in a word) Operating system: Any OS that supports Mathematica. The notebooks have been tested under Windows and Linux and with versions 6.x, 7.x and 8.x Classification: 2.6 Nature of problem: The notebooks generate analytical expressions for quantum transition matrix elements required in diverse atomic processes: ionization by ion, electron, or photon impact and ionization within the framework of strong field physics. In charged-particle collisions approaches based on perturbation theory enjoy widespread utilization. Accordingly, we have chosen the First Born Approximation and Distorted Wave theories as examples. In light-matter interactions, the main ingredient for many types of calculations is the dipole transition matrix in its different formulations, i.e. length, velocity, and acceleration gauges. In all these cases the transitions of interest occur between a bound state and a continuum state which can be described in different ways. With the notebooks developed in the present work it is possible to calculate transition matrix elements analytically for any set of quantum numbers nlm of initial hydrogenic states or Slater-Type Orbitals and for plane waves or Coulomb waves as final continuum states. Solution method: The notebooks employ symbolic computation to generate analytical expressions for transition matrix elements used in both collision and light-matter interaction physics. fba_hyd.nb - This notebook computes analytical expressions for the transition matrix of collision-induced ionization in the First Born Approximation (FBA). The transitions considered are from a bound hydrogenic state with arbitrary quantum numbers nlm to a continuum state represented by a plane wave (PW) or a Coulomb wave (CW). 
distorted_hyd.nb - This notebook computes analytical expressions for the transition matrix of collision-induced ionization in Distorted Wave (DW) theories. The transitions considered are from a (distorted) bound hydrogenic state with arbitrary quantum numbers nlm to a distorted-wave continuum state. The computations are based on scalar and vectorial integrals (see the text for details).
dipoleLength_hyd.nb - This notebook computes analytical expressions for the dipole transition matrix in length gauge. The transitions considered are from a bound hydrogenic state with arbitrary quantum numbers nlm to a continuum state represented by a PW (the Strong Field Approximation (SFA)) or a CW (the Coulomb-Volkov Approximation (CVA)).
dipoleVelocity_hyd.nb - This notebook computes analytical expressions for the dipole transition matrix in velocity gauge. The transitions considered are from a bound hydrogenic state with arbitrary quantum numbers nlm to a continuum state represented by a PW (the SFA) or a CW (the CVA).
dipoleAcceleration_hyd.nb - This notebook computes analytical expressions for the dipole transition matrix in acceleration gauge. The transitions considered are from a bound hydrogenic state with arbitrary quantum numbers nlm to a continuum state represented by a PW (the SFA). For the case of the CVA we only include the transition from the 1s state to a continuum state represented by a CW.
fba_STO.nb - This notebook computes analytical expressions for the transition matrix of collision-induced ionization in the FBA. The transitions considered are from a Slater-Type Orbital (STO) with arbitrary quantum numbers nlm to a continuum state represented by a PW or a CW.
distorted_STO.nb - This notebook computes analytical expressions for the transition matrix of collision-induced ionization in DW theories. The transitions considered are from a (distorted) STO with arbitrary quantum numbers nlm to a distorted-wave continuum state. The computations are based on scalar and vectorial integrals (see the text for details).
dipoleLength_STO.nb - This notebook computes analytical expressions for the dipole transition matrix in length gauge. The transitions considered are from an STO with arbitrary quantum numbers nlm to a continuum state represented by a PW (the SFA) or a CW (the CVA).
dipoleVelocity_STO.nb - This notebook computes analytical expressions for the dipole transition matrix in velocity gauge. The transitions considered are from an STO with arbitrary quantum numbers nlm to a continuum state represented by a PW (the SFA) or a CW (the CVA).
dipoleAcceleration_STO.nb - This notebook computes analytical expressions for the dipole transition matrix in acceleration gauge. The transitions considered are from an STO with arbitrary quantum numbers nlm to a continuum state represented by a PW (the SFA).
The symbolic expressions obtained within each notebook can be exported to standard programming languages such as Fortran or C using the Format.m package (see the text and Ref. Sofroniou (1993) [16] for details).
Running time: Computational times vary according to the transition matrix selected and quantum numbers nlm of the initial state used. The typical running time is several minutes, but it will take longer for large values of nlm.
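The flavor of what these notebooks automate shows up in a single line: the FBA amplitude from the hydrogen 1s state contains the Fourier transform of Exp[-r], which Mathematica evaluates in closed form (our one-integral illustration, not the notebooks' full machinery; it should return 8 Pi/(1 + q^2)^2):

  (* 3D Fourier transform of Exp[-r]; u = cos(theta), azimuthal angle already integrated *)
  Assuming[q > 0,
    2 Pi Integrate[Exp[-r] Exp[I q r u] r^2, {r, 0, Infinity}, {u, -1, 1}]]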
Analytical fitting model for rough-surface BRDF.
Renhorn, Ingmar G E; Boreman, Glenn D
2008-08-18
A physics-based model is developed for rough surface BRDF, taking into account angles of incidence and scattering, effective index, surface autocovariance, and correlation length. Shadowing is introduced on surface correlation length and reflectance. Separate terms are included for surface scatter, bulk scatter and retroreflection. Using the FindFit function in Mathematica, the functional form is fitted to BRDF measurements over a wide range of incident angles. The model has fourteen fitting parameters; once these are fixed, the model accurately describes scattering data over two orders of magnitude in BRDF without further adjustment. The resulting analytical model is convenient for numerical computations.
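Since FindFit is a documented built-in, the fitting step can be sketched generically; the synthetic lobe below stands in for measured BRDF data (the authors' fourteen-parameter model is not reproduced):

  (* synthetic scatter-lobe data with a little noise *)
  data = Table[{t, 0.8 Exp[-3 t^2] + 0.05 + RandomReal[{-0.01, 0.01}]},
    {t, -1, 1, 0.05}];
  (* fit amplitude a, width b and diffuse floor c of a toy lobe model *)
  FindFit[data, a Exp[-b t^2] + c, {{a, 1}, {b, 2}, {c, 0.1}}, t]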
Computing the Baker-Campbell-Hausdorff series and the Zassenhaus product
NASA Astrophysics Data System (ADS)
Weyrauch, Michael; Scholz, Daniel
2009-09-01
The Baker-Campbell-Hausdorff (BCH) series and the Zassenhaus product are of fundamental importance for the theory of Lie groups and their applications in physics and physical chemistry. Standard methods for the explicit construction of the BCH and Zassenhaus terms yield polynomial representations, which must be translated into the usually required commutator representation. We prove that a new translation proposed recently yields a correct representation of the BCH and Zassenhaus terms. This representation entails fewer terms than the well-known Dynkin-Specht-Wever representation, which is of relevance for practical applications. Furthermore, various methods for the computation of the BCH and Zassenhaus terms are compared, and a new efficient approach for the calculation of the Zassenhaus terms is proposed. Mathematica implementations for the most efficient algorithms are provided together with comparisons of efficiency.
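For small matrices the series can be checked against built-ins alone, a convenient sanity test for any implementation of the kind compared here (our sketch, through third order):

  a = {{0, 1/10}, {0, 0}}; b = {{0, 0}, {1/5, 0}};
  comm[x_, y_] := x.y - y.x;
  exact = MatrixLog[N[MatrixExp[a].MatrixExp[b]]];
  (* BCH through third order: A + B + [A,B]/2 + ([A,[A,B]] + [B,[B,A]])/12 *)
  bch3 = a + b + comm[a, b]/2 + (comm[a, comm[a, b]] + comm[b, comm[b, a]])/12;
  Max[Abs[exact - N[bch3]]]   (* small residual from the truncated higher orders *)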
Proceedings of the Fourth Annual U.S. Army Conference on Applied Statistics, 21-23 October 1998.
1999-11-01
1833) published a memoir Nouvelles méthodes pour la détermination des comètes in which he introduced and named the method of least squares. In 1809... 251, 1972. 2. Sprott, D. A. "Gauss's Contributions to Statistics." Historia Mathematica, vol. 5, pp. 183-203, 1978. 3. Stigler, S. M. "An Attack on Gauss... Published by Legendre in 1820." Historia Mathematica, vol. 4, pp. 31-35, 1977. 4. Stigler, S. M. "Gauss and the Invention of Least Squares." The
NASA Astrophysics Data System (ADS)
Favata, Antonino; Micheletti, Andrea; Ryu, Seunghwa; Pugno, Nicola M.
2016-10-01
An analytical benchmark and a simple consistent Mathematica program are proposed for graphene and carbon nanotubes, which may serve to test any molecular dynamics code implemented with REBO potentials. By exploiting the benchmark, we checked the results produced by LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) when adopting the second-generation Brenner potential, made evident that this code in its current implementation produces results which are offset from those of the benchmark by a significant amount, and provide evidence of the reason.
A Flow-Channel Analysis for the Mars Hopper
DOE Office of Scientific and Technical Information (OSTI.GOV)
W. Spencer Cooley
The Mars Hopper is an exploratory vehicle designed to fly on Mars using carbon dioxide from the Martian atmosphere as a rocket propellant. The propellant gasses are thermally heated while traversing a radioisotope thermal rocket (RTR) engine's core. This core is comprised of a radioisotope surrounded by a heat capacitive material interspersed with tubes for the propellant to travel through. These tubes, or flow channels, can be manufactured in various cross-sectional shapes such as a special four-point star or the traditional circle. Analytical heat transfer and computational fluid dynamics (CFD) analyses were performed using flow channels with either a circle or a star cross-sectional shape. The nominal total inlet pressure was specified at 2,805,000 Pa, and the outlet pressure was set to 2,785,000 Pa. The CO2 inlet temperature was 300 K, and the channel wall was 1200 K. The steady-state CFD simulations computed the smooth-walled star shape's outlet temperature to be 959 K on the finest mesh. The smooth-walled circle's outlet temperature was 902 K. A circle with a surface roughness specification of 0.01 mm gave 946 K, and one of 0.1 mm yielded 989 K. The effects of a slightly varied inlet pressure were also examined. The analytical calculations were based on the mass flow rates computed in the CFD simulations and provided significantly higher outlet temperature results while displaying the same comparison trends. Research relating to the flow channel heat transfer studies was also done. Mathematical methods to geometrically match the cross-sectional areas of the circle and star, along with a square and an equilateral triangle, were derived. A Wolfram Mathematica 8 module was programmed to analyze CFD results using Richardson extrapolation and calculate the grid convergence index (GCI). A Mathematica notebook, also composed, computes and graphs the bulk mean temperature along a flow channel's length while the user dynamically provides the input variables, allowing their effects on the temperature to be more easily observed.
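The Richardson/GCI module mentioned above boils down to two formulas, sketched here with hypothetical outlet temperatures on three systematically refined meshes (the three values and the refinement ratio are made up for illustration):

  (* hypothetical solutions on coarse, medium, fine meshes; refinement ratio rr = 2 *)
  {f3, f2, f1} = {951., 957., 959.}; rr = 2;
  pObs = Log[Abs[(f3 - f2)/(f2 - f1)]]/Log[rr]   (* observed order of convergence *)
  gci = 1.25 Abs[(f2 - f1)/f1]/(rr^pObs - 1)     (* fine-grid GCI with safety factor 1.25 *)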
Two-spectral Yang-Baxter operators in topological quantum computation
NASA Astrophysics Data System (ADS)
Sanchez, William F.
2011-05-01
One of the current trends in quantum computing is the application of algebraic-topological methods in the design of new algorithms and quantum computers, giving rise to topological quantum computing. One of its tools is the Yang-Baxter equation, whose solutions are interpreted as universal quantum gates. Lately, more general Yang-Baxter equations have been investigated, with progress on two-spectral equations and Yang-Baxter systems. This paper applies these new findings to topological quantum computation: specifically, it proposes two-spectral Yang-Baxter operators as universal quantum gates for two-qubit and two-qutrit systems, obtaining 4×4 and 9×9 matrices, respectively, and elaborates the corresponding Hamiltonians by the use of the computer algebra software Mathematica® and its Qucalc package. In addition, possible physical systems to which the obtained Yang-Baxter operators can be applied are considered. The present work demonstrates the utility of the Yang-Baxter equation for generating universal quantum gates and the power of computer algebra for designing them; it is expected that these mathematical studies will contribute to the further development of quantum computers.
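For readers who want to experiment, verifying that a candidate matrix is a Yang-Baxter operator takes only a few lines of Mathematica. The sketch below checks the braid form of the equation, (R⊗I)(I⊗R)(R⊗I) = (I⊗R)(R⊗I)(I⊗R), for the well-known 4×4 Bell-basis solution; this is the standard single-spectral check, not the two-spectral operators constructed in the paper:

    (* braid-form Yang-Baxter check for a known 4x4 unitary solution *)
    r = 1/Sqrt[2] {{1, 0, 0, 1}, {0, 1, -1, 0}, {0, 1, 1, 0}, {-1, 0, 0, 1}};
    id = IdentityMatrix[2];
    r12 = KroneckerProduct[r, id];  (* R acting on qubits 1 and 2 of three *)
    r23 = KroneckerProduct[id, r];  (* R acting on qubits 2 and 3 *)
    r12.r23.r12 == r23.r12.r23      (* returns True *)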
NASA Technical Reports Server (NTRS)
Shapiro, Bruce E.; Levchenko, Andre; Meyerowitz, Elliot M.; Wold, Barbara J.; Mjolsness, Eric D.
2003-01-01
Cellerator describes single and multi-cellular signal transduction networks (STN) with a compact, optionally palette-driven, arrow-based notation to represent biochemical reactions and transcriptional activation. Multi-compartment systems are represented as graphs with STNs embedded in each node. Interactions include mass-action, enzymatic, allosteric and connectionist models. Reactions are translated into differential equations and can be solved numerically to generate predictive time courses, or output as systems of equations that can be read by other programs. Cellerator simulations are fully extensible and portable to any operating system that supports Mathematica, and can be indefinitely nested within larger data structures to produce highly scalable models.
Integrand-level reduction of loop amplitudes by computational algebraic geometry methods
NASA Astrophysics Data System (ADS)
Zhang, Yang
2012-09-01
We present an algorithm for the integrand-level reduction of multi-loop amplitudes of renormalizable field theories, based on computational algebraic geometry. This algorithm uses (1) the Gröbner basis method to determine the basis for integrand-level reduction, and (2) the primary decomposition of an ideal to classify all inequivalent solutions of unitarity cuts. The resulting basis and cut solutions can be used to reconstruct the integrand from unitarity cuts via polynomial fitting techniques. The basis determination part of the algorithm has been implemented in the Mathematica package BasisDet. The primary decomposition part can be readily carried out by algebraic geometry software, using the output of the package BasisDet. The algorithm works in both D = 4 and D = 4 - 2ɛ dimensions, and we present some two- and three-loop examples of applications of this algorithm.
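The Gröbner-basis step can be previewed with Mathematica's built-ins alone. Below is a toy sketch (an arbitrary ideal, not an actual unitarity-cut system) of reducing a polynomial modulo an ideal, which is the operation underlying the integrand basis determination:

    (* reduce a polynomial modulo the Groebner basis of an ideal *)
    ideal = {x^2 + y^2 - 1, x y - 1/4};               (* toy generators *)
    gb = GroebnerBasis[ideal, {x, y}];
    {quots, rem} = PolynomialReduce[x^3 y + y^2, gb, {x, y}];
    rem  (* canonical representative in the quotient ring *)

Every polynomial with the same remainder is equivalent on the cut, which is precisely why the remainder basis can serve as an integrand basis.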
Benard, Emmanuel; Michel, Christian J
2009-08-01
We present here the SEGM web server (Stochastic Evolution of Genetic Motifs) for studying the evolution of genetic motifs both in the direct evolutionary sense (past to present) and in the inverse evolutionary sense (present to past). The genetic motifs studied can be nucleotides, dinucleotides and trinucleotides. As an example of an application of SEGM, and to illustrate its functionalities, we give an analysis of inverse mutations of splice sites of human genome introns. SEGM is freely accessible at http://lsiit-bioinfo.u-strasbg.fr:8080/webMathematica/SEGM/SEGM.html directly or through the web site http://dpt-info.u-strasbg.fr/~michel/. To our knowledge, SEGM is to date the only computational biology software taking this evolutionary approach.
An Introduction to Quantum Theory
NASA Astrophysics Data System (ADS)
Greensite, Jeff
2017-02-01
Written in a lucid and engaging style, this book takes readers from an overview of classical mechanics and the historical development of quantum theory through to advanced topics. The mathematical aspects of quantum theory necessary for a firm grasp of the subject are developed in the early chapters, but an effort is made to motivate the formalism on physical grounds. Including animated figures and their respective Mathematica® codes, this book provides a complete and comprehensive text for students in physics, maths, chemistry and engineering needing an accessible introduction to quantum mechanics. Supplementary Mathematica codes are available with the book's online information.
Regenerable biocide delivery unit, volume 2
NASA Technical Reports Server (NTRS)
Atwater, James E.; Wheeler, Richard R., Jr.
1992-01-01
Source code for programs dealing with the following topics is presented: (1) life cycle test stand-parametric test stand control (in BASIC); (2) simultaneous aqueous iodine equilibria-true equilibrium (in C); (3) simultaneous aqueous iodine equilibria-pseudo-equilibrium (in C); (4) pseudo-(fast)-equilibrium with iodide initially present (in C); (5) solution of simultaneous iodine rate expressions (Mathematica); (6) 2nd-order kinetics of I2-formic acid in humidity condensate (Mathematica); (7) prototype RMCV onboard microcontroller (CAMBASIC); (8) prototype RAM data dump to PC (in BASIC); and (9) prototype real-time data transfer to PC (in BASIC).
Computation in Classical Mechanics with Easy Java Simulations (EJS)
NASA Astrophysics Data System (ADS)
Cox, Anne J.
2006-12-01
Let your students enjoy creating animations and incorporating some computational physics into your Classical Mechanics course. This talk will demonstrate the use of an Open Source Physics package, Easy Java Simulations (EJS), in an already existing sophomore/junior level Classical Mechanics course. EJS allows for incremental introduction of computational physics into existing courses because it is easy to use (for instructors and students alike) and it is open source. Students can use this tool for numerical solutions to problems (as they can with commercial systems: Mathcad and Mathematica), but they can also generate their own animations. For example, students in Classical Mechanics use Lagrangian mechanics to solve a problem, and then use EJS not only to numerically solve the differential equations, but to show the associated motion (and check their answers). EJS, developed by Francisco Esquembre (http://fem.um.es/Ejs/), is built on the OpenSource Physics framework (http://www.opensourcephysics.org/) supported through NSF DUE0442581.
NASA Astrophysics Data System (ADS)
Wu, Dongmei; Wang, Zhongcheng
2006-03-01
According to Mickens [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563], the general HB (harmonic balance) method is an approximation to the convergent Fourier series representation of the periodic solution of a nonlinear oscillator, and not an approximation to an expansion in terms of a small parameter. Consequently, for a nonlinear undamped Duffing equation with a driving force Bcos(ωx), to find a periodic solution when the fundamental frequency is identical to ω, the corresponding Fourier series can be written as ỹ(x) = Σ_{n=1}^{m} a_{2n-1} cos[(2n-1)ωx]. How to calculate the coefficients of this Fourier series efficiently with a computer program is still an open problem. In the HB method, by substituting the approximation ỹ(x) into the force equation, expanding the resulting expression into a trigonometric series, and then setting the coefficients of the lowest-order harmonics to zero, one can obtain approximate coefficients of the approximation ỹ(x) [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563]. But for nonlinear differential equations such as the Duffing equation, it is very difficult to construct higher-order analytical approximations, because the HB method requires solving a set of algebraic equations with very complex nonlinearities for a large number of unknowns. To overcome this difficulty, forty years ago Urabe derived a computational method for the Duffing equation based on the Galerkin procedure [M. Urabe, A. Reiter, Numerical computation of nonlinear forced oscillations by Galerkin's procedure, J. Math. Anal. Appl. 14 (1966) 107-140]. Van Dooren obtained an approximate solution of the Duffing oscillator with a special set of parameters by using Urabe's method [R. van Dooren, Stabilization of Cowell's classic finite difference method for numerical integration, J. Comput. Phys. 16 (1974) 186-192]. In this paper, in the frame of the general HB method, we present a new iteration algorithm to calculate the coefficients of the Fourier series. In this new method, the iteration procedure starts with a_1 cos(ωx) + b_1 sin(ωx), and the accuracy is improved gradually as the new coefficients a_3, a_5, … are produced automatically, one by one. At every stage of the calculation, we need only solve a cubic equation. Using this new algorithm, we have developed a Mathematica program, which demonstrates the following main advantages over the previous HB method: (1) it avoids solving a set of associated nonlinear equations; (2) it is easier to implement in a computer program, and it efficiently produces a highly accurate solution in analytical form. It is interesting to find that, generally, for a given set of parameters, a nonlinear Duffing equation can have three independent oscillation modes. For some sets of parameters, it can have two modes with complex displacement and one with real displacement; in other cases, it can have three modes, all with real displacement. Therefore, we can divide the parameters into two classes according to the solution property: those for which there is only one mode with real displacement, and those for which there are three modes with real displacement. This program should be useful for studying the dynamically periodic behavior of a Duffing oscillator, and it can provide a highly accurate approximate analytical solution for testing the error behavior of newly developed numerical methods over a wide range of parameters.
Program summary Title of program: AnalyDuffing.nb Catalogue identifier: ADWR_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWR_v1_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: none Computers: IBM PC Operating systems under which the program has been tested: Windows XP Programming language used: Mathematica 4.2, 5.0 and 5.1 No. of lines in distributed program, including test data, etc.: 23 663 No. of bytes in distributed program, including test data, etc.: 152 321 Distribution format: tar.gz Memory required to execute with typical data: 51 712 bytes No. of processors used: 1 Has the code been vectorized?: no Peripherals used: none Program Library subprograms used: none Nature of physical problem: To find an approximate solution with analytical expressions for the undamped nonlinear Duffing equation with a periodic driving force when the fundamental frequency is identical to the driving frequency. Method of solution: In the frame of the general HB method, a new iteration algorithm is used to calculate the coefficients of the Fourier series, yielding an approximate analytical solution of high accuracy efficiently. Restrictions on the complexity of the problem: For problems with a large driving frequency, convergence may be somewhat slow, because more iterations are needed. Typical running time: several seconds. Unusual features of the program: For an undamped Duffing equation, the program provides all the solutions (oscillation modes) with real displacement for any parameters of interest, to the required accuracy, efficiently. It can be used to study the dynamically periodic behavior of a nonlinear oscillator, and it provides a highly accurate approximate analytical solution for developing high-accuracy numerical methods.
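The "one cubic per stage" structure is visible already at the lowest order: for y'' + y + ε y³ = B cos(ωx), the one-term ansatz y = a cos(ωx) turns the fundamental-harmonic balance into a single cubic for a. A minimal Mathematica sketch with illustrative parameter values (not the program itself):

    (* lowest-order harmonic balance for the undamped forced Duffing equation *)
    cubic = a (1 - ω^2) + (3/4) eps a^3 == B;   (* coefficient of Cos[ω x] *)
    sol = Solve[cubic /. {ω -> 2, eps -> 1, B -> 1}, a];
    N[a /. sol]  (* three roots; here all real, i.e. three oscillation modes *)

For these parameters all three roots are real, matching the paper's observation that a Duffing equation can possess three independent oscillation modes with real displacement.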
Numerical simulation of NQR/NMR: Applications in quantum computing.
Possa, Denimar; Gaudio, Anderson C; Freitas, Jair C C
2011-04-01
A numerical simulation program able to simulate nuclear quadrupole resonance (NQR) as well as nuclear magnetic resonance (NMR) experiments is presented, written using the Mathematica package and aimed especially at applications in quantum computing. The program makes use of the interaction picture to compute the effect of the relevant nuclear spin interactions, without any assumption about the relative size of each interaction. This makes the program flexible and versatile, useful in a wide range of experimental situations, from NQR (at zero or under a small applied magnetic field) to high-field NMR experiments. Some conditions specifically required for quantum computing applications are implemented in the program, such as the possibility of using elliptically polarized radiofrequency fields and the inclusion of first- and second-order terms in the average Hamiltonian expansion. A number of examples dealing with simple NQR and quadrupole-perturbed NMR experiments are presented, along with proposals for experiments to create quantum pseudopure states and logic gates using NQR. The program and the various application examples are freely available through the link http://www.profanderson.net/files/nmr_nqr.php.
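The elementary propagation step such a simulator repeats, evolving the density matrix under a piecewise-constant Hamiltonian, fits in a few lines; a spin-1/2 Rabi toy in Mathematica (not the program's quadrupolar Hamiltonian) reads:

    (* density-matrix evolution rho(t) = U rho U† for a resonant pulse H = w1 Sx *)
    sx = {{0, 1}, {1, 0}}/2; sz = {{1, 0}, {0, -1}}/2;  (* spin-1/2 operators, hbar = 1 *)
    rho0 = {{1, 0}, {0, 0}};                            (* initial state |up><up| *)
    u[t_] := MatrixExp[-I w1 sx t];
    rho[t_] := u[t].rho0.ConjugateTranspose[u[t]];
    Simplify[Tr[rho[t].sz], Element[{w1, t}, Reals]]    (* -> Cos[w1 t]/2, Rabi oscillation *)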
Low cost paths to binary optics
NASA Technical Reports Server (NTRS)
Nelson, Arthur; Domash, Lawrence
1993-01-01
Application of binary optics has been limited to a few major laboratories because of the limited availability of fabrication facilities such as e-beam machines and the lack of standardized design software. Foster-Miller has attempted to identify low-cost approaches to medium-resolution binary optics using readily available computer and fabrication tools, primarily for the use of students and experimenters in optical computing. An early version of our system, MacBEEP, made use of an optimized laser film recorder from the commercial typesetting industry with 10 micron resolution. This report is an update on our current efforts to design and build a second-generation MacBEEP, which aims at 1 micron resolution and multiple phase levels. Trials included a low-cost scanning electron microscope in microlithography mode, and alternative laser inscribers or photomask generators. Our current software approach is based on Mathematica and PostScript compatibility.
Numerical method for computing Maass cusp forms on triply punctured two-sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, K. T.; Kamari, H. M.; Zainuddin, H.
2014-03-05
A quantum mechanical system on a punctured surface modeled on hyperbolic space has long been an important subject of research in mathematics and physics. The corresponding quantum system is governed by the Schrödinger equation, whose solutions are the Maass waveforms. The spectrum of these Maass waveforms is known to contain both continuous and discrete eigenvalues. The discrete eigenfunctions are usually called Maass Cusp Forms (MCF), and their discrete eigenvalues are not known analytically. We introduce a numerical method based on the algorithm of Hejhal and Then, using GridMathematica, for computing MCF on a punctured surface with three cusps, namely the triply punctured two-sphere. We also report on a pullback algorithm for the punctured surface and a point-locater algorithm to facilitate the complete pullback, which are essential parts of the main algorithm.
Experimental validation of docking and capture using space robotics testbeds
NASA Technical Reports Server (NTRS)
Spofford, John; Schmitz, Eric; Hoff, William
1991-01-01
This presentation describes the application of robotic and computer vision systems to validate docking and capture operations for space cargo transfer vehicles. Three applications are discussed: (1) air bearing systems in two dimensions that yield high-quality free-flying, flexible, and contact dynamics; (2) validation of docking mechanisms with misalignment and target dynamics; and (3) computer vision technology for target location and real-time tracking. All the testbeds are supported by a network of engineering workstations for dynamics and controls analyses. Dynamic simulations of multibody rigid and elastic systems are performed with the TREETOPS code. MATRIXx/System-Build and PRO-MATLAB/Simulab are the tools for control design and analysis using classical and modern techniques such as H-infinity and LQG/LTR. SANDY is a general design tool to numerically optimize a multivariable robust compensator with a user-defined structure. Mathematica and Macsyma are used to derive dynamic and kinematic equations symbolically.
UFO - The Universal FEYNRULES Output
NASA Astrophysics Data System (ADS)
Degrande, Céline; Duhr, Claude; Fuks, Benjamin; Grellscheid, David; Mattelaer, Olivier; Reiter, Thomas
2012-06-01
We present a new model format for automated matrix-element generators, the so-called Universal FEYNRULES Output (UFO). The format is universal in the sense that it features compatibility with more than one generator and is designed to be flexible, modular and agnostic of any assumption such as the number of particles or the color and Lorentz structures appearing in the interaction vertices. Unlike other model formats, where text files need to be parsed, the information on the model is encoded in a PYTHON module that can easily be linked to other computer codes. We then describe an interface for the MATHEMATICA package FEYNRULES that allows for an automatic output of models in the UFO format.
NASA Astrophysics Data System (ADS)
Baskonus, Haci Mehmet; Sulaiman, Tukur Abdulkadir; Bulut, Hasan
2017-10-01
In this paper, with the help of Wolfram Mathematica 9, we employ the powerful sine-Gordon expansion method to investigate the solution structures of two well-known nonlinear evolution equations, namely the Calogero-Bogoyavlenskii-Schiff and Kadomtsev-Petviashvili hierarchy equations. We obtain new solutions with complex, hyperbolic and trigonometric function structures. All the obtained solutions verify their corresponding equations. We also plot the three- and two-dimensional graphics of all the obtained solutions using the same program in Wolfram Mathematica 9. We finally present a comprehensive conclusion.
NASA Astrophysics Data System (ADS)
Athron, Peter; Balazs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Dickinson, Hugh; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Lundberg, Johan; McKay, James; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Ripken, Joachim; Rogan, Christopher; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Seo, Seon-Hee; Serra, Nicola; Weniger, Christoph; White, Martin; Wild, Sebastian
2018-02-01
In Ref. (GAMBIT Collaboration: Athron et al., Eur. Phys. J. C, arXiv:1705.07908, 2017) we introduced the global-fitting framework GAMBIT. In this addendum, we describe a new minor version increment of this package. GAMBIT 1.1 includes full support for Mathematica backends, which we describe in some detail here. As an example, we backend SUSYHD (Vega and Villadoro, JHEP 07:159, 2015), which calculates the mass of the Higgs boson in the MSSM from effective field theory. We also describe updated likelihoods in PrecisionBit and DarkBit, and updated decay data included in DecayBit.
Exploratory analysis of environmental interactions in central California
De Cola, Lee; Falcone, Neil L.
1996-01-01
As part of its global change research program, the United States Geological Survey (USGS) has produced raster data that describe the land cover of the United States using a consistent format. The data consist of elevations, satellite measurements, computed vegetation indices, land cover classes, and ancillary political, topographic and hydrographic information. This open-file report uses some of these data to explore the environment of a (256-km)² region of central California. We present various visualizations of the data, multiscale correlations between topography and vegetation, a path analysis of more complex statistical interactions, and a map that portrays the influence of agriculture on the region's vegetation. An appendix contains the C and Mathematica code used to generate the graphics and some of the analysis.
NASA Astrophysics Data System (ADS)
Ozbasaran, Hakan
Trusses have an important place amongst engineering structures due to advantages such as high structural efficiency, fast assembly and easy maintenance. Iterative truss design procedures that require the analysis of a large number of candidate structural systems, such as size, shape and topology optimization with stochastic methods, mostly lead the engineer to establish a link between the development platform and external structural analysis software. As the number of structural analyses increases, this (probably slow-response) link may climb to the top of the list of performance issues. This paper introduces software for the static, global member buckling and frequency analysis of 2D and 3D trusses to overcome this problem for Mathematica users.
HolT Hunter: Software for Identifying and Characterizing Low-Strain DNA Holliday Triangles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sherman W. B.
2012-06-05
Synthetic DNA nanostructures are most commonly held together via Holliday junctions. These junctions allow for a wide variety of different angles between the double helices they connect. Nevertheless, only constructs with a very limited selection of angles have been built to date, because of the computational complexity of identifying structures that fit together with low strain at odd angles. I have developed an algorithm that finds over 95% of the possible solutions by breaking the problem down into two portions. First, there is the problem of how smooth rods can form triangles by lying across one another; this problem is easily handled by numerical computation. Second, there is the question of how distorted DNA double helices would need to be to fit onto the rod structure; this strain is calculated directly. The algorithm has been implemented in a Mathematica 8 notebook called Holliday Triangle Hunter. A large database of solutions has been identified. Additional interface software is available to facilitate drawing and viewing models.
Wolfram technologies as an integrated scalable platform for interactive learning
NASA Astrophysics Data System (ADS)
Kaurov, Vitaliy
2012-02-01
We rely on technology profoundly, with the prospect of even greater integration in the future. Well-known challenges in education are a technology-inadequate curriculum and many software platforms that are difficult to scale or interconnect. We review an integrated technology, much of it free, that addresses these issues for individuals and small schools as well as for universities. Topics include: Mathematica, a programming environment that offers a diverse range of functionality; natural language programming for getting started quickly and accessing data from Wolfram|Alpha; quick and easy construction of interactive courseware and scientific applications; partnering with publishers to create interactive e-textbooks; course assistant apps for mobile platforms; the computable document format (CDF); and teacher-student and student-student collaboration on interactive projects and web publishing at the Wolfram Demonstrations site.
Uncovering Oscillations, Complexity, and Chaos in Chemical Kinetics Using Mathematica
NASA Astrophysics Data System (ADS)
Ferreira, M. M. C.; Ferreira, W. C., Jr.; Lino, A. C. S.; Porto, M. E. G.
1999-06-01
Unlike reactions with no peculiar temporal behavior, in oscillatory reactions concentrations can rise and fall spontaneously in a cyclic or disorganized fashion. In this article, the software Mathematica is used for a theoretical study of the kinetic mechanisms of oscillating and chaotic reactions. A first simple example is introduced through a three-step reaction, called the Lotka model, which exhibits a temporal behavior characterized by damped oscillations. The phase plane method of dynamic systems theory is introduced for a geometric interpretation of the reaction kinetics without solving the differential rate equations. The equations are later numerically solved using the built-in routine NDSolve and the results are plotted. The next example, still with a very simple mechanism, is the Lotka-Volterra model reaction, which oscillates indefinitely. The kinetic process and rate equations are also represented by a three-step reaction mechanism. The most important difference between this and the former reaction is that the undamped oscillation has two autocatalytic steps instead of one. The periods of the oscillations are obtained by using the discrete Fourier transform (DFT), a well-known tool in spectroscopy, although not so common in this context. In the last section, it is shown how a simple model of biochemical interactions can be useful to understand the complex behavior of important biological systems. The model consists of two allosteric enzymes coupled in series and activated by their own products. This reaction scheme is important for explaining many metabolic mechanisms, such as the glycolytic oscillations in muscles, yeast glycolysis, and the periodic synthesis of cyclic AMP. A few of many possible dynamic behaviors are exemplified through a prototype glycolytic enzymatic reaction proposed by Decroly and Goldbeter. By simply modifying the initial concentrations, limit cycles, chaos, and birhythmicity are computationally obtained and visualized.
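A minimal version of the article's second example shows how little code is involved: the Lotka-Volterra rate equations integrated with NDSolve and viewed in the phase plane (rate constants chosen arbitrarily for illustration):

    (* Lotka-Volterra kinetics: A + X -> 2X, X + Y -> 2Y, Y -> P, with [A] held constant *)
    k1 = 1; k2 = 1; k3 = 1; a0 = 1;
    sol = NDSolve[{x'[t] == k1 a0 x[t] - k2 x[t] y[t],
                   y'[t] == k2 x[t] y[t] - k3 y[t],
                   x[0] == 2, y[0] == 1}, {x, y}, {t, 0, 30}];
    ParametricPlot[Evaluate[{x[t], y[t]} /. sol], {t, 0, 30}]  (* closed orbit: undamped oscillation *)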
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-04
... Donald Howard, (410) 786-6764, Hospital Value-Based Purchasing (VBP) Program Issues. SUPPLEMENTARY... analyses performed by Brandeis University and Mathematica Policy Research together despite their slightly...
Another Program For Generating Interactive Graphics
NASA Technical Reports Server (NTRS)
Costenbader, Jay; Moleski, Walt; Szczur, Martha; Howell, David; Engelberg, Norm; Li, Tin P.; Misra, Dharitri; Miller, Philip; Neve, Leif; Wolf, Karl;
1991-01-01
VAX/Ultrix version of Transportable Applications Environment Plus (TAE+) computer program provides integrated, portable software environment for developing and running interactive window, text, and graphical-object-based application software systems. Enables programmer or nonprogrammer to construct easily custom software interface between user and application program and to move resulting interface program and its application program to different computers. When used throughout a company for wide range of applications, makes both application program and computer seem transparent, with noticeable improvements in learning curve. Available in forms suitable for the following six groups of computers: DEC VAX station and other VMS VAX computers, Macintosh II computers running AUX, Apollo Domain Series 3000, DEC VAX and reduced-instruction-set-computer workstations running Ultrix, Sun 3- and 4-series workstations running Sun OS and IBM RT/PC's and PS/2 computers running AIX, and HP 9000 S
A Sommerfeld toolbox for colored dark sectors
NASA Astrophysics Data System (ADS)
El Hedri, Sonia; Kaminska, Anna; de Vries, Maikel
2017-09-01
We present analytical formulas for the Sommerfeld corrections to the annihilation of massive colored particles into quarks and gluons through the strong interaction. These corrections are essential to accurately compute the dark matter relic density for coannihilation with colored partners. Our formulas allow us to compute the Sommerfeld effect, not only for the lowest term in the angular momentum expansion of the amplitude, but for all orders in the partial wave expansion. In particular, we carefully account for the effects of the spin of the annihilating particle on the symmetry of the two-particle wave function. This work focuses on strongly interacting particles of arbitrary spin in the triplet, sextet and octet color representations. For typical velocities during freeze-out, we find that including Sommerfeld corrections on the next-to-leading order partial wave leads to modifications of up to 10 to 20 percent on the total annihilation cross section. Complementary to QCD, we generalize our results to particles charged under an arbitrary unbroken SU(N) gauge group, as encountered in dark glueball models. In connection with this paper a Mathematica notebook is provided to compute the Sommerfeld corrections for colored particles up to arbitrary order in the angular momentum expansion.
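For orientation, the familiar s-wave Coulomb enhancement, the simplest ingredient that the paper generalizes to higher partial waves and to the various color channels, is a one-liner; the convention for the velocity variable below is an assumption for illustration:

    (* s-wave Sommerfeld enhancement for an attractive Coulomb potential V = -alpha/r *)
    s0[alpha_, v_] := (Pi alpha/v)/(1 - Exp[-Pi alpha/v]);
    s0[0.1, 0.1]  (* ~3.3 at alpha/v = 1; illustrative numbers *)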
Automation of the guiding center expansion
NASA Astrophysics Data System (ADS)
Burby, J. W.; Squire, J.; Qin, H.
2013-07-01
We report on the use of the recently developed Mathematica package VEST (Vector Einstein Summation Tools) to automatically derive the guiding center transformation. Our Mathematica code employs a recursive procedure to derive the transformation order-by-order. This procedure has several novel features. (1) It is designed to allow the user to easily explore the guiding center transformation's numerous non-unique forms or representations. (2) The procedure proceeds entirely in cartesian position and velocity coordinates, thereby producing manifestly gyrogauge invariant results; the commonly used perpendicular unit vector fields e1,e2 are never even introduced. (3) It is easy to apply in the derivation of higher-order contributions to the guiding center transformation without fear of human error. Our code therefore stands as a useful tool for exploring subtle issues related to the physics of toroidal momentum conservation in tokamaks.
Automation of The Guiding Center Expansion
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. W. Burby, J. Squire and H. Qin
2013-03-19
We report on the use of the recently-developed Mathematica package VEST (Vector Einstein Summation Tools) to automatically derive the guiding center transformation. Our Mathematica code employs a recursive procedure to derive the transformation order-by-order. This procedure has several novel features. (1) It is designed to allow the user to easily explore the guiding center transformation's numerous nonunique forms or representations. (2) The procedure proceeds entirely in cartesian position and velocity coordinates, thereby producing manifestly gyrogauge invariant results; the commonly-used perpendicular unit vector fields e1, e2 are never even introduced. (3) It is easy to apply in the derivation of higher-order contributions to the guiding center transformation without fear of human error. Our code therefore stands as a useful tool for exploring subtle issues related to the physics of toroidal momentum conservation in tokamaks.
Methods for Modeling Brassinosteroid-Mediated Signaling in Plant Development.
Frigola, David; Caño-Delgado, Ana I; Ibañes, Marta
2017-01-01
Mathematical modeling of biological processes is a useful tool to draw conclusions that are contained in the data but not directly reachable, as well as to make predictions and select the most efficient follow-up experiments. Here we outline a method to model systems of a few proteins that interact transcriptionally and/or posttranscriptionally, by representing the system as ordinary differential equations, and to study the model dynamics and stationary states. We exemplify this method by focusing on the regulation by the brassinosteroid (BR) signaling component BRASSINOSTEROID INSENSITIVE1 ETHYL METHYL SULFONATE SUPPRESSOR1 (BES1) of BRAVO, a quiescence-regulating transcription factor expressed in the quiescent cells of Arabidopsis thaliana roots. The method to extract the stationary states and the dynamics is provided as Mathematica code and requires basic knowledge of the Mathematica software to execute.
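As a hedged sketch of the workflow described, consider a hypothetical one-variable model in which BES1 represses BRAVO production (Hill repression plus linear decay); all names and parameter values below are illustrative, not those of the chapter:

    (* hypothetical BES1 -| BRAVO model: db/dt = beta/(1 + (bes1/k)^n) - delta b *)
    rhs[b_] := beta/(1 + (bes1/k)^n) - delta b;
    pars = {beta -> 1, bes1 -> 0.5, k -> 1, n -> 2, delta -> 0.2};
    Solve[(rhs[b] /. pars) == 0, b]                        (* stationary state, here b = 4 *)
    sol = NDSolve[{b'[t] == (rhs[b[t]] /. pars), b[0] == 0}, b, {t, 0, 50}];
    Plot[Evaluate[b[t] /. sol], {t, 0, 50}]                (* relaxation to the fixed point *)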
NASA Astrophysics Data System (ADS)
Arshad, Muhammad; Lu, Dianchen; Wang, Jun
2017-07-01
In this paper, we extend the fractional reduced differential transform method (DTM) to the (N+1)-dimensional case, so that fractional-order partial differential equations (PDEs) can be solved effectively. The most distinct aspect of this method is that no prescribed assumptions are required, the heavy computational effort is reduced, and round-off errors are avoided. We apply the proposed scheme to some initial value problems and obtain approximate numerical solutions of linear and nonlinear time-fractional PDEs, which shows that the method is highly accurate and simple to apply. The proposed technique is thus a powerful tool for solving fractional PDEs and fractional-order problems occurring in engineering, physics, etc. Numerical results are obtained for verification and demonstration purposes using the Mathematica software.
NASA Astrophysics Data System (ADS)
Ilhan, O. A.; Bulut, H.; Sulaiman, T. A.; Baskonus, H. M.
2018-02-01
In this study, the modified exp(-Φ(η))-expansion function method is used to construct some solitary wave solutions to the Oskolkov-Benjamin-Bona-Mahony-Burgers and one-dimensional Oskolkov equations and the Dodd-Bullough-Mikhailov equation. We successfully construct some singular solitons and singular periodic wave solutions with hyperbolic, trigonometric and exponential function structures for these three nonlinear models. For suitable values of the parameters involved, we plot 2D and 3D graphics of some of the obtained solutions. All the obtained solutions verify their corresponding equations. We perform all the computations in this study with the help of the Wolfram Mathematica software. The obtained solutions may be helpful in explaining some practical physical problems.
Linear homotopy solution of nonlinear systems of equations in geodesy
NASA Astrophysics Data System (ADS)
Paláncz, Béla; Awange, Joseph L.; Zaletnyik, Piroska; Lewis, Robert H.
2010-01-01
A fundamental task in geodesy is solving systems of equations. Many geodetic problems are represented as systems of multivariate polynomials. A common problem in solving such systems is improper initial starting values for iterative methods, leading to convergence to solutions with no physical meaning or to convergence that requires global methods. Though symbolic methods such as Groebner bases or resultants have been shown to be very efficient, e.g., providing solutions for determined systems such as the 3-point problem of the 3D affine transformation, the symbolic algebra can be very time consuming, even with special Computer Algebra Systems (CAS). This study proposes the linear homotopy method, which can be implemented easily in high-level computer languages like C++ and Fortran that are faster than CAS by at least two orders of magnitude. Using Mathematica, the power of homotopy is demonstrated in solving three nonlinear geodetic problems: resection, GPS positioning, and affine transformation. The method, which enlarges the domain of convergence, is found to be efficient, less sensitive to rounding of numbers, and of lower complexity compared to other local methods like Newton-Raphson.
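The core of the linear homotopy idea fits in a few lines: deform a start system g with a known root into the target system f, and track the root by integrating the Davidenko differential equation. A scalar toy sketch (the geodetic systems in the paper are multivariate, but the mechanics are the same):

    (* linear homotopy h(x,t) = (1-t) g(x) + t f(x), tracked by the Davidenko ODE *)
    f[x_] := x^2 - 2;   (* target equation f(x) == 0 *)
    g[x_] := x^2 - 1;   (* start system with known root x = 1 *)
    h[x_, t_] := (1 - t) g[x] + t f[x];
    ode = x'[t] == -(D[h[u, t], t] /. u -> x[t])/(D[h[u, t], u] /. u -> x[t]);
    sol = NDSolve[{ode, x[0] == 1}, x, {t, 0, 1}];
    x[1] /. First[sol]  (* -> 1.41421..., the target root Sqrt[2] *)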
A Computational Model for Predicting Gas Breakdown
NASA Astrophysics Data System (ADS)
Gill, Zachary
2017-10-01
Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with regards to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.
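The single-electron tracking underlying such a model amounts to integrating the Lorentz-force equation in prescribed fields; a sketch with uniform illustrative fields (the actual model would use the thruster's time-varying, geometry-dependent fields) is:

    (* electron trajectory from m r'' = q (E + r' x B), SI units, uniform fields assumed *)
    q = -1.602*^-19; me = 9.109*^-31;
    eF = {0, 0, 1.*^3}; bF = {0, 1.*^-2, 0};
    vel = {x'[t], y'[t], z'[t]};
    eqs = Thread[me {x''[t], y''[t], z''[t]} == q (eF + Cross[vel, bF])];
    ics = {x[0] == 0, y[0] == 0, z[0] == 0, x'[0] == 1.*^5, y'[0] == 0, z'[0] == 0};
    sol = First@NDSolve[Join[eqs, ics], {x, y, z}, {t, 0, 5.*^-8}];
    ParametricPlot3D[Evaluate[{x[t], y[t], z[t]} /. sol], {t, 0, 5.*^-8}]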
Scalar curvature in conformal geometry of Connes-Landi noncommutative manifolds
NASA Astrophysics Data System (ADS)
Liu, Yang
2017-11-01
We first propose a conformal geometry for Connes-Landi noncommutative manifolds and study the associated scalar curvature. The new scalar curvature contains its Riemannian counterpart as the commutative limit. Similar to the results on noncommutative two tori, the quantum part of the curvature consists of actions of the modular derivation through two local curvature functions. Explicit expressions for those functions are obtained for all even dimensions (greater than two). In dimension four, the one-variable function shows striking similarity to the analytic functions of the characteristic classes appearing in the Atiyah-Singer local index formula, namely, it is roughly a product of the j-function (which defines the Â-class of a manifold) and an exponential function (which defines the Chern character of a bundle). By performing two different computations for the variation of the Einstein-Hilbert action, we obtain deep internal relations between the two local curvature functions. Straightforward verification of those relations gives a strong conceptual confirmation for the whole computational machinery we have developed so far, especially the Mathematica code hidden behind the paper.
Automation of the guiding center expansion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burby, J. W.; Squire, J.; Qin, H.
2013-07-15
We report on the use of the recently developed Mathematica package VEST (Vector Einstein Summation Tools) to automatically derive the guiding center transformation. Our Mathematica code employs a recursive procedure to derive the transformation order-by-order. This procedure has several novel features. (1) It is designed to allow the user to easily explore the guiding center transformation's numerous non-unique forms or representations. (2) The procedure proceeds entirely in cartesian position and velocity coordinates, thereby producing manifestly gyrogauge invariant results; the commonly used perpendicular unit vector fields e1, e2 are never even introduced. (3) It is easy to apply in the derivation of higher-order contributions to the guiding center transformation without fear of human error. Our code therefore stands as a useful tool for exploring subtle issues related to the physics of toroidal momentum conservation in tokamaks.
Automatic generation of user material subroutines for biomechanical growth analysis.
Young, Jonathan M; Yao, Jiang; Ramasubramanian, Ashok; Taber, Larry A; Perucchio, Renato
2010-10-01
The analysis of the biomechanics of growth and remodeling in soft tissues requires the formulation of specialized pseudoelastic constitutive relations. The nonlinear finite element analysis package ABAQUS allows the user to implement such specialized material responses through the coding of a user material subroutine called UMAT. However, hand coding UMAT subroutines is a challenge even for simple pseudoelastic materials and requires substantial time to debug and test the code. To resolve this issue, we develop an automatic UMAT code generation procedure for pseudoelastic materials using the symbolic mathematics package MATHEMATICA and extend the UMAT generator to include continuum growth. The performance of the automatically coded UMAT is tested by simulating the stress-stretch response of a material defined by a Fung-orthotropic strain energy function, subject to uniaxial stretching, equibiaxial stretching, and simple shear in ABAQUS. The MATHEMATICA UMAT generator is then extended to include continuum growth by adding a growth subroutine to the automatically generated UMAT. The MATHEMATICA UMAT generator correctly derives the variables required in the UMAT code, quickly providing a ready-to-use UMAT. In turn, the UMAT accurately simulates the pseudoelastic response. In order to test the growth UMAT, we simulate the growth-based bending of a bilayered bar with differing fiber directions in a nongrowing passive layer. The anisotropic passive layer, being topologically tied to the growing isotropic layer, causes the bending bar to twist laterally. The results of simulations demonstrate the validity of the automatically coded UMAT, used in both standardized tests of hyperelastic materials and for a biomechanical growth analysis.
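The symbolic kernel of such a generator, differentiating a strain-energy function to obtain stress and tangent, is easy to sketch; here a neo-Hookean energy stands in for the Fung-orthotropic law used in the paper:

    (* second Piola-Kirchhoff stress S = 2 dW/dC by symbolic differentiation *)
    cm = Array[c, {3, 3}];         (* components of the right Cauchy-Green tensor C *)
    w = c1 (Tr[cm] - 3);           (* stand-in strain energy W(C); c1 is a material constant *)
    s = 2 D[w, {cm}]               (* -> 2 c1 IdentityMatrix[3] *)
    dd = 4 D[w, {cm}, {cm}];       (* rank-4 tangent; zero here since W is linear in C *)

A generator in this spirit would then translate s and dd into the Cauchy stress and Jacobian arrays that the UMAT interface expects.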
The Weekly Fab Five: Things You Should Do Every Week To Keep Your Computer Running in Tip-Top Shape.
ERIC Educational Resources Information Center
Crispen, Patrick
2001-01-01
Describes five steps that school librarians should follow every week to keep their computers running at top efficiency. Explains how to update virus definitions; run Windows update; run ScanDisk to repair errors on the hard drive; run a disk defragmenter; and backup all data. (LRW)
By Stuart G. Baker, 2017. Introduction: This software fits a zero-intercept random-effects linear model to data on surrogate and true endpoints in previous trials. Requirement: Mathematica Version 11 or later.
Computing Maximally Supersymmetric Scattering Amplitudes
NASA Astrophysics Data System (ADS)
Stankowicz, James Michael, Jr.
This dissertation reviews work in computing N = 4 super-Yang-Mills (sYM) and N = 8 maximally supersymmetric gravity (mSUGRA) scattering amplitudes in D = 4 spacetime dimensions in novel ways. After a brief introduction and overview in Ch. 1, the various techniques used to construct amplitudes in the remainder of the dissertation are discussed in Ch. 2. This includes several new concepts such as d log and pure integrand bases, as well as how to construct the amplitude using exactly one kinematic point where it vanishes. Also included in this chapter is an outline of the Mathematica package on shell diagrams and numerics.m (osdn) that was developed for the computations herein. The rest of the dissertation is devoted to explicit examples. In Ch. 3, the starting point is tree-level sYM amplitudes that have integral representations with residues that obey amplitude relations. These residues are shown to have corresponding residue numerators that allow a double copy prescription that results in mSUGRA residues. In Ch. 4, the two-loop four-point sYM amplitude is constructed in several ways, showcasing many of the techniques of Ch. 2; this includes an example of how to use osdn. The two-loop five-point amplitude is also presented in a pure integrand representation with comments on how it was constructed from one homogeneous cut of the amplitude. On-going work on the two-loop n-point amplitude is presented at the end of Ch. 4. In Ch. 5, the three-loop four-point amplitude is presented in the d log representation and in the pure integrand representation. In Ch. 6, there are several examples of four- through seven-loop planar diagrams that illustrate how considerations of the singularity structure of the amplitude underpin dual-conformal invariance. Taken with the previous examples, this is additional evidence that the structure known to exist in the planar sector extends to the full theory. At the end of this chapter is a proof that all mSUGRA amplitudes have a pole at infinity for (L ≥ 4) loops. Finally, in Ch. 7, the current status of ultraviolet divergences in the five-loop four-point mSUGRA amplitude is addressed. This includes a discussion of ongoing work aimed at resolving the mSUGRA finiteness question. The following Mathematica scripts are submitted with this dissertation: on shell diagrams and numerics.m, with dependencies all_trees *.m, external_kinematics_*_point.m, and rational_external_*_point.m, where "*" is a wild-card string (either an integer or a number spelled out).
Program For Generating Interactive Displays
NASA Technical Reports Server (NTRS)
Costenbader, Jay; Moleski, Walt; Szczur, Martha; Howell, David; Engelberg, Norm; Li, Tin P.; Misra, Dharitri; Miller, Philip; Neve, Leif; Wolf, Karl;
1991-01-01
Sun/Unix version of Transportable Applications Environment Plus (TAE+) computer program provides integrated, portable software environment for developing and running interactive window, text, and graphical-object-based application software systems. Enables programmer or nonprogrammer to construct easily custom software interface between user and application program and to move resulting interface program and its application program to different computers. TAE+ is viewed as productivity tool for application developers and application end users, who benefit from resulting consistent and well-designed user interface sheltering them from intricacies of computer. Available in forms suitable for the following six groups of computers: DEC VAX station and other VMS VAX computers, Macintosh II computers running AUX, Apollo Domain Series 3000, DEC VAX and reduced-instruction-set-computer workstations running Ultrix, Sun 3- and 4-series workstations running Sun OS and IBM RT/PC and PS/2 compute
A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations
NASA Astrophysics Data System (ADS)
Demir, I.; Agliamzanov, R.
2014-12-01
Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of millions of computers on the Internet, and to use them to run large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Users can easily enable their websites so that visitors can volunteer their computer resources to run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational pieces. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.
75 FR 29368 - Agency Information Collection Activities: Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-25
... made available to researchers on CD-ROM and on the World Wide Web. Mathematica Policy Research will... NATIONAL SCIENCE FOUNDATION Agency Information Collection Activities: Comment Request AGENCY: National Science Foundation. ACTION: Submission for OMB review; comment request. SUMMARY: The National...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herbert, John M.
1997-01-01
Rayleigh-Schroedinger perturbation theory is an effective and popular tool for describing low-lying vibrational and rotational states of molecules. This method, in conjunction with ab initio techniques for computation of electronic potential energy surfaces, can be used to calculate first-principles molecular vibrational-rotational energies to successive orders of approximation. Because of mathematical complexities, however, such perturbation calculations are rarely extended beyond the second order of approximation, although recent work by Herbert has provided a formula for the nth-order energy correction. This report extends that work and furnishes the remaining theoretical details (including a general formula for the Rayleigh-Schroedinger expansion coefficients) necessary for calculation of energy corrections to arbitrary order. The commercial computer algebra software Mathematica is employed to perform the prohibitively tedious symbolic manipulations necessary for derivation of generalized energy formulae in terms of universal constants, molecular constants, and quantum numbers. As a pedagogical example, a Hamiltonian operator tailored specifically to diatomic molecules is derived, and the perturbation formulae obtained from this Hamiltonian are evaluated for a number of such molecules. This work provides a foundation for future analyses of polyatomic molecules, since it demonstrates that arbitrary-order perturbation theory can successfully be applied with the aid of commercially available computer algebra software.
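The flavor of the computer-algebra workflow can be seen in miniature with Mathematica's Series applied to an exactly solvable two-level problem, which reproduces the Rayleigh-Schroedinger corrections order by order (a toy, not the report's vibrational-rotational Hamiltonian):

    (* perturbation series for H = H0 + lambda V on two levels *)
    ev = Eigenvalues[{{e1, lambda g}, {lambda g, e2}}];
    Simplify[Series[ev[[2]], {lambda, 0, 4}], e1 > e2]
    (* -> e1 + g^2 lambda^2/(e1 - e2) - g^4 lambda^4/(e1 - e2)^3 + O(lambda^5);
       if the symbolic root ordering differs, pick the branch that tends to e1 *)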
[Activities of the Department of Electrical Engineering, Howard University]
NASA Technical Reports Server (NTRS)
Yalamanchili, Raj C.
1997-01-01
Theoretical derivations, computer analysis and test data are provided to demonstrate that the cavity model is a feasible one to analyze thin-substrate, rectangular-patch microstrip antennas. Seven separate antennas were tested. Most of the antennas were designed to resonate at L-band frequencies (1-2 GHz). One antenna was designed to resonate at an S-band (2-4 GHz) frequency of 2.025 GHz. All dielectric substrates were made of Duroid, and were of varying thicknesses and relative dielectric constant values. Theoretical derivations to calculate radiated free space electromagnetic fields and antenna input impedance were performed. MATHEMATICA 2.2 software was used to generate Smith Chart input impedance plots, normalized relative power radiation plots and to perform other numerical manipulations. Network Analyzer tests were used to verify the data from the computer programming (such as input impedance and VSWR). Finally, tests were performed in an anechoic chamber to measure receive-mode polar power patterns in the E and H planes. Agreement between computer analysis and test data is presented. The antenna with the thickest substrate (εr = 2.33, 62 mils thick) showed the worst match to theoretical impedance data. This is anticipated due to the fact that the cavity model generally loses accuracy when the dielectric substrate thickness exceeds 5% of the antenna's free space wavelength. A method of reducing computer execution time for impedance calculations is also presented.
High effective inverse dynamics modelling for dual-arm robot
NASA Astrophysics Data System (ADS)
Shen, Haoyu; Liu, Yanli; Wu, Hongtao
2018-05-01
To deal with the problem of inverse dynamics modelling for a dual-arm robot, a recursive inverse dynamics modelling method based on the decoupled natural orthogonal complement is presented. In this model, the concepts and methods of decoupled natural orthogonal complement matrices are used to eliminate the constraint forces in the Newton-Euler kinematic equations, and screws are used to express the kinematic and dynamic variables. On this basis, we developed a dedicated simulation program with the symbolic software Mathematica and conducted a simulation study of a dual-arm robot. Simulation results show that the proposed method based on the decoupled natural orthogonal complement saves an enormous amount of CPU time compared with the recursive Newton-Euler kinematic equations, and that the results are correct and reasonable, verifying the reliability and efficiency of the method.
Kratzer, Markus; Lasnik, Michael; Röhrig, Sören; Teichert, Christian; Deluca, Marco
2018-01-11
Lead zirconate titanate (PZT) is one of the prominent materials used in polycrystalline piezoelectric devices. Since the ferroelectric domain orientation is the most important parameter affecting the electromechanical performance, analyzing the domain orientation distribution is of great importance for the development and understanding of improved piezoceramic devices. Here, vector piezoresponse force microscopy (vector-PFM) has been applied in order to reconstruct the ferroelectric domain orientation distribution function of polished sections of device-ready polycrystalline lead zirconate titanate (PZT) material. A measurement procedure and a computer program based on the software Mathematica have been developed to automatically evaluate the vector-PFM data for reconstructing the domain orientation function. The method is tested on differently in-plane and out-of-plane poled PZT samples, and the results reveal the expected domain patterns and allow determination of the polarization orientation distribution function at high accuracy.
The effect of gas and fluid flows on nonlinear lateral vibrations of rotating drill strings
NASA Astrophysics Data System (ADS)
Khajiyeva, Lelya; Kudaibergenov, Askar; Kudaibergenov, Askat
2018-06-01
In this work, we develop nonlinear mathematical models describing the coupled lateral vibrations of a rotating drill string under the effect of an external supersonic gas flow and an internal fluid flow. An axial compressive load and a torque also act on the drill string. The mathematical models are derived using Novozhilov's nonlinear theory of elasticity together with Hamilton's variational principle. Expressions for the gas flow pressure are determined according to piston theory. The fluid flow is treated as an added mass inside the curved tube of the drill string. Using an algorithm developed in Mathematica on the basis of the Galerkin approach and the stiffness-switching method, the numerical solution of the resulting approximate differential equations is found. The influences of the external loads, the drill string's angular speed of rotation, and the parameters of the gas and fluid flows on the drill string vibrations are shown.
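The stiffness-switching integration mentioned above is available directly as an NDSolve option; a generic sketch on a stiff test equation (not the drill-string system itself):

    (* automatic switching between stiff and non-stiff integrators in NDSolve *)
    sol = NDSolve[{y'[t] == -1000 (y[t] - Cos[t]), y[0] == 0}, y, {t, 0, 10},
       Method -> "StiffnessSwitching"];
    Plot[Evaluate[y[t] /. sol], {t, 0, 10}]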
A Maxwell-Schrödinger solver for quantum optical few-level systems
NASA Astrophysics Data System (ADS)
Fleischhaker, Robert; Evers, Jörg
2011-03-01
The msprop program presented in this work is capable of solving the Maxwell-Schrödinger equations for one or several laser fields propagating through a medium of quantum optical few-level systems in one spatial dimension and in time. In particular, it allows one to treat numerically systems in which a laser field interacts with the medium through both its electric and magnetic components at the same time. The internal dynamics of the few-level system is modeled by a quantum optical master equation, which includes coherent processes due to optical transitions driven by the laser fields as well as incoherent processes due to decay and dephasing. The propagation dynamics of the laser fields is treated in the slowly varying envelope approximation, resulting in a first-order wave equation for each laser field envelope function. The program employs an Adams predictor formula, second order in time, to integrate the quantum optical master equation and a Lax-Wendroff scheme, second order in space and time, to evolve the wave equations for the fields. The source function in the Lax-Wendroff scheme is specifically adapted to allow taking into account the simultaneous coupling of a laser field to the polarization and the magnetization of the medium. To reduce execution time, a customized data structure is implemented and explained. In three examples the features of the program are demonstrated and the treatment of a system with a phase-dependent cross coupling of the electric and magnetic field components of a laser field is shown. Program summary Program title: msprop Catalogue identifier: AEHR_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHR_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 507 625 No. of bytes in distributed program, including test data, etc.: 10 698 552 Distribution format: tar.gz Programming language: C (C99 standard), Mathematica, bash script, gnuplot script Computer: Tested on x86 architecture Operating system: Unix/Linux environment RAM: Less than 30 MB Classification: 2.5 External routines: Standard C math library; accompanying bash script uses gnuplot, bc (basic calculator), and convert (ImageMagick) Nature of problem: We consider a system of quantum optical few-level atoms exposed to several near-resonant continuous-wave or pulsed laser fields. The complexity of the problem arises from the combination of the coherent and incoherent time evolution of the atoms and its dependence on the spatially varying fields. In systems with a coupling to the electric and magnetic field components, the simultaneous treatment of both field components poses an additional challenge. Studying the system dynamics requires solving the quantum optical master equation coupled to the wave equations governing the spatio-temporal dynamics of the fields [1,2]. Solution method: We numerically integrate the equations of motion using a second-order Adams predictor method for the time evolution of the atomic density matrix and a second-order Lax-Wendroff scheme for iterating the fields in space [3]. For the Lax-Wendroff scheme, the source function is adapted such that a simultaneous coupling to the polarization and the magnetization of the medium can be taken into account.
Restrictions: The evolution of the fields is treated in the slowly varying envelope approximation [2], such that variations of the fields in space and time must be on a scale larger than the wavelength and the optical cycle. Propagation is restricted to the forward direction and to one dimension. Concerning the description of the atomic system, only a finite number of basis states can be treated, and the laser-driven transitions have to be near-resonant such that the rotating-wave approximation can be applied [2]. Unusual features: The program allows the dipole interaction of both the electric and the magnetic component of a laser field to be taken into account at the same time. Thus, a system with a phase-dependent cross coupling of the electric and magnetic field components can be treated (see Section 4.2 and [4]). The data structure has been optimized for faster memory access; compared to standard memory allocation methods, shorter run times are achieved (see Section 3.2). Additional comments: Three examples are given. They each include a readme file, a Mathematica notebook to generate the C-code form of the quantum optical master equation, a parameter file, a bash script which runs the program and converts the numerical data into a movie, two gnuplot scripts, and all files that are produced by running the bash script. Running time: For the first two examples the running time is less than a minute; the third example takes about 12 minutes. On a Pentium 4 (3 GHz) system, a rough estimate can be made with a value of 1 second per million grid points and per field variable.
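For orientation, the following Mathematica sketch mimics the field-propagation part of the scheme described above: a second-order Lax-Wendroff step for a single envelope obeying an advection-type wave equation du/dt + c du/dx = S. The grid parameters, Gaussian initial envelope, and zero source are illustrative placeholders and are not taken from msprop, which couples the source to the medium polarization and magnetization.

(* Lax-Wendroff step, second order in space and time, for du/dt + c du/dx = S *)
c = 1.0; dx = 0.01; dt = 0.005; nx = 200; nt = 100;   (* illustrative grid; c dt/dx = 0.5 *)
xs = Range[0., (nx - 1) dx, dx];
u = Exp[-((xs - 0.5)/0.05)^2];                        (* initial Gaussian envelope *)
src[t_] := 0.;                                        (* placeholder source term *)
step[u_, t_] := Module[{up = RotateLeft[u], um = RotateRight[u]},
   u - c dt (up - um)/(2 dx) + c^2 dt^2/2 (up - 2 u + um)/dx^2 + dt src[t]];
Do[u = step[u, n dt], {n, nt}];                       (* periodic boundaries via rotation *)
ListLinePlot[Transpose[{xs, u}]]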
NASA Astrophysics Data System (ADS)
Levi, Michele; Steinhoff, Jan
2017-12-01
We present a novel public package, ‘EFTofPNG’, for high-precision computation in the effective field theory of post-Newtonian (PN) gravity, including spins. We created this package in view of the timely need to publicly share automated computation tools that integrate the various types of physics manifested in the expected increasing influx of gravitational wave (GW) data. Hence, we created a free and open source package which is self-contained, modular, all-inclusive, and accessible to the classical gravity community. The ‘EFTofPNG’ Mathematica package uses the power of the ‘xTensor’ package, suited for complicated tensor computation; our code approaches the generation of Feynman contractions, a task universal to all perturbation theories in physics, generically and efficiently by treating n-point functions as tensors of rank n. The package currently contains four independent units, which serve as subsidiaries to the main one. Its final unit serves as a pipeline for obtaining the final GW templates, and provides the full computation of derivatives and physical observables of interest. The upcoming ‘EFTofPNG’ package version 1.0 should cover the point mass sector and all the spin sectors, up to the fourth PN order and the two-loop level. We expect and strongly encourage public development of the package to improve its efficiency and to extend it to further PN sectors and to observables useful for waveform modelling.
Truth & Beauty: Mathematics in Literature
ERIC Educational Resources Information Center
Cohen, Marion D.
2013-01-01
Today there are many categories of mathematics literature, including fiction and poetry. Mathematics fiction appears in such anthologies as "Fantasia Mathematica" (Fadiman 1958, 1997) and "The Mathematical Magpie" (Fadiman 1962, 1997). In addition, mathematics fiction is featured at http://kasmana.people.cofc.edu/MATHFICT.…
78 FR 72679 - Submission for OMB Review: Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-03
... OMB Review: Comment Request Title: RPG National Cross-Site Evaluation and Evaluation Technical..., activities, and services designed to increase well-being, improve permanency, and enhance the safety of... Research. The evaluation is being implemented by Mathematica Policy Research and its subcontractors, Walter...
DecouplingModes: Passive modes amplitudes
NASA Astrophysics Data System (ADS)
Shaw, J. Richard; Lewis, Antony
2018-01-01
DecouplingModes calculates the amplitude of the passive modes, which requires solving the Einstein equations on superhorizon scales sourced by the anisotropic stress from the magnetic fields (prior to neutrino decoupling), and the magnetic and neutrino stress (after decoupling). The code is available as a Mathematica notebook.
VetPop2001Adj is VA's new official estimate and projection of the veteran population as of 9-30-02. It revises and replaces the estimate and projection in VetPop2001. Mathematica has been actively involved since 1998 in designing and developing an actuarial model, VetPop2001, that...
Advanced compilation techniques in the PARADIGM compiler for distributed-memory multicomputers
NASA Technical Reports Server (NTRS)
Su, Ernesto; Lain, Antonio; Ramaswamy, Shankar; Palermo, Daniel J.; Hodges, Eugene W., IV; Banerjee, Prithviraj
1995-01-01
The PARADIGM compiler project provides an automated means to parallelize programs, written in a serial programming model, for efficient execution on distributed-memory multicomputers. A previous implementation of the compiler, based on the PTD representation, allowed symbolic array sizes, affine loop bounds and array subscripts, and a variable number of processors, provided that arrays were single- or multi-dimensionally block distributed. The techniques presented here extend the compiler to also accept multidimensional cyclic and block-cyclic distributions within a uniform symbolic framework. These extensions demand more sophisticated symbolic manipulation capabilities. A novel aspect of our approach is to meet this demand by interfacing PARADIGM with a powerful off-the-shelf symbolic package, Mathematica. This paper describes some of the Mathematica routines that perform various transformations, shows how they are invoked and used by the compiler to overcome the new challenges, and presents experimental results for code involving cyclic and block-cyclic arrays as evidence of the feasibility of the approach.
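As background for the distributions discussed above, the standard block-cyclic mapping from a global array index to an owning processor and local index can be stated in a few lines. The following Mathematica sketch is a generic illustration of that mapping, not PARADIGM's internal representation:

(* 0-based global index i, block size b, p processors: block-cyclic ownership and local index *)
blockCyclic[i_, b_, p_] := Module[{blk = Quotient[i, b]},
  {Mod[blk, p],                         (* owning processor *)
   Quotient[blk, p] b + Mod[i, b]}]     (* local index on that processor *)
Table[blockCyclic[i, 2, 3], {i, 0, 11}]   (* example: 12 elements, block size 2, 3 processors *)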
System and method for controlling power consumption in a computer system based on user satisfaction
Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok
2014-04-22
Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.
ERIC Educational Resources Information Center
Eckalbar, John C.
2002-01-01
Illustrates how principles and intermediate microeconomics students can gain an understanding of strategic price setting by playing a relatively large oligopoly game. Explains that the game extends to a continuous price space and outlines appropriate applications. Offers the Mathematica code to instructors so that the assumptions of the game can…
Estimating the Overdiagnosis Fraction in Cancer Screening | Division of Cancer Prevention
By Stuart G. Baker, 2017. This software supports the mathematical investigation into estimating the fraction of cancers detected on screening that are overdiagnosed. Reference: Baker SG and Prorok PC. Estimating the overdiagnosis fraction in cancer screening. Requirement: Mathematica Version 11 or later.
76 FR 39394 - Notice of Proposed Information Collection Requests
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-06
... within the U.S. Department of Education (ED) has contracted with Decision Information Resources, Inc. and Mathematica Policy Research, Inc. to assess the procedures for collecting and reporting program performance and evaluation data for eleven ED grant programs. These audits and assessments will provide ED with...
Fourier Series and Elliptic Functions
ERIC Educational Resources Information Center
Fay, Temple H.
2003-01-01
Non-linear second-order differential equations whose solutions are the elliptic functions "sn"("t, k"), "cn"("t, k") and "dn"("t, k") are investigated. Using "Mathematica", high-precision numerical solutions are generated. From these data, Fourier coefficients are determined, yielding approximate formulas for these non-elementary functions that are…
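A minimal Mathematica sketch of the kind of computation described here: numerically extract the leading Fourier sine coefficients of sn(t, k) over its period 4K and compare the truncated series with the function itself. The modulus value is an arbitrary illustration; note that Mathematica's JacobiSN and EllipticK take the parameter m = k^2.

m = 0.5^2;                              (* parameter m = k^2 *)
KK = EllipticK[m];                      (* quarter period; sn has period 4 KK *)
w = Pi/(2 KK);                          (* fundamental angular frequency 2 Pi/(4 KK) *)
b[n_] := NIntegrate[JacobiSN[t, m] Sin[n w t], {t, 0, 4 KK}]/(2 KK);
coeffs = Table[b[n], {n, 1, 5}];        (* even-n coefficients vanish since sn is odd *)
approx[t_] := Sum[coeffs[[n]] Sin[n w t], {n, 1, 5}];
Plot[{JacobiSN[t, m], approx[t]}, {t, 0, 4 KK}]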
Does Your Graphing Software Real-ly Work?
ERIC Educational Resources Information Center
Marchand, R. J.; McDevitt, T. J.; Bosse, Michael J.; Nandakumar, N. R.
2007-01-01
Many popular mathematical software products including Maple, Mathematica, Derive, Mathcad, Matlab, and some of the TI calculators produce incorrect graphs because they use complex arithmetic instead of "real" arithmetic. This article expounds on this issue, provides possible remedies for instructors to share with their students, and demonstrates…
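The canonical instance of the issue described here is the cube root of a negative number: the principal complex root hides the negative branch of the graph. A short Mathematica illustration (CubeRoot and Surd are available in Mathematica 9 and later):

(-8.)^(1/3)                     (* principal complex root: 1. + 1.73205 I, not -2 *)
CubeRoot[-8.]                   (* real cube root: -2. *)
Plot[x^(1/3), {x, -8, 8}]       (* empty for x < 0: nonreal values are dropped *)
Plot[CubeRoot[x], {x, -8, 8}]   (* full curve through the origin *)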
Computational tools for Breakthrough Propulsion Physics: State of the art and future prospects
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2000-01-01
To address problems in Breakthrough Propulsion Physics (BPP) one needs sheer computing capabilities. This is because General Relativity and Quantum Field Theory are so mathematically sophisticated that the amount of analytical calculations is prohibitive and one can hardly do all of them by hand. In this paper we make a comparative review of the main tensor calculus capabilities of the three most advanced and commercially available "symbolic manipulator" codes: Macsyma, Maple V and Mathematica. We also point out that currently one faces such a variety of different conventions in tensor calculus that it is difficult or impossible to compare results obtained by different scholars in General Relativity and Quantum Field Theory. Mathematical physicists, experimental physicists and engineers each have their own way of customizing tensors, especially by using different metric signatures, different metric determinant signs, different definitions of the basic Riemann and Ricci tensors, and by adopting different systems of physical units. This chaos greatly hampers progress toward the chief NASA BPP goal: the design of the NASA Warp Drive. It is thus concluded that NASA should impose order by establishing international standards in symbolic tensor calculus and requiring anyone working in BPP to adopt these NASA BPP Standards.
Combining Interactive Thermodynamics Simulations with Screencasts and Conceptests
ERIC Educational Resources Information Center
Falconer, John L.
2016-01-01
More than 40 interactive "Mathematica" simulations were prepared for chemical engineering thermodynamics, screencasts were prepared that explain how to use each simulation, and more than 100 ConcepTests were prepared that utilize the simulations. They are located on www.LearnChemE.com. The purposes of these simulations are to clarify…
Pedagogical View of Model Metabolic Cycles
ERIC Educational Resources Information Center
García-Herrero, Victor; Sillero, Antonio
2015-01-01
The main purpose of this study was to present a simplified view of model metabolic cycles. Although the models were elaborated with the "Mathematica" program using a system of differential equations, the main conclusions were presented in a rather intuitive way, easily understandable by students of general courses of…
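A model cycle of this kind reduces to a small ODE system that Mathematica's NDSolve handles directly; the three-metabolite cycle and rate constants below are purely illustrative and are not taken from the article:

(* cycle A -> B -> C -> A with first-order rate constants *)
{k1, k2, k3} = {1.0, 0.5, 0.8};
sol = NDSolve[{
    a'[t] == k3 c[t] - k1 a[t],
    b'[t] == k1 a[t] - k2 b[t],
    c'[t] == k2 b[t] - k3 c[t],
    a[0] == 1, b[0] == 0, c[0] == 0},
   {a, b, c}, {t, 0, 20}];
Plot[Evaluate[{a[t], b[t], c[t]} /. sol], {t, 0, 20}]  (* concentrations relax to a steady cycle *)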
Parameter Estimates in Differential Equation Models for Population Growth
ERIC Educational Resources Information Center
Winkel, Brian J.
2011-01-01
We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
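In the same spirit as the exercises described here, logistic-growth parameters can be estimated in Mathematica in a few lines; the sketch below uses FindFit on synthetic data rather than the article's gradient-search code, and all numbers are illustrative:

(* fit P(t) = k/(1 + a Exp[-r t]) to noisy synthetic logistic data *)
logistic[t_, k_, a_, r_] := k/(1 + a Exp[-r t]);
data = Table[{t, logistic[t, 660., 32., 0.53] (1 + 0.02 RandomReal[{-1, 1}])}, {t, 0, 20, 2}];
fit = FindFit[data, logistic[t, k, a, r], {{k, 500}, {a, 20}, {r, 0.3}}, t]
Show[ListPlot[data], Plot[logistic[t, k, a, r] /. fit, {t, 0, 20}]]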
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-27
... Education Sciences/National Center for Education Statistics (IES), Department of Education (ED). ACTION... Education Sciences (IES) U.S. Department of Education and is being implemented by ICF International and its subcontractor, Mathematica Policy Research. This submission requests approval to recruit districts for the study...
Simulating three dimensional wave run-up over breakwaters covered by antifer units
NASA Astrophysics Data System (ADS)
Najafi-Jilani, A.; Niri, M. Zakiri; Naderi, Nader
2014-06-01
The paper presents a numerical analysis of wave run-up over rubble-mound breakwaters covered by antifer units, using a technique integrating Computer-Aided Design (CAD) and Computational Fluid Dynamics (CFD) software. Direct application of the Navier-Stokes equations within the armour blocks is used to provide a more reliable approach to simulating wave run-up over breakwaters. A well-tested Reynolds-averaged Navier-Stokes (RANS) Volume of Fluid (VOF) code (Flow-3D) was adopted for the CFD computations. The computed results were compared with experimental data to check the validity of the model. Numerical results showed that the direct three-dimensional (3D) simulation method can deliver accurate results for wave run-up over rubble-mound breakwaters. The results also showed that the placement pattern of the antifer units has a great impact on wave run-up: changing the placement pattern from regular to double pyramid can reduce the wave run-up by approximately 30%. Analysis was performed to investigate the influences of surface roughness, energy dissipation in the pores of the armour layer, and the reduction in wave run-up due to inflow into the armour and stone layers.
WinSCP for Windows File Transfers | High-Performance Computing | NREL
WinSCP can be used to securely transfer files between your local computer running Microsoft Windows and a remote computer running Linux.
RAPPORT: running scientific high-performance computing applications on the cloud.
Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt
2013-01-28
Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.
GEANT4 distributed computing for compact clusters
NASA Astrophysics Data System (ADS)
Harrawood, Brian P.; Agasthya, Greeshma A.; Lakshmanan, Manu N.; Raterman, Gretchen; Kapadia, Anuj J.
2014-11-01
A new technique for distribution of GEANT4 processes is introduced to simplify running a simulation in a parallel environment such as a tightly coupled computer cluster. Using a new C++ class derived from the GEANT4 toolkit, multiple runs forming a single simulation are managed across a local network of computers with a simple inter-node communication protocol. The class is integrated with the GEANT4 toolkit and is designed to scale from a single symmetric multiprocessing (SMP) machine to compact clusters ranging in size from tens to thousands of nodes. User-designed 'work tickets' are distributed to clients using a client-server work flow model to specify the parameters for each individual run of the simulation. The new g4DistributedRunManager class was developed and well tested in the course of our Neutron Stimulated Emission Computed Tomography (NSECT) experiments. It will be useful for anyone running GEANT4 for large discrete data sets, such as covering a range of angles in computed tomography, calculating dose delivery with multiple fractions, or simply speeding the throughput of a single model.
Computational Methods for Feedback Controllers for Aerodynamics Flow Applications
2007-08-15
Iteration #, and y-translation by:
>> Fy = [unf(:,8); runA(:,8); runB(:,8); runC(:,8); runD(:,8); runE(:,8)];
>> Oy = [unf(:,23); runA(:,23); runB(:,23); runC(:,23); runD(:,23); runE(:,23)];
>> Iter = [unf(:,1); runA(:,1); runB(:,1); runC(:,1); runD(:,1); runE(:,1)];
>> plot(Fy)
Cobalt version 4.0
Proposal for grid computing for nuclear applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.
2014-02-12
The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has become a necessity. In this paper, we describe how clusters running a specific application could use resources within the grid to speed up the computing process.
Tellurium notebooks-An environment for reproducible dynamical modeling in systems biology.
Medley, J Kyle; Choi, Kiri; König, Matthias; Smith, Lucian; Gu, Stanley; Hellerstein, Joseph; Sealfon, Stuart C; Sauro, Herbert M
2018-06-01
The considerable difficulty encountered in reproducing the results of published dynamical models limits validation, exploration and reuse of this increasingly large biomedical research resource. To address this problem, we have developed Tellurium Notebook, a software system for model authoring, simulation, and teaching that facilitates building reproducible dynamical models and reusing models by 1) providing a notebook environment which allows models, Python code, and narrative to be intermixed, 2) supporting the COMBINE archive format during model development for capturing model information in an exchangeable format and 3) enabling users to easily simulate and edit public COMBINE-compliant models from public repositories to facilitate studying model dynamics, variants and test cases. Tellurium Notebook, a Python-based Jupyter-like environment, is designed to seamlessly inter-operate with these community standards by automating conversion between COMBINE standards formulations and corresponding in-line, human-readable representations. Thus, Tellurium brings to systems biology the strategy used by other literate notebook systems such as Mathematica. These capabilities allow users to edit every aspect of the standards-compliant models and simulations, run the simulations in-line, and re-export to standard formats. We provide several use cases illustrating the advantages of our approach and how it allows development and reuse of models without requiring technical knowledge of standards. Adoption of Tellurium should accelerate model development, reproducibility and reuse.
Value-Added Models for the Pittsburgh Public Schools
ERIC Educational Resources Information Center
Johnson, Matthew; Lipscomb, Stephen; Gill, Brian; Booker, Kevin; Bruch, Julie
2012-01-01
At the request of Pittsburgh Public Schools (PPS) and the Pittsburgh Federation of Teachers (PFT), Mathematica has developed value-added models (VAMs) that aim to estimate the contributions of individual teachers, teams of teachers, and schools to the achievement growth of their students. The authors' work in estimating value-added in Pittsburgh…
Software for the Application of Discrete Latent Structure Models to Item Response Data.
ERIC Educational Resources Information Center
Haertel, Edward H.
These FORTRAN programs and MATHEMATICA routines were developed in the course of a research project titled "Achievement and Assessment in School Science: Modeling and Mapping Ability and Performance." Their use is described in other publications from that project, including "Latent Traits or Latent States? The Role of Discrete Models…
The Effectiveness of Mandatory-Random Student Drug Testing. NCEE 2010-4025
ERIC Educational Resources Information Center
James-Burdumy, Susanne; Goesling, Brian; Deke, John; Einspruch, Eric
2010-01-01
To help assess the effects of school-based random drug testing programs, the U.S. Department of Education's Institute of Education Sciences (IES) contracted with RMC Research Corporation and Mathematica Policy Research to conduct an experimental evaluation of the Mandatory-Random Student Drug Testing (MRSDT) programs in 36 high schools within…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-21
... applicant must demonstrate in its application that its proposed PBCS is designed to assist high-need schools... implementation plan developed by the Institute of Education Sciences (IES) evaluator, Mathematica Policy Research...) A letter from the research office or research board of the participating LEA that expresses an...
Abstinence Programs Don't Work, Largest Study to Date Concludes
ERIC Educational Resources Information Center
Freking, Kevin
2007-01-01
This article reports on a study conducted by Mathematica Policy Research Inc. of students in four abstinence programs, as well as peers from the same communities who did not participate in the abstinence programs. A federally mandated report said that students who participated in sexual-abstinence education programs partially funded by the federal…
Applications of the Peng-Robinson Equation of State Using Mathematica
ERIC Educational Resources Information Center
Binous, Housam
2008-01-01
A single equation of state (EOS) such as the Peng-Robinson EOS can accurately describe both the liquid and vapor phase. We present several applications of this equation of state including adiabatic flash calculation, determination of the solubility of methanol in natural gas, and the calculation of high-pressure chemical equilibrium. The problems…
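As a concrete illustration of the kind of calculation described, the sketch below solves the Peng-Robinson EOS in its standard cubic form for the compressibility factor Z of a pure component in Mathematica; the formulas are the usual PR expressions, and the critical constants (roughly those of methane) and state point are illustrative only:

(* Peng-Robinson cubic: Z^3 - (1 - B) Z^2 + (A - 3 B^2 - 2 B) Z - (A B - B^2 - B^3) == 0 *)
Rgas = 8.314; Tc = 190.6; Pc = 45.99*^5; acen = 0.012;  (* ~methane critical data *)
T = 150.; P = 10.*^5;                                    (* subcritical state point *)
a = 0.45724 Rgas^2 Tc^2/Pc; b = 0.07780 Rgas Tc/Pc;
kap = 0.37464 + 1.54226 acen - 0.26992 acen^2;
alf = (1 + kap (1 - Sqrt[T/Tc]))^2;
A = a alf P/(Rgas T)^2; B = b P/(Rgas T);
zs = Re /@ Select[z /. NSolve[z^3 - (1 - B) z^2 + (A - 3 B^2 - 2 B) z - (A B - B^2 - B^3) == 0, z],
    Abs[Im[#]] < 10^-8 &];
{Min[zs], Max[zs]}   (* liquid-like and vapor-like Z, when both are real *)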
Analysis of Classes of Superlinear Semipositone Problems with Nonlinear Boundary Conditions
NASA Astrophysics Data System (ADS)
Morris, Quinn A.
We study positive radial solutions for classes of steady state reaction diffusion problems on the exterior of a ball with both Dirichlet and nonlinear boundary conditions. We consider p-Laplacian problems (p > 1) with reaction terms which are superlinear at infinity and semipositone. In the case p = 2, using variational methods, we establish the existence of a solution, and via detailed analysis of the Green's function, we prove the positivity of the solution. In the case p ≠ 2, we again use variational methods to establish the existence of a solution, but the positivity of the solution is achieved via sophisticated a priori estimates. In the case p ≠ 2, the Green's function analysis is no longer available. Our results significantly enhance the literature on superlinear semipositone problems. Finally, we provide algorithms for the numerical generation of exact bifurcation curves for one-dimensional problems. In the autonomous case, we extend and analyze a quadrature method, and using nonlinear solvers in Mathematica, generate bifurcation curves. In the nonautonomous case, we employ shooting methods in Mathematica to generate bifurcation curves.
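The shooting idea mentioned above can be sketched in Mathematica with ParametricNDSolveValue: for a model problem u'' + λ f(u) = 0 on (0,1) with u(0) = u(1) = 0, one integrates from the left with slope s and solves u(1) = 0 for s. The logistic nonlinearity, λ values, and starting guesses below are illustrative and are not the dissertation's problems; the second root-find may need its bracket adjusted to avoid the trivial solution s = 0.

f[u_] := u (1 - u);   (* illustrative nonlinearity *)
uEnd = ParametricNDSolveValue[
   {u''[x] + lam f[u[x]] == 0, u[0] == 0, u'[0] == s},
   u[1], {x, 0, 1}, {lam, s}];
(* near-linear check: for a tiny slope the first zero of u(1) sits near the eigenvalue Pi^2 *)
FindRoot[uEnd[lam, 0.01] == 0, {lam, 9, 10}]
(* one point on the nontrivial branch: fix lam and shoot for the slope s *)
FindRoot[uEnd[11., s] == 0, {s, 0.8, 1.2}]
(* repeating over a range of lam traces out a bifurcation curve (lam, s) *)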
SSL - THE SIMPLE SOCKETS LIBRARY
NASA Technical Reports Server (NTRS)
Campbell, C. E.
1994-01-01
The Simple Sockets Library (SSL) allows C programmers to develop systems of cooperating programs using Berkeley streaming Sockets running under the TCP/IP protocol over Ethernet. The SSL provides a simple way to move information between programs running on the same or different machines and does so with little overhead. The SSL can create three types of Sockets: namely a server, a client, and an accept Socket. The SSL's Sockets are designed to be used in a fashion reminiscent of the use of FILE pointers so that a C programmer who is familiar with reading and writing files will immediately feel comfortable with reading and writing with Sockets. The SSL consists of three parts: the library, PortMaster, and utilities. The user of the SSL accesses it by linking programs to the SSL library. The PortMaster initializes connections between clients and servers. The PortMaster also supports a "firewall" facility to keep out socket requests from unapproved machines. The "firewall" is a file which contains Internet addresses for all approved machines. There are three utilities provided with the SSL. SKTDBG can be used to debug programs that make use of the SSL. SPMTABLE lists the servers and port numbers on requested machine(s). SRMSRVR tells the PortMaster to forcibly remove a server name from its list. The package also includes two example programs: multiskt.c, which makes multiple accepts on one server, and sktpoll.c, which repeatedly attempts to connect a client to some server at one second intervals. SSL is a machine independent library written in the C-language for computers connected via Ethernet using the TCP/IP protocol. It has been successfully compiled and implemented on a variety of platforms, including Sun series computers running SunOS, DEC VAX series computers running VMS, SGI computers running IRIX, DECstations running ULTRIX, DEC alpha AXPs running OSF/1, IBM RS/6000 computers running AIX, IBM PC and compatibles running BSD/386 UNIX and HP Apollo 3000/4000/9000/400T computers running HP-UX. SSL requires 45K of RAM to run under SunOS and 80K of RAM to run under VMS. For use on IBM PC series computers and compatibles running DOS, SSL requires Microsoft C 6.0 and the Wollongong TCP/IP package. Source code for sample programs and debugging tools are provided. The documentation is available on the distribution medium in TeX and PostScript formats. The standard distribution medium for SSL is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format and a 5.25 inch 360K MS-DOS format diskette. The SSL was developed in 1992 and was updated in 1993.
NASA Astrophysics Data System (ADS)
Alloul, Adam; Christensen, Neil D.; Degrande, Céline; Duhr, Claude; Fuks, Benjamin
2014-06-01
The program FEYNRULES is a MATHEMATICA package developed to facilitate the implementation of new physics theories into high-energy physics tools. Starting from a minimal set of information such as the model gauge symmetries, its particle content, parameters and Lagrangian, FEYNRULES provides all necessary routines to extract automatically from the Lagrangian (which can also be computed semi-automatically for supersymmetric theories) the associated Feynman rules. These can be further exported to several Monte Carlo event generators through dedicated interfaces, as well as translated into a PYTHON library, under the so-called UFO model format, agnostic of the model complexity, especially in terms of Lorentz and/or color structures appearing in the vertices or of the number of external legs. In this work, we briefly report on the most recent new features that have been added to FEYNRULES, including full support for spin-3/2 fermions, a new module allowing for the automated diagonalization of the particle spectrum and a new set of routines dedicated to decay width calculations.
Serang, Oliver
2012-01-01
Linear programming (LP) problems are commonly used in analysis and resource allocation, frequently surfacing as approximations to more difficult problems. Existing approaches to LP have been dominated by a small group of methods, and randomized algorithms have not enjoyed popularity in practice. This paper introduces a novel randomized method of solving LP problems by moving along the facets and within the interior of the polytope along rays randomly sampled from the polyhedral cones defined by the bounding constraints. This conic sampling method is then applied to randomly sampled LPs, and its runtime performance is shown to compare favorably to the simplex and primal affine-scaling algorithms, especially on polytopes with certain characteristics. The conic sampling method is then adapted and applied to solve a certain quadratic program, which computes a projection onto a polytope; the proposed method is shown to outperform the proprietary software Mathematica on large, sparse QP problems constructed from mass spectrometry-based proteomics. PMID:22952741
Numerical evaluation of multi-loop integrals for arbitrary kinematics with SecDec 2.0
NASA Astrophysics Data System (ADS)
Borowka, Sophia; Carter, Jonathon; Heinrich, Gudrun
2013-02-01
We present the program SecDec 2.0, which contains various new features. First, it allows the numerical evaluation of multi-loop integrals with no restriction on the kinematics. Dimensionally regulated ultraviolet and infrared singularities are isolated via sector decomposition, while threshold singularities are handled by a deformation of the integration contour in the complex plane. As an application, we present numerical results for various massive two-loop four-point diagrams. SecDec 2.0 also contains new useful features for the calculation of more general parameter integrals, related for example to phase space integrals. Program summaryProgram title: SecDec 2.0 Catalogue identifier: AEIR_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIR_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 156829 No. of bytes in distributed program, including test data, etc.: 2137907 Distribution format: tar.gz Programming language: Wolfram Mathematica, Perl, Fortran/C++. Computer: From a single PC to a cluster, depending on the problem. Operating system: Unix, Linux. RAM: Depending on the complexity of the problem Classification: 4.4, 5, 11.1. Catalogue identifier of previous version: AEIR_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182(2011)1566 Does the new version supersede the previous version?: Yes Nature of problem: Extraction of ultraviolet and infrared singularities from parametric integrals appearing in higher order perturbative calculations in gauge theories. Numerical integration in the presence of integrable singularities (e.g., kinematic thresholds). Solution method: Algebraic extraction of singularities in dimensional regularization using iterated sector decomposition. This leads to a Laurent series in the dimensional regularization parameter ɛ, where the coefficients are finite integrals over the unit hypercube. Those integrals are evaluated numerically by Monte Carlo integration. The integrable singularities are handled by choosing a suitable integration contour in the complex plane, in an automated way. Reasons for new version: In the previous version the calculation of multi-scale integrals was restricted to the Euclidean region. Now multi-loop integrals with arbitrary physical kinematics can be evaluated. Another major improvement is the possibility of full parallelization. Summary of revisions: No restriction on the kinematics for multi-loop integrals. The integrand can be constructed from the topological cuts of the diagram. Possibility of full parallelization. Numerical integration of multi-loop integrals written in C++ rather than Fortran. Possibility to loop over ranges of parameters. Restrictions: Depending on the complexity of the problem, limited by memory and CPU time. The restriction that multi-scale integrals could only be evaluated at Euclidean points is superseded in version 2.0. Running time: Between a few minutes and several days, depending on the complexity of the problem. Test runs provided take only seconds.
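The idea of iterated sector decomposition that SecDec automates can be seen in Mathematica on a textbook toy integral (a standard example from the sector-decomposition literature, not part of SecDec itself): I(ε) = ∫_0^1 ∫_0^1 dx dy (x+y)^(-2+ε) is split into the sectors x > y and y > x, and substituting y = x t in the first factorizes the singularity into x^(-1+ε):

(* sector x > y with y = x t (Jacobian x); the sector y > x is identical by symmetry *)
sector = Integrate[x^(-1 + ε) (1 + t)^(-2 + ε), {x, 0, 1}, {t, 0, 1},
   Assumptions -> 0 < ε < 1];
total = 2 sector;             (* = 2 (2^(ε - 1) - 1)/(ε (ε - 1)) *)
Series[total, {ε, 0, 0}]      (* 1/ε + (1 - Log[2]) + O(ε): the 1/ε pole is now explicit *)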
Simulation of LHC events on a million threads
NASA Astrophysics Data System (ADS)
Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.
2015-12-01
Demand for Grid resources is expected to double during LHC Run II as compared to Run I; the capacity of the Grid, however, will not double. The HEP community must consider how to bridge this computing gap by targeting larger compute resources and using the available compute resources as efficiently as possible. Argonne's Mira, the fifth fastest supercomputer in the world, can run roughly five times the number of parallel processes that the ATLAS experiment typically uses on the Grid. We ported Alpgen, a serial x86 code, to run as a parallel application under MPI on the Blue Gene/Q architecture. By analysis of the Alpgen code, we reduced the memory footprint to allow running 64 threads per node, utilizing the four hardware threads available per core on the PowerPC A2 processor. Event generation and unweighting, typically run as independent serial phases, are coupled together in a single job in this scenario, reducing intermediate writes to the filesystem. By these optimizations, we have successfully run LHC proton-proton physics event generation at the scale of a million threads, filling two-thirds of Mira.
Running Jobs on the Peregrine System | High-Performance Computing | NREL
Information on running jobs on the Peregrine high-performance computing (HPC) system, covering how to run different types of jobs, batch job scheduling policies (queue names, limits, etc.), requesting different node types, and sample batch scripts.
Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories
NASA Technical Reports Server (NTRS)
Ng, Hok Kwan; Sridhar, Banavar
2016-01-01
This study examines three possible approaches to improving the speed in generating wind-optimal routes for air traffic at the national or global level. They are: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing those same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared to a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs ranging from 80 to 10,240 units are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential computational enhancement through parallel processing on computer clusters. This study also re-implements the trajectory optimization algorithm, reducing computational time further through algorithm modifications, and integrates it with FACET so that the new features, which calculate time-optimal routes between worldwide airport pairs in a wind field, can be used with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. Performance is evaluated by comparing computational efficiency and the potential applications of the optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.
WinHPC System | High-Performance Computing | NREL
NREL's WinHPC system is a computing cluster running the Microsoft Windows operating system. It allows users to run jobs requiring a Windows environment, such as ANSYS and MATLAB.
Study Finds Charter Networks Give No Clear Edge on Results
ERIC Educational Resources Information Center
Shah, Nirvi
2011-01-01
The author reports on a national study of middle school students in 40 charter networks which finds that, when it comes to having an impact on student achievement, results vary and, overall, charter students do not learn dramatically more than their counterparts in regular public schools. The findings from the research group Mathematica and the…
Evaluating ARRA Programs and Other Educational Reforms: A Guide for States
ERIC Educational Resources Information Center
Perez-Johnson, Irma; Walters, Kirk; Puma, Michael; Herman, Rebecca; Garet, Michael; Heppen, Jessica; Lemke, Mariann; Aladjem, Daniel; Amin, Samia; Burghardt, John
2011-01-01
The American Institutes for Research (AIR) and Mathematica Policy Research (MPR) developed this guide to help you consider evaluation issues likely to arise as you launch ARRA-funded initiatives and other educational reform activities. Many states are already involved in evaluation, so many of the ideas presented here may be familiar. The authors…
A Quantitative Methodology for Vetting Dark Network Intelligence Sources for Social Network Analysis
2012-06-01
first algorithm by Erdős and Rényi (Erdős & Rényi, 1959). This earliest algorithm suffers from the fact that its degree distribution is not scale... Fundamental Media Understanding. Norderstedt: atpress. Erdős, P., & Rényi, A. (1959). On random graphs. Publicationes Mathematicae, 6, 290-297. Erdős, P
2010-11-30
Erdős-Rényi-Gilbert random graph [Erdős and Rényi, 1959; Gilbert, 1959], the Watts-Strogatz "small world" framework [Watts and Strogatz, 1998], and the... 2003). Evolution of Networks. Oxford University Press, USA. Erdős, P. and Rényi, A. (1959). On Random Graphs. Publicationes Mathematicae, 6, 290-297
ERIC Educational Resources Information Center
Darling-Hammond, Linda
2009-01-01
Recent findings from a Mathematica study comparing the performance of teachers prepared via alternative and traditional routes have been interpreted to suggest that policymakers and practitioners should expand the use of fast-entry alternative routes and seek teachers trained through such programs, as they presumably perform as well in the…
Examining Variation in Achievement Impacts across the KIPP Network of Charter Schools
ERIC Educational Resources Information Center
Tuttle, Christina Clark; Gleason, Philip; Furgeson, Joshua
2012-01-01
As a condition of its i3 grant, KIPP contracted with an independent evaluator (Mathematica) to address a key research question: does KIPP maintain its demonstrated effectiveness as it scales? While this question sounds simple enough in theory, it poses several methodological and practical challenges. This paper outlines some of those key…
ERIC Educational Resources Information Center
Lipscomb, Stephen; Gill, Brian; Booker, Kevin; Johnson, Matthew
2010-01-01
At the request of Pittsburgh Public Schools (PPS) and the Pittsburgh Federation of Teachers (PFT), Mathematica is developing value-added models (VAMs) that aim to estimate the contributions of individual teachers, teams of teachers, and schools to the achievement growth of their students. The analyses described in this report are intended as an…
ERIC Educational Resources Information Center
Mahavier, W. Ted
2002-01-01
Describes a two-semester numerical methods course that serves as a research experience for undergraduate students without requiring external funding or the modification of current curriculum. Uses an engineering problem to introduce students to constrained optimization via a variation of the traditional isoperimetric problem of finding the curve…
The Logical Heart of a Classic Proof Revisited: A Guide to Godel's "Incompleteness" Theorems
ERIC Educational Resources Information Center
Padula, Janice
2011-01-01
The study of Kurt Godel's proof of the "incompleteness" of a formal system such as "Principia Mathematica" is a great way to stimulate students' thinking and creative processes and interest in mathematics and its important developments. This article describes salient features of the proof together with ways to deal with potential difficulties for…
Equivariant Verlinde Algebra from Superconformal Index and Argyres-Seiberg Duality
NASA Astrophysics Data System (ADS)
Gukov, Sergei; Pei, Du; Yan, Wenbin; Ye, Ke
2018-02-01
In this paper, we show the equivalence between two seemingly distinct 2d TQFTs: one comes from the "Coulomb branch index" of the class S theory T[Σ, G] on L(k,1) × S^1, the other is the ^L G "equivariant Verlinde formula", or equivalently the partition function of ^L G_C complex Chern-Simons theory on Σ × S^1. We first derive this equivalence using the M-theory geometry and show that the gauge groups appearing on the two sides are naturally G and its Langlands dual ^L G. When G is not simply-connected, we provide a recipe for computing the index of T[Σ, G] as a summation over the indices of T[Σ, G̃] with non-trivial background 't Hooft fluxes, where G̃ is the universal cover of G. Then we check explicitly this relation between the Coulomb index and the equivariant Verlinde formula for G = SU(2) or SO(3). In the end, as an application of this newly found relation, we consider the more general case where G is SU(N) or PSU(N) and show that the equivariant Verlinde algebra can be derived using field theory via (generalized) Argyres-Seiberg duality. We also attach a Mathematica notebook that can be used to compute the SU(3) equivariant Verlinde coefficients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
March-Leuba, S.; Jansen, J.F.; Kress, R.L.
A new program package, Symbolic Manipulator Laboratory (SML), for the automatic generation of both kinematic and static manipulator models in symbolic form is presented. Critical design parameters may be identified and optimized using symbolic models, as shown in the sample application presented for the Future Armor Rearm System (FARS) arm. The computer-aided development of the symbolic models yields equations with reduced numerical complexity. Particular attention has been paid to the simplification of closed-form solutions and to user-friendly operation. The main emphasis of this research is the development of a methodology, implemented in a computer program, capable of generating symbolic kinematic and static force models of manipulators. The fact that the models are obtained trigonometrically reduced is among the most significant results of this work and was the most difficult to implement. Mathematica, a commercial program that allows symbolic manipulation, is used to implement the program package. SML is written such that the user can change any of the subroutines or create new ones easily. To assist the user, on-line help has been written to make SML a user-friendly package. Some sample applications are presented. The design and optimization of the 5-degrees-of-freedom (DOF) FARS manipulator using SML is discussed. Finally, the kinematic and static models of two different 7-DOF manipulators are calculated symbolically.
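In the same spirit as SML's trigonometrically reduced models, forward kinematics can be generated and reduced symbolically in Mathematica; the planar two-link example below is a generic illustration and does not use SML's actual interface:

(* homogeneous planar transform: rotation by q, then translation l along the rotated x-axis *)
T[q_, l_] := {{Cos[q], -Sin[q], l Cos[q]},
              {Sin[q],  Cos[q], l Sin[q]},
              {0, 0, 1}};
tip = T[q1, l1] . T[q2, l2];
Simplify[TrigReduce[tip[[1 ;; 2, 3]]]]
(* -> {l1 Cos[q1] + l2 Cos[q1 + q2], l1 Sin[q1] + l2 Sin[q1 + q2]} *)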
NASA Astrophysics Data System (ADS)
Huang, Ching-Sheng; Yeh, Hund-Der
2016-11-01
This study introduces an analytical approach to estimate drawdown induced by well extraction in a heterogeneous confined aquifer with an irregular outer boundary. The aquifer domain is divided into a number of zones according to the zonation method for representing the spatial distribution of a hydraulic parameter field. The lateral boundary of the aquifer can be considered under the Dirichlet, Neumann or Robin condition at different parts of the boundary. Flow across the interface between two zones satisfies the continuities of drawdown and flux. Source points, each of which has an unknown volumetric rate representing the boundary effect on the drawdown, are allocated around the boundary of each zone. The solution of drawdown in each zone is expressed as a series in terms of the Theis equation with unknown volumetric rates from the source points. The rates are then determined based on the aquifer boundary conditions and the continuity requirements. The estimated aquifer drawdown by the present approach agrees well with a finite element solution developed based on the Mathematica function NDSolve. As compared with existing numerical approaches, the present approach has the merit of directly computing the drawdown at any given location and time, and therefore takes much less computing time to obtain the required results in engineering applications.
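The building block of this approach, the Theis solution, evaluates directly in Mathematica via the exponential integral; the single-well sketch below uses illustrative parameter values, not those of the study:

(* Theis drawdown s(r,t) = Q W(u)/(4 Pi T), u = r^2 S/(4 T t), with W(u) = ExpIntegralE[1, u] *)
Qw = 0.01;     (* pumping rate, m^3/s *)
Tt = 0.001;    (* transmissivity, m^2/s *)
Sc = 0.0001;   (* storativity, dimensionless *)
drawdown[r_, t_] := Qw/(4 Pi Tt) ExpIntegralE[1, r^2 Sc/(4 Tt t)];
drawdown[10., 3600.]                             (* drawdown 10 m from the well after one hour *)
LogLinearPlot[drawdown[10., t], {t, 60, 86400}]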
Analyzing Spacecraft Telecommunication Systems
NASA Technical Reports Server (NTRS)
Kordon, Mark; Hanks, David; Gladden, Roy; Wood, Eric
2004-01-01
Multi-Mission Telecom Analysis Tool (MMTAT) is a C-language computer program for analyzing proposed spacecraft telecommunication systems. MMTAT utilizes parameterized input and computational models that can be run on standard desktop computers to perform fast and accurate analyses of telecommunication links. MMTAT is easy to use and can easily be integrated with other software applications and run as part of almost any computational simulation. It is distributed as either a stand-alone application program with a graphical user interface or a linkable library with a well-defined set of application programming interface (API) calls. As a stand-alone program, MMTAT provides both textual and graphical output. The graphs make it possible to understand, quickly and easily, how telecommunication performance varies with variations in input parameters. A delimited text file that can be read by any spreadsheet program is generated at the end of each run. The API in the linkable-library form of MMTAT enables the user to control simulation software and to change parameters during a simulation run. Results can be retrieved either at the end of a run or by use of a function call at any time step.
2011-08-01
[Figure: classification of streaming data; example input images (top left); all digit prototypes (cluster centers) found, with size proportional to frequency. Figure 4: Architectural diagram of running Blender on Amazon EC2 through Nimbis.]
Design for Run-Time Monitor on Cloud Computing
NASA Astrophysics Data System (ADS)
Kang, Mikyung; Kang, Dong-In; Yun, Mira; Park, Gyung-Leen; Lee, Junghoon
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications, as well as infrastructure, as services over the Internet. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. Large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design a Run-Time Monitor (RTM), system software that monitors application behavior at run-time, analyzes the collected information, and optimizes resources on cloud computing. RTM monitors application software through library instrumentation, as well as the underlying hardware through performance counters, optimizing the computing configuration based on the analyzed data.
Cloud Computing for Complex Performance Codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin
This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.
NASA Astrophysics Data System (ADS)
Myre, Joseph M.
Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special-purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large-scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general-purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH, a framework for reducing the complexity of programming heterogeneous computer systems; 2) geophysical inversion routines which can be used to characterize physical systems; and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with the means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
Nonlinear Analysis of a Bolted Marine Riser Connector Using NASTRAN Substructuring
NASA Technical Reports Server (NTRS)
Fox, G. L.
1984-01-01
Results of an investigation of the behavior of a bolted, flange-type marine riser connector are reported. The method used to account for the nonlinear effect of connector separation due to bolt preload and axial tension load is described. The automated multilevel substructuring capability of COSMIC/NASTRAN was employed at considerable savings in computer run time. Simplified formulas for computer resources, i.e., computer run times for the modules SDCOMP, FBS, and MPYAD, as well as disk storage space, are presented. Actual run-time data on a VAX-11/780 are compared with the formulas presented.
Scalable computing for evolutionary genomics.
Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert
2012-01-01
Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster, and pipeline, in a few steps. This allows researchers to scale-up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages, of interest to evolutionary biology, are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software. Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. Next to the downloadable BioNode images, we provide tutorials online, which empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives, on creating and building such images.
Fingerprinting Communication and Computation on HPC Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peisert, Sean
2010-06-02
How do we identify what is actually running on high-performance computing systems? Names of binaries, dynamic libraries loaded, or other elements in a submission to a batch queue can give clues, but binary names can be changed, and libraries provide limited insight and resolution on the code being run. In this paper, we present a method for "fingerprinting" code running on HPC machines using elements of communication and computation. We then discuss how that fingerprint can be used to determine if the code is consistent with certain other types of codes, what a user usually runs, or what the user requested an allocation to do. In some cases, our techniques enable us to fingerprint HPC codes using runtime MPI data with a high degree of accuracy.
SARAH 3.2: Dirac gauginos, UFO output, and more
NASA Astrophysics Data System (ADS)
Staub, Florian
2013-07-01
SARAH is a Mathematica package optimized for the fast, efficient and precise study of supersymmetric models beyond the MSSM: a new model can be defined in a short form and all vertices are derived. This allows SARAH to create model files for FeynArts/FormCalc, CalcHep/CompHep and WHIZARD/O'Mega. The newest version of SARAH now provides the possibility to create model files in the UFO format which is supported by MadGraph 5, MadAnalysis 5, GoSam, and soon by Herwig++. Furthermore, SARAH also calculates the mass matrices, RGEs and 1-loop corrections to the mass spectrum. This information is used to write source code for SPheno in order to create a precision spectrum generator for the given model. This spectrum-generator-generator functionality as well as the output of WHIZARD and CalcHep model files has seen further improvement in this version. Also models including Dirac gauginos are supported with the new version of SARAH, and additional checks for the consistency of the implementation of new models have been created. Program summaryProgram title:SARAH Catalogue identifier: AEIB_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIB_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 322 411 No. of bytes in distributed program, including test data, etc.: 3 629 206 Distribution format: tar.gz Programming language: Mathematica. Computer: All for which Mathematica is available. Operating system: All for which Mathematica is available. Classification: 11.1, 11.6. Catalogue identifier of previous version: AEIB_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 808 Does the new version supersede the previous version?: Yes, the new version includes all known features of the previous version but also provides the new features mentioned below. Nature of problem: To use Madgraph for new models it is necessary to provide the corresponding model files which include all information about the interactions of the model. However, the derivation of the vertices for a given model and putting those into model files which can be used with Madgraph is usually very time consuming. Dirac gauginos are not present in the minimal supersymmetric standard model (MSSM) or many extensions of it. Dirac mass terms for vector superfields lead to new structures in the supersymmetric (SUSY) Lagrangian (bilinear mass term between gaugino and matter fermion as well as new D-terms) and also modify the SUSY renormalization group equations (RGEs). The Dirac character of gauginos can change the collider phenomenology. In addition, they come with an extended Higgs sector for which a precise calculation of the 1-loop masses has not happened so far. Solution method: SARAH calculates the complete Lagrangian for a given model whose gauge sector can be any direct product of SU(N) gauge groups. The chiral superfields can transform as any irreducible representation with respect to these gauge groups and it is possible to handle an arbitrary number of symmetry breakings or particle rotations. Also the gauge fixing is automatically added. Using this information, SARAH derives all vertices for a model. These vertices can be exported to model files in the UFO format, which is supported by Madgraph and other codes like GoSam, MadAnalysis or ALOHA. The user can also study models with Dirac gauginos.
In that case SARAH includes all possible terms in the Lagrangian stemming from the new structures and can also calculate the RGEs. The entire impact of these terms is then taken into account in the output of SARAH to UFO, CalcHep, WHIZARD, FeynArts and SPheno. Reasons for new version: SARAH provides, with this version, the possibility of creating model files in the UFO format. The UFO format is supposed to become a standard format for model files which should be supported by many different tools in the future. Also models with Dirac gauginos were not supported in earlier versions. Summary of revisions: Support of models with Dirac gauginos. Output of model files in the UFO format, speed improvement in the output of WHIZARD model files, CalcHep output supports the internal diagonalization of mass matrices, output of control files for LHPC spectrum plotter, support of generalized PDG numbering scheme PDG.IX, improvement of the calculation of the decay widths and branching ratios with SPheno, the calculation of new low energy observables are added to the SPheno output, the handling of gauge fixing terms has been significantly simplified. Restrictions: SARAH can only derive the Lagrangian in an automatized way for N=1 SUSY models, but not for those with more SUSY generators. Furthermore, SARAH supports only renormalizable operators in the output of model files in the UFO format and also for CalcHep, FeynArts and WHIZARD. Also color sextets are not yet included in the model files for Monte Carlo tools. Dimension 5 operators are only supported in the calculation of the RGEs and mass matrices. Unusual features: SARAH does not need the Lagrangian of a model as input to calculate the vertices. The gauge structure, particle and content and superpotential as well as rotations stemming from gauge symmetry breaking are sufficient. All further information is derived by SARAH on its own. Therefore, the model files are very short and the implementation of new models is fast and easy. In addition, the implementation of a model can be checked for physical and formal consistency. In addition, SARAH can generate Fortran code for a full 1-loop analysis of the mass spectrum in the context for Dirac gauginos. Running time: Measured CPU time for the evaluation of the MSSM using a Lenovo Thinkpad X220 with i7 processor (2.53 GHz). Calculating the complete Lagrangian: 9 s. Calculating all vertices: 51 s. Output of the UFO model files: 49 s.
Robot computer problem solving system
NASA Technical Reports Server (NTRS)
Becker, J. D.; Merriam, E. W.
1974-01-01
The conceptual, experimental, and practical phases of developing a robot computer problem solving system are outlined. Robot intelligence, conversion of the programming language SAIL to run under the THNEX monitor, and the use of the network to run several cooperating jobs at different sites are discussed.
Active Nodal Task Seeking for High-Performance, Ultra-Dependable Computing
1994-07-01
implementation. Figure 1 shows a hardware organization of ANTS: stand-alone computing nodes inter - connected by buses. 2.1 Run Time Partitioning The...nodes in 14 respond to changing loads [27] or system reconfiguration [26]. Existing techniques are all source-initiated or server-initiated [27]. 5.1...short-running task segments. The task segments must be short-running in order that processors will become avalable often enough to satisfy changing
ERIC Educational Resources Information Center
Calinger, Ronald, Ed.
This book brings together papers by scholars from around the globe on the historiography and history of mathematics and their integration with mathematical pedagogy. Of the three articles in Part 1, "Historiography and Sources", one identifies research trends in the history of mathematics, the second discusses the centrality of problems, and the…
ERIC Educational Resources Information Center
Glazerman, Steven; Seif, Elizabeth; Baxter, Gail
2008-01-01
This report examines the career trajectories of those who have successfully completed the Passport to Teaching certification offered by the American Board for Certification of Teacher Excellence (ABCTE) during its first years of existence. To elicit information on the career choices of Passport alumni, Mathematica Policy Research, Inc. (MPR)…
ERIC Educational Resources Information Center
Ross, Christine; Sama-Miller, Emily; Roberts, Lily
2018-01-01
The "Integrated Approaches to Supporting Child Development and Improving Family Economic Security" project was conducted by Mathematica Policy Research and Northwestern University for the Office of Planning, Research and Evaluation (OPRE), in the Administration for Children and Families (ACF) at the U.S. Department of Health and Human…
Restricted Closed Shell Hartree Fock Roothaan Matrix Method Applied to Helium Atom Using Mathematica
ERIC Educational Resources Information Center
Acosta, César R.; Tapia, J. Alejandro; Cab, César
2014-01-01
Slater type orbitals were used to construct the overlap and the Hamiltonian core matrices; we also found the values of the bi-electron repulsion integrals. The Hartree Fock Roothaan approximation process starts with setting an initial guess value for the elements of the density matrix; with these matrices we constructed the initial Fock matrix.…
The Roles of Visualization and Symbolism in the Potential and Actual Infinity of the Limit Process
ERIC Educational Resources Information Center
Kidron, Ivy; Tall, David
2015-01-01
A teaching experiment-using Mathematica to investigate the convergence of sequence of functions visually as a sequence of objects (graphs) converging onto a fixed object (the graph of the limit function)-is here used to analyze how the approach can support the dynamic blending of visual and symbolic representations that has the potential to lead…
Approximation of the exponential integral (well function) using sampling methods
NASA Astrophysics Data System (ADS)
Baalousha, Husam Musa
2015-04-01
Exponential integral (also known as well function) is often used in hydrogeology to solve Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral. The new approach is based on sampling methods. Three different sampling methods; Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH) have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of sampling methods were compared with results obtained by Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error RMSE of OA was in the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
Parallel computing for automated model calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.
2002-07-29
Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example magnitude and timing of stream flow peak). An automated calibration process that allows real time updating of data/models, allowing scientists to focus effort on improving models is needed. We are in the process of building a fully featured multi objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been generically, but our focus has been on natural resources model calibration. Somore » far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, only need a small amount of input data and only output a small amount of statistical information for each calibration run. A typical auto calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two Auto Calibration prototypes and are currently designing a more feature rich tool. Our prototypes have focused on running the calibration in a distributed computing cross platform environment. They allow incorporation of?smart? calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.« less
The Impact and Promise of Open-Source Computational Material for Physics Teaching
NASA Astrophysics Data System (ADS)
Christian, Wolfgang
2017-01-01
A computer-based modeling approach to teaching must be flexible because students and teachers have different skills and varying levels of preparation. Learning how to run the ``software du jour'' is not the objective for integrating computational physics material into the curriculum. Learning computational thinking, how to use computation and computer-based visualization to communicate ideas, how to design and build models, and how to use ready-to-run models to foster critical thinking is the objective. Our computational modeling approach to teaching is a research-proven pedagogy that predates computers. It attempts to enhance student achievement through the Modeling Cycle. This approach was pioneered by Robert Karplus and the SCIS Project in the 1960s and 70s and later extended by the Modeling Instruction Program led by Jane Jackson and David Hestenes at Arizona State University. This talk describes a no-cost open-source computational approach aligned with a Modeling Cycle pedagogy. Our tools, curricular material, and ready-to-run examples are freely available from the Open Source Physics Collection hosted on the AAPT-ComPADRE digital library. Examples will be presented.
Colt: an experiment in wormhole run-time reconfiguration
NASA Astrophysics Data System (ADS)
Bittner, Ray; Athanas, Peter M.; Musgrove, Mark
1996-10-01
Wormhole run-time reconfiguration (RTR) is an attempt to create a refined computing paradigm for high performance computational tasks. By combining concepts from field programmable gate array (FPGA) technologies with data flow computing, the Colt/Stallion architecture achieves high utilization of hardware resources, and facilitates rapid run-time reconfiguration. Targeted mainly at DSP-type operations, the Colt integrated circuit -- a prototype wormhole RTR device -- compares favorably to contemporary DSP alternatives in terms of silicon area consumed per unit computation and in computing performance. Although emphasis has been placed on signal processing applications, general purpose computation has not been overlooked. Colt is a prototype that defines an architecture not only at the chip level but also in terms of an overall system design. As this system is realized, the concept of wormhole RTR will be applied to numerical computation and DSP applications including those common to image processing, communications systems, digital filters, acoustic processing, real-time control systems and simulation acceleration.
Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Goodrich, John W.; Dyson, Rodger W.
1999-01-01
The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new, algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that have resulted from this work. A review of computational aeroacoustics has recently been given by Lele.
Virtualization and cloud computing in dentistry.
Chow, Frank; Muftu, Ali; Shorter, Richard
2014-01-01
The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer with a virtual machine (i.e., virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity administration, and management) since only one physical computer is needed and running. This virtualization platform is the basis for cloud computing. It has expanded into areas of server and storage virtualization. One of the commonly used dental storage systems is cloud storage. Patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computer costs continue to increase, so too will the need for more storage and processing power. Virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article will provide some useful information on current uses of cloud computing.
Framework for architecture-independent run-time reconfigurable applications
NASA Astrophysics Data System (ADS)
Lehn, David I.; Hudson, Rhett D.; Athanas, Peter M.
2000-10-01
Configurable Computing Machines (CCMs) have emerged as a technology with the computational benefits of custom ASICs as well as the flexibility and reconfigurability of general-purpose microprocessors. Significant effort from the research community has focused on techniques to move this reconfigurability from a rapid application development tool to a run-time tool. This requires the ability to change the hardware design while the application is executing and is known as Run-Time Reconfiguration (RTR). Widespread acceptance of run-time reconfigurable custom computing depends upon the existence of high-level automated design tools. Such tools must reduce the designers effort to port applications between different platforms as the architecture, hardware, and software evolves. A Java implementation of a high-level application framework, called Janus, is presented here. In this environment, developers create Java classes that describe the structural behavior of an application. The framework allows hardware and software modules to be freely mixed and interchanged. A compilation phase of the development process analyzes the structure of the application and adapts it to the target platform. Janus is capable of structuring the run-time behavior of an application to take advantage of the memory and computational resources available.
Progress in Machine Learning Studies for the CMS Computing Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonacorsi, Daniele; Kuznetsov, Valentin; Magini, Nicolo
Here, computing systems for LHC experiments developed together with Grids worldwide. While a complete description of the original Grid-based infrastructure and services for LHC experiments and its recent evolutions can be found elsewhere, it is worth to mention here the scale of the computing resources needed to fulfill the needs of LHC experiments in Run-1 and Run-2 so far.
Progress in Machine Learning Studies for the CMS Computing Infrastructure
Bonacorsi, Daniele; Kuznetsov, Valentin; Magini, Nicolo; ...
2017-12-06
Here, computing systems for LHC experiments developed together with Grids worldwide. While a complete description of the original Grid-based infrastructure and services for LHC experiments and its recent evolutions can be found elsewhere, it is worth to mention here the scale of the computing resources needed to fulfill the needs of LHC experiments in Run-1 and Run-2 so far.
Multitasking the code ARC3D. [for computational fluid dynamics
NASA Technical Reports Server (NTRS)
Barton, John T.; Hsiung, Christopher C.
1986-01-01
The CRAY multitasking system was developed in order to utilize all four processors and sharply reduce the wall clock run time. This paper describes the techniques used to modify the computational fluid dynamics code ARC3D for this run and analyzes the achieved speedup. The ARC3D code solves either the Euler or thin-layer N-S equations using an implicit approximate factorization scheme. Results indicate that multitask processing can be used to achieve wall clock speedup factors of over three times, depending on the nature of the program code being used. Multitasking appears to be particularly advantageous for large-memory problems running on multiple CPU computers.
NASA Technical Reports Server (NTRS)
Eberhardt, D. S.; Baganoff, D.; Stevens, K.
1984-01-01
Implicit approximate-factored algorithms have certain properties that are suitable for parallel processing. A particular computational fluid dynamics (CFD) code, using this algorithm, is mapped onto a multiple-instruction/multiple-data-stream (MIMD) computer architecture. An explanation of this mapping procedure is presented, as well as some of the difficulties encountered when trying to run the code concurrently. Timing results are given for runs on the Ames Research Center's MIMD test facility which consists of two VAX 11/780's with a common MA780 multi-ported memory. Speedups exceeding 1.9 for characteristic CFD runs were indicated by the timing results.
Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing
Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon
2011-01-01
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data. PMID:22163811
Design and development of a run-time monitor for multi-core architectures in cloud computing.
Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon
2011-01-01
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Buhl, Fred; Haves, Philip
2008-09-20
EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences,more » identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most amount of run time. This paper provides recommendations to improve EnergyPlus run time from the modeler?s perspective and adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.« less
Compressed quantum computation using a remote five-qubit quantum computer
NASA Astrophysics Data System (ADS)
Hebenstreit, M.; Alsina, D.; Latorre, J. I.; Kraus, B.
2017-05-01
The notion of compressed quantum computation is employed to simulate the Ising interaction of a one-dimensional chain consisting of n qubits using the universal IBM cloud quantum computer running on log2(n ) qubits. The external field parameter that controls the quantum phase transition of this model translates into particular settings of the quantum gates that generate the circuit. We measure the magnetization, which displays the quantum phase transition, on a two-qubit system, which simulates a four-qubit Ising chain, and show its agreement with the theoretical prediction within a certain error. We also discuss the relevant point of how to assess errors when using a cloud quantum computer with a limited amount of runs. As a solution, we propose to use validating circuits, that is, to run independent controlled quantum circuits of similar complexity to the circuit of interest.
Self-Scheduling Parallel Methods for Multiple Serial Codes with Application to WOPWOP
NASA Technical Reports Server (NTRS)
Long, Lyle N.; Brentner, Kenneth S.
2000-01-01
This paper presents a scheme for efficiently running a large number of serial jobs on parallel computers. Two examples are given of computer programs that run relatively quickly, but often they must be run numerous times to obtain all the results needed. It is very common in science and engineering to have codes that are not massive computing challenges in themselves, but due to the number of instances that must be run, they do become large-scale computing problems. The two examples given here represent common problems in aerospace engineering: aerodynamic panel methods and aeroacoustic integral methods. The first example simply solves many systems of linear equations. This is representative of an aerodynamic panel code where someone would like to solve for numerous angles of attack. The complete code for this first example is included in the appendix so that it can be readily used by others as a template. The second example is an aeroacoustics code (WOPWOP) that solves the Ffowcs Williams Hawkings equation to predict the far-field sound due to rotating blades. In this example, one quite often needs to compute the sound at numerous observer locations, hence parallelization is utilized to automate the noise computation for a large number of observers.
ERIC Educational Resources Information Center
Seftor, Neil S.; Mamun, Arif; Schirm, Allen
2009-01-01
This last report from Mathematica's evaluation of Upward Bound analyzes data from the final round of survey and transcript data collection as well as administrative records from the National Student Clearinghouse and the federal Student Financial Aid files. It provides the first estimates of the effects of Upward Bound on postsecondary completion.…
By Stuart G. Baker The program requires Mathematica 7.01.0 The key function is Classify [datalist,options] where datalist={data, genename, dataname} data ={matrix for class 0, matrix for class 1}, matrix is gene expression by specimen genename a list of names of genes, dataname ={name of data set, name of class0, name of class1} |
ERIC Educational Resources Information Center
Cohen, Rhoda; KewalRamani, Angelina; Nogales, Renee; Ohls, James; Sinclair, Michael
2004-01-01
This report describes research that Mathematica Policy Research, Inc. (MPR) has conducted for the U.S. Department of Agriculture (USDA), Food and Nutrition Service (FNS), to develop methods to track the use of "competitive foods" in schools over time. Competitive foods are foods from a la carte cafeteria sales, vending machines, school stores,…
The Plotting Library http://astroplotlib.stsci.edu
NASA Astrophysics Data System (ADS)
Úbeda, L.
2014-05-01
astroplotlib is a multi-language astronomical library of plots. It is a collection of software templates that are useful to create paper-quality figures. All current templates are coded in IDL, some in Python and Mathematica. This free resource supported at Space Telescope Science Institute allows users to download any plot and customize it to their own needs. It is also intended as an educational tool.
On the Analysis and Construction of the Butterfly Curve Using "Mathematica"[R
ERIC Educational Resources Information Center
Geum, Y. H.; Kim, Y. I.
2008-01-01
The butterfly curve was introduced by Temple H. Fay in 1989 and defined by the polar curve r = e[superscript cos theta] minus 2 cos 4 theta plus sin[superscript 5] (theta divided by 12). In this article, we develop the mathematical model of the butterfly curve and analyse its geometric properties. In addition, we draw the butterfly curve and…
ERIC Educational Resources Information Center
Glazerman, Steven; Myers, David
2004-01-01
In October 2002, the Institute of Education Sciences (IES) contracted with Mathematica Policy Research, Inc. (MRP) to help identify issues pertinent to the evaluation of Title I and to propose feasible evaluation design strategies. This design effort took its lead from two sources: (1) the Title I Independent Review Panel (IRP); and (2) a more…
ERIC Educational Resources Information Center
Mathematica Policy Research, Inc., 2015
2015-01-01
This master data collection protocol describes the data that Mathematica collected for the Race to the Top-Early Learning Challenge Study of Tiered Quality Rating and Improvement Systems. This study was conducted for the Department of Education's Institute of Education Sciences. The data were collected from reviews of applications, documents, and…
Hybrid Topological Lie-Hamiltonian Learning in Evolving Energy Landscapes
NASA Astrophysics Data System (ADS)
Ivancevic, Vladimir G.; Reid, Darryn J.
2015-11-01
In this Chapter, a novel bidirectional algorithm for hybrid (discrete + continuous-time) Lie-Hamiltonian evolution in adaptive energy landscape-manifold is designed and its topological representation is proposed. The algorithm is developed within a geometrically and topologically extended framework of Hopfield's neural nets and Haken's synergetics (it is currently designed in Mathematica, although with small changes it could be implemented in Symbolic C++ or any other computer algebra system). The adaptive energy manifold is determined by the Hamiltonian multivariate cost function H, based on the user-defined vehicle-fleet configuration matrix W, which represents the pseudo-Riemannian metric tensor of the energy manifold. Search for the global minimum of H is performed using random signal differential Hebbian adaptation. This stochastic gradient evolution is driven (or, pulled-down) by `gravitational forces' defined by the 2nd Lie derivatives of H. Topological changes of the fleet matrix W are observed during the evolution and its topological invariant is established. The evolution stops when the W-topology breaks down into several connectivity-components, followed by topology-breaking instability sequence (i.e., a series of phase transitions).
Simon, Laurent; Ospina, Juan
2016-07-25
Three-dimensional solute transport was investigated for a spherical device with a release hole. The governing equation was derived using the Fick's second law. A mixed Neumann-Dirichlet condition was imposed at the boundary to represent diffusion through a small region on the surface of the device. The cumulative percentage of drug released was calculated in the Laplace domain and represented by the first term of an infinite series of Legendre and modified Bessel functions of the first kind. Application of the Zakian algorithm yielded the time-domain closed-form expression. The first-order solution closely matched a numerical solution generated by Mathematica(®). The proposed method allowed computation of the characteristic time. A larger surface pore resulted in a smaller effective time constant. The agreement between the numerical solution and the semi-analytical method improved noticeably as the size of the orifice increased. It took four time constants for the device to release approximately ninety-eight of its drug content. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Martin, Russel A.; Ahumada, Albert J., Jr.; Larimer, James O.
1992-01-01
This paper describes the design and operation of a new simulation model for color matrix display development. It models the physical structure, the signal processing, and the visual perception of static displays, to allow optimization of display design parameters through image quality measures. The model is simple, implemented in the Mathematica computer language, and highly modular. Signal processing modules operate on the original image. The hardware modules describe backlights and filters, the pixel shape, and the tiling of the pixels over the display. Small regions of the displayed image can be visualized on a CRT. Visual perception modules assume static foveal images. The image is converted into cone catches and then into luminance, red-green, and blue-yellow images. A Haar transform pyramid separates the three images into spatial frequency and direction-specific channels. The channels are scaled by weights taken from human contrast sensitivity measurements of chromatic and luminance mechanisms at similar frequencies and orientations. Each channel provides a detectability measure. These measures allow the comparison of images displayed on prospective devices and, by that, the optimization of display designs.
Marchetti, Luca; Manca, Vincenzo
2015-04-15
MpTheory Java library is an open-source project collecting a set of objects and algorithms for modeling observed dynamics by means of the Metabolic P (MP) theory, that is, a mathematical theory introduced in 2004 for modeling biological dynamics. By means of the library, it is possible to model biological systems both at continuous and at discrete time. Moreover, the library comprises a set of regression algorithms for inferring MP models starting from time series of observations. To enhance the modeling experience, beside a pure Java usage, the library can be directly used within the most popular computing environments, such as MATLAB, GNU Octave, Mathematica and R. The library is open-source and licensed under the GNU Lesser General Public License (LGPL) Version 3.0. Source code, binaries and complete documentation are available at http://mptheory.scienze.univr.it. luca.marchetti@univr.it, marchetti@cosbi.eu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Reviving the shear-free perfect fluid conjecture in general relativity
NASA Astrophysics Data System (ADS)
Sikhonde, Muzikayise E.; Dunsby, Peter K. S.
2017-12-01
Employing a Mathematica symbolic computer algebra package called xTensor, we present (1+3) -covariant special case proofs of the shear-free perfect fluid conjecture in general relativity. We first present the case where the pressure is constant, and where the acceleration is parallel to the vorticity vector. These cases were first presented in their covariant form by Senovilla et al. We then provide a covariant proof for the case where the acceleration and vorticity vectors are orthogonal, which leads to the existence of a Killing vector along the vorticity. This Killing vector satisfies the new constraint equations resulting from the vanishing of the shear. Furthermore, it is shown that in order for the conjecture to be true, this Killing vector must have a vanishing spatially projected directional covariant derivative along the velocity vector field. This in turn implies the existence of another basic vector field along the direction of the vorticity for the conjecture to hold. Finally, we show that in general, there exists a basic vector field parallel to the acceleration for which the conjecture is true.
Counterfactual quantum computation through quantum interrogation
NASA Astrophysics Data System (ADS)
Hosten, Onur; Rakher, Matthew T.; Barreiro, Julio T.; Peters, Nicholas A.; Kwiat, Paul G.
2006-02-01
The logic underlying the coherent nature of quantum information processing often deviates from intuitive reasoning, leading to surprising effects. Counterfactual computation constitutes a striking example: the potential outcome of a quantum computation can be inferred, even if the computer is not run. Relying on similar arguments to interaction-free measurements (or quantum interrogation), counterfactual computation is accomplished by putting the computer in a superposition of `running' and `not running' states, and then interfering the two histories. Conditional on the as-yet-unknown outcome of the computation, it is sometimes possible to counterfactually infer information about the solution. Here we demonstrate counterfactual computation, implementing Grover's search algorithm with an all-optical approach. It was believed that the overall probability of such counterfactual inference is intrinsically limited, so that it could not perform better on average than random guesses. However, using a novel `chained' version of the quantum Zeno effect, we show how to boost the counterfactual inference probability to unity, thereby beating the random guessing limit. Our methods are general and apply to any physical system, as illustrated by a discussion of trapped-ion systems. Finally, we briefly show that, in certain circumstances, counterfactual computation can eliminate errors induced by decoherence.
4273π: Bioinformatics education on low cost ARM hardware
2013-01-01
Background Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. Results We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012–2013. Conclusions 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost. PMID:23937194
4273π: bioinformatics education on low cost ARM hardware.
Barker, Daniel; Ferrier, David Ek; Holland, Peter Wh; Mitchell, John Bo; Plaisier, Heleen; Ritchie, Michael G; Smart, Steven D
2013-08-12
Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012-2013. 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost.
Statistical fingerprinting for malware detection and classification
Prowell, Stacy J.; Rathgeb, Christopher T.
2015-09-15
A system detects malware in a computing architecture with an unknown pedigree. The system includes a first computing device having a known pedigree and operating free of malware. The first computing device executes a series of instrumented functions that, when executed, provide a statistical baseline that is representative of the time it takes the software application to run on a computing device having a known pedigree. A second computing device executes a second series of instrumented functions that, when executed, provides an actual time that is representative of the time the known software application runs on the second computing device. The system detects malware when there is a difference in execution times between the first and the second computing devices.
HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies
NASA Astrophysics Data System (ADS)
De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.
2017-10-01
PanDA - Production and Distributed Analysis Workload Management System has been developed to address ATLAS experiment at LHC data processing and analysis challenges. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects to use PanDA beyond HEP and Grid has drawn attention from other compute intensive sciences such as bioinformatics. Recent advances of Next Generation Genome Sequencing (NGS) technology led to increasing streams of sequencing data that need to be processed, analysed and made available for bioinformaticians worldwide. Analysis of genomes sequencing data using popular software pipeline PALEOMIX can take a month even running it on the powerful computer resource. In this paper we will describe the adaptation the PALEOMIX pipeline to run it on a distributed computing environment powered by PanDA. To run pipeline we split input files into chunks which are run separately on different nodes as separate inputs for PALEOMIX and finally merge output file, it is very similar to what it done by ATLAS to process and to simulate data. We dramatically decreased the total walltime because of jobs (re)submission automation and brokering within PanDA. Using software tools developed initially for HEP and Grid can reduce payload execution time for Mammoths DNA samples from weeks to days.
Running R Statistical Computing Environment Software on the Peregrine
for the development of new statistical methodologies and enjoys a large user base. Please consult the distribution details. Natural language support but running in an English locale R is a collaborative project programming paradigms to better leverage modern HPC systems. The CRAN task view for High Performance Computing
Benhammouda, Brahim
2016-01-01
Since 1980, the Adomian decomposition method (ADM) has been extensively used as a simple powerful tool that applies directly to solve different kinds of nonlinear equations including functional, differential, integro-differential and algebraic equations. However, for differential-algebraic equations (DAEs) the ADM is applied only in four earlier works. There, the DAEs are first pre-processed by some transformations like index reductions before applying the ADM. The drawback of such transformations is that they can involve complex algorithms, can be computationally expensive and may lead to non-physical solutions. The purpose of this paper is to propose a novel technique that applies the ADM directly to solve a class of nonlinear higher-index Hessenberg DAEs systems efficiently. The main advantage of this technique is that; firstly it avoids complex transformations like index reductions and leads to a simple general algorithm. Secondly, it reduces the computational work by solving only linear algebraic systems with a constant coefficient matrix at each iteration, except for the first iteration where the algebraic system is nonlinear (if the DAE is nonlinear with respect to the algebraic variable). To demonstrate the effectiveness of the proposed technique, we apply it to a nonlinear index-three Hessenberg DAEs system with nonlinear algebraic constraints. This technique is straightforward and can be programmed in Maple or Mathematica to simulate real application problems.
High Resolution Nature Runs and the Big Data Challenge
NASA Technical Reports Server (NTRS)
Webster, W. Phillip; Duffy, Daniel Q.
2015-01-01
NASA's Global Modeling and Assimilation Office at Goddard Space Flight Center is undertaking a series of very computationally intensive Nature Runs and a downscaled reanalysis. The nature runs use the GEOS-5 as an Atmospheric General Circulation Model (AGCM) while the reanalysis uses the GEOS-5 in Data Assimilation mode. This paper will present computational challenges from three runs, two of which are AGCM and one is downscaled reanalysis using the full DAS. The nature runs will be completed at two surface grid resolutions, 7 and 3 kilometers and 72 vertical levels. The 7 km run spanned 2 years (2005-2006) and produced 4 PB of data while the 3 km run will span one year and generate 4 BP of data. The downscaled reanalysis (MERRA-II Modern-Era Reanalysis for Research and Applications) will cover 15 years and generate 1 PB of data. Our efforts to address the big data challenges of climate science, we are moving toward a notion of Climate Analytics-as-a-Service (CAaaS), a specialization of the concept of business process-as-a-service that is an evolving extension of IaaS, PaaS, and SaaS enabled by cloud computing. In this presentation, we will describe two projects that demonstrate this shift. MERRA Analytic Services (MERRA/AS) is an example of cloud-enabled CAaaS. MERRA/AS enables MapReduce analytics over MERRA reanalysis data collection by bringing together the high-performance computing, scalable data management, and a domain-specific climate data services API. NASA's High-Performance Science Cloud (HPSC) is an example of the type of compute-storage fabric required to support CAaaS. The HPSC comprises a high speed Infinib and network, high performance file systems and object storage, and a virtual system environments specific for data intensive, science applications. These technologies are providing a new tier in the data and analytic services stack that helps connect earthbound, enterprise-level data and computational resources to new customers and new mobility-driven applications and modes of work. In our experience, CAaaS lowers the barriers and risk to organizational change, fosters innovation and experimentation, and provides the agility required to meet our customers' increasing and changing needs
Mount, D W; Conrad, B
1986-01-01
We have previously described programs for a variety of types of sequence analysis (1-4). These programs have now been integrated into a single package. They are written in the standard C programming language and run on virtually any computer system with a C compiler, such as the IBM/PC and other computers running under the MS/DOS and UNIX operating systems. The programs are widely distributed and may be obtained from the authors as described below. PMID:3753780
NASA Astrophysics Data System (ADS)
Decyk, Viktor K.; Dauger, Dean E.
We have constructed a parallel cluster consisting of Apple Macintosh G4 computers running both Classic Mac OS as well as the Unix-based Mac OS X, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. Unlike other Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.
Casero-Alonso, V; López-Fidalgo, J; Torsney, B
2017-01-01
Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
User's instructions for the cardiovascular Walters model
NASA Technical Reports Server (NTRS)
Croston, R. C.
1973-01-01
The model is a combined, steady-state cardiovascular and thermal model. It was originally developed for interactive use, but was converted to batch mode simulation for the Sigma 3 computer. The model has the purpose to compute steady-state circulatory and thermal variables in response to exercise work loads and environmental factors. During a computer simulation run, several selected variables are printed at each time step. End conditions are also printed at the completion of the run.
A Quantum Computing Approach to Model Checking for Advanced Manufacturing Problems
2014-07-01
amount of time. In summary, the tool we developed succeeded in allowing us to produce good solutions for optimization problems that did not fit ...We compared the value of the objective obtained in each run with the known optimal value, and used this information to compute the probability of ...success for each given instance. Then we used this information to compute the expected number of repetitions (or runs) needed to obtain the optimal
NASA Astrophysics Data System (ADS)
Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock
2017-01-01
The suites of numerical models used for simulating climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach, i.e. carrying out climate model simulations on commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Service (AWS) EC2, the cloud computing environment by Amazon.com, Inc. StarCluster is used to create virtual computing cluster on the AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50% and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup becomes nearly unchanged.
Decidability of formal theories and hyperincursivity theory
NASA Astrophysics Data System (ADS)
Grappone, Arturo G.
2000-05-01
This paper shows the limits of the Proof Standard Theory (briefly, PST) and gives some ideas of how to build a proof anticipatory theory (briefly, PAT) that has no such limits. Also, this paper considers that Gödel's proof of the undecidability of Principia Mathematica formal theory is not valid for axiomatic theories that use a PAT to build their proofs because the (hyper)incursive functions are self-representable.
Image Analysis Using Quantum Entropy Scale Space and Diffusion Concepts
2009-11-01
images using a combination of analytic methods and prototype Matlab and Mathematica programs. We investigated concepts of generalized entropy and...Schmidt strength from quantum logic gate decomposition. This form of entropy gives a measure of the nonlocal content of an entangling logic gate...11 We recall that the Schmidt number is an indicator of entanglement , but not a measure of entanglement . For instance, let us compare
Effective Measurement of Reliability of Repairable USAF Systems
2012-09-01
Hansen presented a course, Concepts and Models for Repairable Systems Reliability, at the 2009 Centro de Investigacion en Mathematicas ( CIMAT ). The...recurrent event by calculating the mean quantity of recurrent events of the population of systems at risk at that point in time. The number of systems at... risk is the number of systems that are operating and providing information. [9] Information can be obscured by data censoring and truncation. One
Digges, Leonard (c 1520-c 1559) and Digges, Thomas (1545/6-95)
NASA Astrophysics Data System (ADS)
Murdin, P.
2000-11-01
Both were English astronomers, opticians and military engineers. Thomas was born in Wotton, Kent, England, and incorporated his father's work on optics and ballistics into his own publications. He was tutored by JOHN DEE. In 1573 Thomas Digges published Alae seu Scalae Mathematicae, a work on the position of the supernova of 1572, showing it had no parallax, i.e. was at a great distance, beyond t...
NASA Astrophysics Data System (ADS)
Gencoglu, Muharrem Tuncay; Baskonus, Haci Mehmet; Bulut, Hasan
2017-01-01
The main aim of this manuscript is to obtain numerical solutions for the nonlinear model of interpersonal relationships with time fractional derivative. The variational iteration method is theoretically implemented and numerically conducted only to yield the desired solutions. Numerical simulations of desired solutions are plotted by using Wolfram Mathematica 9. The authors would like to thank the reviewers for their comments that help improve the manuscript.
ERIC Educational Resources Information Center
Chaplin, Duncan; Bleeker, Martha; Booker, Kevin
2010-01-01
Roads to Success (RTS) is a school and career planning program designed to be implemented for 45 minutes per week in grades 7 through 12. Researchers at Mathematica Policy Research used a random assignment design to estimate the impacts of receiving RTS in grades 7 and 8. More than half of the students in these schools were eligible for free or…
Running Batch Jobs on Peregrine | High-Performance Computing | NREL
Using Resource Feature to Request Different Node Types Peregrine has several types of compute nodes incompatibility and get the job running. More information about requesting different node types in Peregrine is available. Queues In order to meet the needs of different types of jobs, nodes on Peregrine are available
Host-Nation Operations: Soldier Training on Governance (HOST-G) Training Support Package
2011-07-01
restricted this webpage from running scripts or ActiveX controls that could access your computer. Click here for options…” • If this occurs, select that...scripts and ActiveX controls can be useful, but active content might also harm your computer. Are you sure you want to let this file run active
24 CFR 15.110 - What fees will HUD charge?
Code of Federal Regulations, 2013 CFR
2013-04-01
... duplicating machinery. The computer run time includes the cost of operating a central processing unit for that... Applies. (6) Computer run time (includes only mainframe search time not printing) The direct cost of... estimated fee is more than $250.00 or you have a history of failing to pay FOIA fees to HUD in a timely...
NASA Technical Reports Server (NTRS)
Roberts, Floyd E., III
1994-01-01
Software provides for control and acquisition of data from optical pyrometer. There are six individual programs in PYROLASER package. Provides quick and easy way to set up, control, and program standard Pyrolaser. Temperature and emisivity measurements either collected as if Pyrolaser in manual operating mode or displayed on real-time strip charts and stored in standard spreadsheet format for posttest analysis. Shell supplied to allow macros, which are test-specific, added to system easily. Written using Labview software for use on Macintosh-series computers running System 6.0.3 or later, Sun Sparc-series computers running Open-Windows 3.0 or MIT's X Window System (X11R4 or X11R5), and IBM PC or compatible computers running Microsoft Windows 3.1 or later.
NASA Technical Reports Server (NTRS)
1972-01-01
The IDAPS (Image Data Processing System) is a user-oriented, computer-based language and control system which provides a framework or standard for implementing image data processing applications, simplifies the set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation, streamlines operation of the image processing facility, and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and calls the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators which would otherwise overflow core storage.
Identification of Program Signatures from Cloud Computing System Telemetry Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, Nicole M.; Greaves, Mark T.; Smith, William P.
Malicious cloud computing activity can take many forms, including running unauthorized programs in a virtual environment. Detection of these malicious activities while preserving the privacy of the user is an important research challenge. Prior work has shown the potential viability of using cloud service billing metrics as a mechanism for proxy identification of malicious programs. Previously this novel detection method has been evaluated in a synthetic and isolated computational environment. In this paper we demonstrate the ability of billing metrics to identify programs in an active cloud computing environment, including multiple virtual machines running on the same hypervisor. The open source cloud computing platform OpenStack is used for private cloud management at Pacific Northwest National Laboratory. OpenStack provides a billing tool (Ceilometer) to collect system telemetry measurements. We identify four different programs running on four virtual machines under the same cloud user account. Programs were identified with up to 95% accuracy. This accuracy is dependent on the distinctiveness of telemetry measurements for the specific programs we tested. Future work will examine the scalability of this approach for a larger selection of programs to better understand the uniqueness needed to identify a program. Additionally, future work should address the separation of signatures when multiple programs are running on the same virtual machine.
Providing Assistive Technology Applications as a Service Through Cloud Computing.
Mulfari, Davide; Celesti, Antonio; Villari, Massimo; Puliafito, Antonio
2015-01-01
Users with disabilities interact with Personal Computers (PCs) using Assistive Technology (AT) software solutions. Such applications run on a PC that a person with a disability commonly uses. However, the configuration of AT applications is not trivial at all, especially whenever the user needs to work on a PC that does not allow him/her to rely on his/her AT tools (e.g., at work, at university, in an Internet point). In this paper, we discuss how cloud computing provides a valid technological solution to enhance such a scenario. With the emergence of cloud computing, many applications are executed on top of virtual machines (VMs). Virtualization allows us to achieve a software implementation of a real computer able to execute a standard operating system and any kind of application. In this paper we propose to build personalized VMs running AT programs and settings. By using remote desktop technology, our solution enables users to control their customized virtual desktop environment by means of an HTML5-based web interface running on any computer equipped with a browser, wherever they are.
MATH77 - A LIBRARY OF MATHEMATICAL SUBPROGRAMS FOR FORTRAN 77, RELEASE 4.0
NASA Technical Reports Server (NTRS)
Lawson, C. L.
1994-01-01
MATH77 is a high quality library of ANSI FORTRAN 77 subprograms implementing contemporary algorithms for the basic computational processes of science and engineering. The portability of MATH77 meets the needs of present-day scientists and engineers who typically use a variety of computing environments. Release 4.0 of MATH77 contains 454 user-callable and 136 lower-level subprograms. Usage of the user-callable subprograms is described in 69 sections of the 416 page users' manual. The topics covered by MATH77 are indicated by the following list of chapter titles in the users' manual: Mathematical Functions, Pseudo-random Number Generation, Linear Systems of Equations and Linear Least Squares, Matrix Eigenvalues and Eigenvectors, Matrix Vector Utilities, Nonlinear Equation Solving, Curve Fitting, Table Look-Up and Interpolation, Definite Integrals (Quadrature), Ordinary Differential Equations, Minimization, Polynomial Rootfinding, Finite Fourier Transforms, Special Arithmetic, Sorting, Library Utilities, Character-based Graphics, and Statistics. Besides subprograms that are adaptations of public domain software, MATH77 contains a number of unique packages developed by the authors of MATH77. Instances of the latter type include (1) adaptive quadrature, allowing for exceptional generality in multidimensional cases, (2) the ordinary differential equations solver used in spacecraft trajectory computation for JPL missions, (3) univariate and multivariate table look-up and interpolation, allowing for "ragged" tables, and providing error estimates, and (4) univariate and multivariate derivative-propagation arithmetic. MATH77 release 4.0 is a subroutine library which has been carefully designed to be usable on any computer system that supports the full ANSI standard FORTRAN 77 language. It has been successfully implemented on a CRAY Y/MP computer running UNICOS, a UNISYS 1100 computer running EXEC 8, a DEC VAX series computer running VMS, a Sun4 series computer running SunOS, a Hewlett-Packard 720 computer running HP-UX, a Macintosh computer running MacOS, and an IBM PC compatible computer running MS-DOS. Accompanying the library is a set of 196 "demo" drivers that exercise all of the user-callable subprograms. The FORTRAN source code for MATH77 comprises 109K lines of code in 375 files with a total size of 4.5Mb. The demo drivers comprise 11K lines of code with a total size of 418K. Forty-four percent of the lines of the library code and 29% of those in the demo code are comment lines. The standard distribution medium for MATH77 is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 9-track 1600 BPI magnetic tape in VAX BACKUP format and a TK50 tape cartridge in VAX BACKUP format. An electronic copy of the documentation is included on the distribution media. Previous releases of MATH77 have been used over a number of years in a variety of JPL applications. MATH77 Release 4.0 was completed in 1992. MATH77 is a copyrighted work with all copyright vested in NASA.
Computer Simulation of Great Lakes-St. Lawrence Seaway Icebreaker Requirements.
1980-01-01
[List-of-tables residue; the recoverable entries cover the results of simulation Runs No. 1-3 for the Taconite and Oil Can task commands and the predicted icebreaker fleet by home port and period (Sections 6.22-6.24).]
The rid-redundant procedure in C-Prolog
NASA Technical Reports Server (NTRS)
Chen, Huo-Yan; Wah, Benjamin W.
1987-01-01
C-Prolog can conveniently be used for logical inferences on knowledge bases. However, as with many search methods using backward chaining, a large number of redundant computations may be produced in recursive calls. To overcome this problem, the 'rid-redundant' procedure was designed to remove all redundant computations in running multi-recursive procedures. Experimental results obtained for C-Prolog on the Vax 11/780 computer show that there is an order of magnitude improvement in the running time and solvable problem size.
An Upgrade of the Aeroheating Software ''MINIVER''
NASA Technical Reports Server (NTRS)
Louderback, Pierce
2013-01-01
Detailed computational modeling: CFD is often used to create and execute computational domains. Complexity increases when moving from 2D to 3D geometries, and computational time increases as finer grids are used (accuracy). A strong tool, but it takes time to set up and run. MINIVER: Uses theoretical and empirical correlations. Orders of magnitude faster to set up and run. Not as accurate as CFD, but gives reasonable estimations. MINIVER's drawbacks: rigid command-line interface; lackluster, unorganized documentation; no central control, so multiple versions exist and have diverged.
A Functional Description of the Geophysical Data Acquisition System
1990-08-10
...less than 50 SPS nor greater than 250 SPS. Most of the research supported by GDAS has primarily involved two... The SRUN signal from the computer is fed to a retriggerable one-shot multivibrator on the board. SRUN consists of a pulse train that is present when the computer is running. The one-shot output drives the RUN lamp on the front panel. Finally, one pin on the board edge connector is...
Network support for system initiated checkpoints
Chen, Dong; Heidelberger, Philip
2013-01-29
A system, method and computer program product for supporting system initiated checkpoints in parallel computing systems. The system and method generates selective control signals to perform checkpointing of system related data in presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity.
Convergence properties of simple genetic algorithms
NASA Technical Reports Server (NTRS)
Bethke, A. D.; Zeigler, B. P.; Strauss, D. M.
1974-01-01
The essential parameters determining the behaviour of genetic algorithms were investigated. Computer runs were made while systematically varying the parameter values. Results based on the progress curves obtained from these runs are presented along with results based on the variability of the population as the run progresses.
Modeling Subsurface Reactive Flows Using Leadership-Class Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, Richard T; Hammond, Glenn; Lichtner, Peter
2009-01-01
We describe our experiences running PFLOTRAN - a code for simulation of coupled hydro-thermal-chemical processes in variably saturated, non-isothermal, porous media - on leadership-class supercomputers, including initial experiences running on the petaflop incarnation of Jaguar, the Cray XT5 at the National Center for Computational Sciences at Oak Ridge National Laboratory. PFLOTRAN utilizes fully implicit time-stepping and is built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc). We discuss some of the hurdles to 'at scale' performance with PFLOTRAN and the progress we have made in overcoming them on leadership-class computer architectures.
Williams, Paul T
2012-01-01
Current physical activity recommendations assume that different activities can be exchanged to produce the same weight-control benefits so long as total energy expended remains the same (exchangeability premise). To this end, they recommend calculating energy expenditure as the product of the time spent performing each activity and the activity's metabolic equivalents (MET), which may be summed to achieve target levels. The validity of the exchangeability premise was assessed using data from the National Runners' Health Study. Physical activity dose was compared to body mass index (BMI) and body circumferences in 33,374 runners who reported usual distance run and pace, and usual times spent running and other exercises per week. MET hours per day (METhr/d) from running was computed from: a) time and intensity, and b) reported distance run (1.02 MET·hours per km). When computed from time and intensity, the declines (slope ± SE) per METhr/d were significantly greater (P < 10^-15) for running than non-running exercise for BMI (slopes ± SE, male: -0.12 ± 0.00 vs. 0.00 ± 0.00; female: -0.12 ± 0.00 vs. -0.01 ± 0.01 kg/m^2 per METhr/d) and waist circumference (male: -0.28 ± 0.01 vs. -0.07 ± 0.01; female: -0.31 ± 0.01 vs. -0.05 ± 0.01 cm per METhr/d). Reported METhr/d of running was 38% to 43% greater when calculated from time and intensity than distance. Moreover, the declines per METhr/d run were significantly greater when estimated from reported distance for BMI (males: -0.29 ± 0.01; females: -0.27 ± 0.01 kg/m^2 per METhr/d) and waist circumference (males: -0.67 ± 0.02; females: -0.69 ± 0.02 cm per METhr/d) than when computed from time and intensity (cited above). The exchangeability premise was not supported for running vs. non-running exercise. Moreover, distance-based running prescriptions may provide better weight control than time-based prescriptions for running or other activities. Additional longitudinal studies and randomized clinical trials are required to verify these results prospectively.
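The two dose estimates being contrasted are simple arithmetic; a minimal sketch of both conversions follows (Python for concreteness; the example inputs are hypothetical, and only the 1.02 MET·hours per km constant comes from the abstract).

```python
# Two ways of estimating a runner's exercise dose in MET-hours per day,
# following the abstract: (a) time x intensity, (b) reported distance,
# using the stated constant of 1.02 MET-hours per km run.

MET_HOURS_PER_KM = 1.02  # constant quoted in the abstract

def met_hours_from_distance(km_per_week: float) -> float:
    """Dose from reported weekly running distance, expressed per day."""
    return km_per_week * MET_HOURS_PER_KM / 7.0

def met_hours_from_time(hours_per_week: float, mets: float) -> float:
    """Dose from weekly time spent at a given MET intensity, per day."""
    return hours_per_week * mets / 7.0

# Hypothetical example: 40 km/week reported vs. 5 h/week at ~11.5 METs.
print(met_hours_from_distance(40.0))   # ~5.8 METhr/d
print(met_hours_from_time(5.0, 11.5))  # ~8.2 METhr/d, ~40% higher,
                                       # consistent with the 38-43% gap
```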
A PICKSC Science Gateway for enabling the common plasma physicist to run kinetic software
NASA Astrophysics Data System (ADS)
Hu, Q.; Winjum, B. J.; Zonca, A.; Youn, C.; Tsung, F. S.; Mori, W. B.
2017-10-01
Computer simulations offer tremendous opportunities for studying plasmas, ranging from simulations for students that illuminate fundamental educational concepts to research-level simulations that advance scientific knowledge. Nevertheless, there is a significant hurdle to using simulation tools. Users must navigate codes and software libraries, determine how to wrangle output into meaningful plots, and oftentimes confront a significant cyberinfrastructure with powerful computational resources. Science gateways offer a Web-based environment to run simulations without needing to learn or manage the underlying software and computing cyberinfrastructure. We discuss our progress on creating a Science Gateway for the Particle-in-Cell and Kinetic Simulation Software Center that enables users to easily run and analyze kinetic simulations with our software. We envision that this technology could benefit a wide range of plasma physicists, both in the use of our simulation tools as well as in its adaptation for running other plasma simulation software. Supported by NSF under Grant ACI-1339893 and by the UCLA Institute for Digital Research and Education.
Creating a Parallel Version of VisIt for Microsoft Windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitlock, B J; Biagas, K S; Rawson, P L
2011-12-07
VisIt is a popular, free interactive parallel visualization and analysis tool for scientific data. Users can quickly generate visualizations from their data, animate them through time, manipulate them, and save the resulting images or movies for presentations. VisIt was designed from the ground up to work on many scales of computers from modest desktops up to massively parallel clusters. VisIt is comprised of a set of cooperating programs. All programs can be run locally or in client/server mode in which some run locally and some run remotely on compute clusters. The VisIt program most able to harness today's computing power is the VisIt compute engine. The compute engine is responsible for reading simulation data from disk, processing it, and sending results or images back to the VisIt viewer program. In a parallel environment, the compute engine runs several processes, coordinating using the Message Passing Interface (MPI) library. Each MPI process reads some subset of the scientific data and filters the data in various ways to create useful visualizations. By using MPI, VisIt has been able to scale well into the thousands of processors on large computers such as dawn and graph at LLNL. The advent of multicore CPUs has made parallelism the 'new' way to achieve increasing performance. With today's computers having at least 2 cores and in many cases up to 8 and beyond, it is more important than ever to deploy parallel software that can use that computing power not only on clusters but also on the desktop. We have created a parallel version of VisIt for Windows that uses Microsoft's MPI implementation (MSMPI) to process data in parallel on the Windows desktop as well as on a Windows HPC cluster running Microsoft Windows Server 2008. Initial desktop parallel support for Windows was deployed in VisIt 2.4.0. Windows HPC cluster support has been completed and will appear in the VisIt 2.5.0 release. We plan to continue supporting parallel VisIt on Windows so our users will be able to take full advantage of their multicore resources.
NASA Astrophysics Data System (ADS)
Varela Rodriguez, F.
2011-12-01
The control system of each of the four major Experiments at the CERN Large Hadron Collider (LHC) is distributed over up to 160 computers running either Linux or Microsoft Windows. A quick response to abnormal situations of the computer infrastructure is crucial to maximize the physics usage. For this reason, a tool was developed to supervise, identify errors and troubleshoot such a large system. Although the monitoring of the performance of the Linux computers and their processes was available since the first versions of the tool, it is only recently that the software package has been extended to provide similar functionality for the nodes running Microsoft Windows, as this platform is the most commonly used in the LHC detector control systems. In this paper, the architecture and the functionality of the Windows Management Instrumentation (WMI) client developed to provide centralized monitoring of the nodes running different flavours of the Microsoft platform, as well as the interface to the SCADA software of the control systems, are presented. The tool is currently being commissioned by the Experiments and it has already proven to be very efficient in optimizing the running systems and in detecting misbehaving processes or nodes.
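The paper's WMI client and its SCADA interface are not shown here; as a rough sketch of the kind of query such a monitor issues, the following uses the third-party Python wmi package (node names and the memory threshold are hypothetical).

```python
# Minimal sketch of centralized process monitoring over WMI.
# Assumes the third-party "wmi" package (pip install wmi) on Windows;
# node names and the 2 GB threshold are hypothetical, not the CERN tool.
import wmi

def check_node(host: str, mem_limit_bytes: int = 2 * 1024**3):
    """Connect to a remote Windows node and flag memory-heavy processes."""
    conn = wmi.WMI(computer=host)          # remote WMI connection
    for proc in conn.Win32_Process():
        if int(proc.WorkingSetSize) > mem_limit_bytes:
            print(f"{host}: {proc.Name} (pid {proc.ProcessId}) "
                  f"uses {int(proc.WorkingSetSize) / 1e9:.1f} GB")

for node in ["cs-ctrl-01", "cs-ctrl-02"]:  # hypothetical node names
    check_node(node)
```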
Computational steering of GEM based detector simulations
NASA Astrophysics Data System (ADS)
Sheharyar, Ali; Bouhali, Othmane
2017-10-01
Gas based detector R&D relies heavily on full simulation of detectors and their optimization before final prototypes can be built and tested. These simulations, in particular those with complex scenarios such as high detector voltages or gases with larger gains, are computationally intensive and may take several days or weeks to complete. These long-running simulations usually run on high-performance computers in batch mode. If the results lead to unexpected behavior, the simulation might be rerun with different parameters. However, the simulations (or jobs) may have to wait in a queue until they get a chance to run again, because the supercomputer is a shared resource that maintains a queue of other user programs as well and executes them as time and priorities permit. This can result in inefficient resource utilization and an increase in the turnaround time for the scientific experiment. To overcome this issue, monitoring the behavior of a simulation while it is running (live) is essential. In this work, we employ the computational steering technique by coupling the detector simulations with a visualization package named VisIt to enable the exploration of the live data as it is produced by the simulation.
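The steering pattern itself is independent of the tools; a minimal sketch in Python, where file-based polling stands in for the actual VisIt coupling (file names and parameters are hypothetical):

```python
# Minimal sketch of the computational-steering pattern: a long-running
# simulation periodically publishes its state and polls for updated
# parameters instead of running blind until completion. The file-based
# exchange is a stand-in for a real coupling layer such as VisIt's
# in-situ library; names and parameters are hypothetical.
import json, os, time

PARAMS_FILE = "steer_params.json"   # hypothetical control file
STATE_FILE = "steer_state.json"     # hypothetical live-state dump

params = {"voltage": 400.0, "steps": 1000}

for step in range(params["steps"]):
    # ... advance the detector simulation one step here ...
    time.sleep(0.01)  # placeholder for real work

    if step % 100 == 0:
        # publish live state for external visualization
        with open(STATE_FILE, "w") as f:
            json.dump({"step": step, "voltage": params["voltage"]}, f)
        # poll for steering input; adjust parameters mid-run if present
        if os.path.exists(PARAMS_FILE):
            with open(PARAMS_FILE) as f:
                params.update(json.load(f))
```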
CERN openlab: Engaging industry for innovation in the LHC Run 3-4 R&D programme
NASA Astrophysics Data System (ADS)
Girone, M.; Purcell, A.; Di Meglio, A.; Rademakers, F.; Gunne, K.; Pachou, M.; Pavlou, S.
2017-10-01
LHC Run 3 and Run 4 represent an unprecedented challenge for HEP computing in terms of both data volume and complexity. New approaches are needed for how data is collected and filtered, processed, moved, stored and analysed if these challenges are to be met within a realistic budget. To develop innovative techniques, we are fostering relationships with industry leaders. CERN openlab is a unique resource for public-private partnership between CERN and leading Information and Communication Technology (ICT) companies. Its mission is to accelerate the development of cutting-edge solutions to be used by the worldwide HEP community. In 2015, CERN openlab started its phase V with a strong focus on tackling the upcoming LHC challenges. Several R&D programmes are ongoing in the areas of data acquisition, networks and connectivity, data storage architectures, computing provisioning, computing platforms and code optimisation, and data analytics. This paper gives an overview of the various innovative technologies that are currently being explored by CERN openlab V and discusses the long-term strategies pursued by the LHC communities with the help of industry in closing the technological gap in processing and storage needs expected in Run 3 and Run 4.
NASA Technical Reports Server (NTRS)
Yang, Guowei; Pasareanu, Corina S.; Khurshid, Sarfraz
2012-01-01
This paper introduces memoized symbolic execution (Memoise), a novel approach for more efficient application of forward symbolic execution, which is a well-studied technique for systematic exploration of program behaviors based on bounded execution paths. Our key insight is that application of symbolic execution often requires several successive runs of the technique on largely similar underlying problems, e.g., running it once to check a program to find a bug, fixing the bug, and running it again to check the modified program. Memoise introduces a trie-based data structure that stores the key elements of a run of symbolic execution. Maintenance of the trie during successive runs allows re-use of previously computed results of symbolic execution without the need for re-computing them as is traditionally done. Experiments using our prototype embodiment of Memoise show the benefits it holds in various standard scenarios of using symbolic execution, e.g., with iterative deepening of exploration depth, to perform regression analysis, or to enhance coverage.
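The trie keyed on branch decisions is the central data structure here; a minimal sketch of the idea in Python (illustrative only, not the Memoise implementation):

```python
# Minimal sketch of the trie idea behind memoized symbolic execution:
# the result of exploring a path, keyed by the sequence of branch
# decisions taken, is cached so that a later run re-computes only the
# paths it has not seen before. Not the authors' implementation.

class TrieNode:
    def __init__(self):
        self.children = {}   # branch decision -> child TrieNode
        self.result = None   # cached outcome for the path ending here

class PathTrie:
    def __init__(self):
        self.root = TrieNode()

    def lookup_or_run(self, path, explore):
        """Return the cached result for `path` (a tuple of branch
        decisions), calling `explore(path)` only on a cache miss."""
        node = self.root
        for decision in path:
            node = node.children.setdefault(decision, TrieNode())
        if node.result is None:
            node.result = explore(path)   # expensive symbolic execution
        return node.result

trie = PathTrie()
trie.lookup_or_run((True, False, True), lambda p: f"explored {p}")
print(trie.lookup_or_run((True, False, True), lambda p: "never called"))
```

On a successive run over a largely unchanged program, paths whose branch decisions are unaffected hit the cache and need not be re-executed, which is the source of the savings the paper reports.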
Simple, efficient allocation of modelling runs on heterogeneous clusters with MPI
Donato, David I.
2017-01-01
In scientific modelling and computation, the choice of an appropriate method for allocating tasks for parallel processing depends on the computational setting and on the nature of the computation. The allocation of independent but similar computational tasks, such as modelling runs or Monte Carlo trials, among the nodes of a heterogeneous computational cluster is a special case that has not been specifically evaluated previously. A simulation study shows that a method of on-demand (that is, worker-initiated) pulling from a bag of tasks in this case leads to reliably short makespans for computational jobs despite heterogeneity both within and between cluster nodes. A simple reference implementation in the C programming language with the Message Passing Interface (MPI) is provided.
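A minimal sketch of the worker-initiated ("pull") allocation the study evaluates, written here with mpi4py rather than the paper's C reference implementation; the tag values and task list are illustrative:

```python
# Minimal bag-of-tasks sketch with worker-initiated pulling (mpi4py).
# Rank 0 holds the bag; idle workers request the next task, so faster
# nodes naturally take more tasks and the makespan stays short on
# heterogeneous clusters. Run with: mpiexec -n 4 python bag.py
from mpi4py import MPI

TASK, STOP = 1, 2  # message tags (illustrative)
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    tasks = list(range(20))            # the "bag" of modelling runs
    status = MPI.Status()
    stopped = 0
    while stopped < size - 1:
        comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
        worker = status.Get_source()
        if tasks:
            comm.send(tasks.pop(), dest=worker, tag=TASK)
        else:
            comm.send(None, dest=worker, tag=STOP)
            stopped += 1
else:
    status = MPI.Status()
    while True:
        comm.send(None, dest=0)        # "I am idle; give me work"
        task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == STOP:
            break
        # ... perform the modelling run for `task` here ...
```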
Evaluating the Efficacy of the Cloud for Cluster Computation
NASA Technical Reports Server (NTRS)
Knight, David; Shams, Khawaja; Chang, George; Soderstrom, Tom
2012-01-01
Computing requirements vary by industry, and it follows that NASA and other research organizations have computing demands that fall outside the mainstream. While cloud computing made rapid inroads for tasks such as powering web applications, performance issues on highly distributed tasks hindered early adoption for scientific computation. One venture to address this problem is Nebula, NASA's homegrown cloud project tasked with delivering science-quality cloud computing resources. However, another industry development is Amazon's high-performance computing (HPC) instances on Elastic Cloud Compute (EC2) that promises improved performance for cluster computation. This paper presents results from a series of benchmarks run on Amazon EC2 and discusses the efficacy of current commercial cloud technology for running scientific applications across a cluster. In particular, a 240-core cluster of cloud instances achieved 2 TFLOPS on High-Performance Linpack (HPL) at 70% of theoretical computational performance. The cluster's local network also demonstrated sub-100 μs inter-process latency with sustained inter-node throughput in excess of 8 Gbps. Beyond HPL, a real-world Hadoop image processing task from NASA's Lunar Mapping and Modeling Project (LMMP) was run on a 29 instance cluster to process lunar and Martian surface images with sizes on the order of tens of gigapixels. These results demonstrate that while not a rival of dedicated supercomputing clusters, commercial cloud technology is now a feasible option for moderately demanding scientific workloads.
Third-rank chromatic aberrations of electron lenses.
Liu, Zhixiong
2018-02-01
In this paper the third-rank chromatic aberration coefficients of round electron lenses are analytically derived and numerically calculated by Mathematica. Furthermore, the numerical results are cross-checked by the differential algebraic (DA) method, which verifies that all the formulas for the third-rank chromatic aberration coefficients are completely correct. It is hoped that this work would be helpful for further chromatic aberration correction in electron microscopy.
Cuba: Multidimensional numerical integration library
NASA Astrophysics Data System (ADS)
Hahn, Thomas
2016-08-01
The Cuba library offers four independent routines for multidimensional numerical integration: Vegas, Suave, Divonne, and Cuhre. The four algorithms work by very different methods; all can integrate vector integrands and have very similar Fortran, C/C++, and Mathematica interfaces. Their invocation is very similar, making it easy to cross-check results by substituting one method for another. For further safeguarding, the output is supplemented by a chi-square probability which quantifies the reliability of the error estimate.
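The substitution cross-check is easy to illustrate; in the sketch below, SciPy cubature and a plain Monte Carlo estimate stand in for the Cuba routines (this is not the Cuba interface itself):

```python
# Sketch of the cross-check idea: integrate the same integrand with two
# very different methods and compare the estimates. SciPy cubature and
# a crude Monte Carlo stand in for Cuba's Vegas/Suave/Divonne/Cuhre.
import numpy as np
from scipy import integrate

def f(x, y):
    return np.sin(x) * np.exp(-y)

# Deterministic adaptive cubature over the unit square.
# Note that dblquad integrates func(y, x), hence the argument swap.
det, det_err = integrate.dblquad(lambda y, x: f(x, y), 0.0, 1.0,
                                 lambda x: 0.0, lambda x: 1.0)

# Crude Monte Carlo estimate over the same region.
rng = np.random.default_rng(0)
pts = rng.random((100_000, 2))
vals = f(pts[:, 0], pts[:, 1])
mc, mc_err = vals.mean(), vals.std(ddof=1) / np.sqrt(vals.size)

print(f"cubature:    {det:.6f} +/- {det_err:.1e}")
print(f"monte carlo: {mc:.6f} +/- {mc_err:.1e}")  # should agree within errors
```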
A PARAMETRIC STUDY OF BCS RF SURFACE IMPEDANCE WITH MAGNETIC FIELD USING THE XIAO CODE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reece, Charles E.; Xiao, Binping
2013-09-01
A recent analysis of field-dependent BCS rf surface impedance based on moving Cooper pairs has been presented [1]. Using this analysis coded in Mathematica™, survey calculations have been completed which examine the sensitivities of this surface impedance to variation of the BCS material parameters and temperature. The results present a refined description of the "best theoretical" performance available to potential applications with corresponding materials.
Controlling Laboratory Processes From A Personal Computer
NASA Technical Reports Server (NTRS)
Will, H.; Mackin, M. A.
1991-01-01
Computer program provides natural-language process control from an IBM PC or compatible computer. Sets up a process-control system that either runs without an operator or is run by workers who have limited programming skills. Includes three smaller programs. Two of them, written in FORTRAN 77, record data and control research processes. Third program, written in Pascal, generates FORTRAN subroutines used by the other two programs to identify user commands with device-driving routines written by the user. Also includes a set of input data allowing the user to define the user commands to be executed by the computer. Requires a personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. Also requires a FORTRAN 77 compiler and device drivers written by the user.
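The binding of user-defined command phrases to user-written device routines can be sketched in a few lines; here is the dispatch idea in Python (command names and handlers are hypothetical, not the NASA package's syntax):

```python
# Minimal sketch of natural-language-style command dispatch: a table
# maps user-defined command phrases to user-written device routines,
# the role played by the generated FORTRAN subroutines in the package.
# All names below are hypothetical.

def open_valve(args):
    print("valve opened:", args)

def set_heater(args):
    print("heater set to:", args)

COMMANDS = {
    "OPEN VALVE": open_valve,
    "SET HEATER": set_heater,
}

def execute(line: str):
    """Match the longest registered command prefix; pass the rest as args."""
    text = line.upper().strip()
    for name in sorted(COMMANDS, key=len, reverse=True):
        if text.startswith(name):
            COMMANDS[name](text[len(name):].strip())
            return
    print("unknown command:", line)

execute("set heater 350")   # -> heater set to: 350
```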
WinHPC System Programming | High-Performance Computing | NREL
Learn how to build and run an MPI (Message Passing Interface) program on the WinHPC system, including where the MPI header (mpi.h) and library (msmpi.lib) are located. To build from the command line, run... Start > Intel Software Development Tools > Intel C++ Compiler Professional... > C++ Build Environment for applications running...
Computer-based testing of the modified essay question: the Singapore experience.
Lim, Erle Chuen-Hian; Seet, Raymond Chee-Seong; Oh, Vernon M S; Chia, Boon-Lock; Aw, Marion; Quak, Seng-Hock; Ong, Benjamin K C
2007-11-01
The modified essay question (MEQ), featuring an evolving case scenario, tests a candidate's problem-solving and reasoning ability, rather than mere factual recall. Although it is traditionally conducted as a pen-and-paper examination, our university has run the MEQ using computer-based testing (CBT) since 2003. We describe our experience with running the MEQ examination using the IVLE, or integrated virtual learning environment (https://ivle.nus.edu.sg), provide a blueprint for universities intending to conduct computer-based testing of the MEQ, and detail how our MEQ examination has evolved since its inception. An MEQ committee, comprising specialists in key disciplines from the departments of Medicine and Paediatrics, was formed. We utilized the IVLE, developed for our university in 1998, as the online platform on which we ran the MEQ. We calculated the number of man-hours (academic and support staff) required to run the MEQ examination, using either a computer-based or pen-and-paper format. With the support of our university's information technology (IT) specialists, we have successfully run the MEQ examination online, twice a year, since 2003. Initially, we conducted the examination with short-answer questions only, but have since expanded the MEQ examination to include multiple-choice and extended matching questions. A total of 1268 man-hours was spent in preparing for, and running, the MEQ examination using CBT, compared to 236.5 man-hours to run it using a pen-and-paper format. Despite being more labour-intensive, our students and staff prefer CBT to the pen-and-paper format. The MEQ can be conducted using a computer-based testing scenario, which offers several advantages over a pen-and-paper format. We hope to increase the number of questions and incorporate audio and video files, featuring clinical vignettes, to the MEQ examination in the near future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kopp, H.J.; Mortensen, G.A.
1978-04-01
Approximately 60% of the full CDC 6600/7600 Datatran 2.0 capability was made operational on IBM 360/370 equipment. Sufficient capability was made operational to demonstrate adequate performance for modular program linking applications. Also demonstrated were the basic capabilities and performance required to support moderate-sized data base applications and moderately active scratch input/output applications. Approximately one to two calendar years are required to develop DATATRAN 2.0 capabilities fully for the entire spectrum of applications proposed. Included in the next stage of conversion should be syntax checking and syntax conversion features that would foster greater FORTRAN compatibility between IBM- and CDC-developed modules. The batch portion of the JOSHUA Modular System, which was developed by Savannah River Laboratory to run on an IBM computer, was examined for the feasibility of conversion to run on a Control Data Corporation (CDC) computer. Portions of the JOSHUA Precompiler were changed so as to be operable on the CDC computer. The Data Manager and Batch Monitor were also examined for conversion feasibility, but no changes were made in them. It appears to be feasible to convert the batch portion of the JOSHUA Modular System to run on a CDC computer with an estimated additional two to three man-years of effort. 9 tables.
Identifying the impact of G-quadruplexes on Affymetrix 3' arrays using cloud computing.
Memon, Farhat N; Owen, Anne M; Sanchez-Graillet, Olivia; Upton, Graham J G; Harrison, Andrew P
2010-01-15
A tetramer quadruplex structure is formed by four parallel strands of DNA/RNA containing runs of guanine. These quadruplexes are able to form because guanine can Hoogsteen hydrogen bond to other guanines, and a tetrad of guanines can form a stable arrangement. Recently we have discovered that probes on Affymetrix GeneChips that contain runs of guanine do not measure gene expression reliably. We associate this finding with the likelihood that quadruplexes are forming on the surface of GeneChips. In order to cope with the rapidly expanding size of GeneChip array datasets in the public domain, we are exploring the use of cloud computing to replicate our experiments on 3' arrays to look at the effect of the location of G-spots (runs of guanines). Cloud computing is a recently introduced high-performance solution that takes advantage of the computational infrastructure of large organisations such as Amazon and Google. We expect that cloud computing will become widely adopted because it enables bioinformaticians to avoid capital expenditure on expensive computing resources and to pay a cloud computing provider only for what is used. Moreover, as well as financial efficiency, cloud computing is an ecologically-friendly technology, it enables efficient data-sharing and we expect it to be faster for development purposes. Here we propose the advantageous use of cloud computing to perform a large data-mining analysis of public domain 3' arrays.
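Flagging probes that contain runs of guanine (the "G-spots" above) is a simple pattern scan; a sketch in Python, where the run-length threshold of four consecutive G's is illustrative:

```python
# Sketch: flag probe sequences containing a run of guanines, the kind
# of probe the study associates with quadruplex formation on GeneChips.
# The threshold of 4 consecutive G's is illustrative, not the study's.
import re

G_RUN = re.compile(r"G{4,}")

def find_g_spot(probe_seq: str):
    """Return the (start, end) span of the first G-run, or None."""
    m = G_RUN.search(probe_seq.upper())
    return m.span() if m else None

probes = ["ACGTGGGGTACGT", "ACGTACGTACGTA"]  # toy sequences, not real probes
for p in probes:
    print(p, find_g_spot(p))
```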
Katz, Jonathan E
2017-01-01
Laboratories tend to be amenable environments for long-term reliable operation of scientific measurement equipment. Indeed, it is not uncommon to find equipment 5, 10, or even 20+ years old still being routinely used in labs. Unfortunately, the Achilles heel for many of these devices is the control/data acquisition computer. Often these computers run older operating systems (e.g., Windows XP) and, while they might only use standard network, USB or serial ports, they require proprietary software to be installed. Even if the original installation disks can be found, reinstallation is a burdensome process fraught with "gotchas" that can derail it: lost license keys, incompatible hardware, forgotten configuration settings, etc. If you have legacy instrumentation running, the computer is the ticking time bomb waiting to put a halt to your operation. In this chapter, I describe how to virtualize your currently running control computer. This virtualized computer "image" is easy to maintain, easy to back up and easy to redeploy. I have used this multiple times in my own lab to greatly improve the robustness of my legacy devices. After completing the steps in this chapter, you will have your original control computer as well as a virtual instance of that computer with all the software installed, ready to control your hardware should your original computer ever be decommissioned.
Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters
Torres-Huitzil, Cesar
2013-01-01
Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k × k kernel requires k^2 − 1 comparisons per sample for a direct implementation; thus, performance scales expensively with the kernel size k. Faster computations can be achieved by kernel decomposition and using constant-time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture design uses less computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters, on 1024 × 1024 images with up to 255 × 255 kernels, in around 8.4 milliseconds, 120 frames per second, at a clock frequency of 250 MHz. The implementation is highly scalable for the kernel size with good performance/area tradeoff suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding. PMID:24288456
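The constant-time one-dimensional step underpinning the architecture is the van Herk/Gil-Werman recurrence, which needs roughly three comparisons per sample regardless of kernel size; a minimal software sketch in Python (the paper maps this scheme onto FPGA hardware, which is not reproduced here):

```python
# Minimal sketch of the van Herk/Gil-Werman (HGW) 1-D running max:
# ~3 comparisons per sample independent of the window size k.
# Combine a per-block suffix max and a per-block prefix max.

def hgw_running_max(x, k):
    n = len(x)
    suf = list(x)  # suffix max within each k-sized block (right-to-left)
    pre = list(x)  # prefix max within each k-sized block (left-to-right)
    for i in range(n - 2, -1, -1):
        if (i + 1) % k != 0:            # stop at block boundaries
            suf[i] = max(suf[i], suf[i + 1])
    for i in range(1, n):
        if i % k != 0:
            pre[i] = max(pre[i], pre[i - 1])
    # the window starting at i spans samples i .. i+k-1
    return [max(suf[i], pre[i + k - 1]) for i in range(n - k + 1)]

print(hgw_running_max([3, 1, 4, 1, 5, 9, 2, 6], 3))  # [4, 4, 5, 9, 9, 9]
```

A 2-D k × k filter then follows by kernel decomposition: run the 1-D filter over the rows and then over the columns of the result.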
Energy Frontier Research With ATLAS: Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, John; Black, Kevin; Ahlen, Steve
2016-06-14
The Boston University (BU) group is playing key roles across the ATLAS experiment: in detector operations, the online trigger, the upgrade, computing, and physics analysis. Our team has been critical to the maintenance and operations of the muon system since its installation. During Run 1 we led the muon trigger group and that responsibility continues into Run 2. BU maintains and operates the ATLAS Northeast Tier 2 computing center. We are actively engaged in the analysis of ATLAS data from Run 1 and Run 2. Physics analyses we have contributed to include Standard Model measurements (W and Z cross sections, tt̄ differential cross sections, WWW* production), evidence for the Higgs boson decaying to τ⁺τ⁻, and searches for new phenomena (technicolor, Z' and W', vector-like quarks, dark matter).
Automatic Data Filter Customization Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Mandrake, Lukas
2013-01-01
This work predicts whether a retrieval algorithm will usefully determine CO2 concentration from an input spectrum of GOSAT (Greenhouse Gases Observing Satellite). This was done to eliminate needless runtime on atmospheric soundings that would never yield useful results. A space of 50 dimensions was examined for predictive power on the final CO2 results. Retrieval algorithms are frequently expensive to run, and wasted effort defeats requirements and expends needless resources. This algorithm could be used to help predict and filter unneeded runs in any computationally expensive regime. Traditional methods such as Fisher discriminant analysis and decision trees can attempt to predict whether a sounding will be properly processed. However, this work sought to detect a subsection of the dimensional space that can be simply filtered out to eliminate unwanted runs. LDAs (linear discriminant analyses) and other systems examine the entire data and judge a "best fit," giving equal weight to complex and problematic regions as well as simple, clear-cut regions. In this implementation, a genetic space of "left" and "right" thresholds outside of which all data are rejected was defined. These left/right pairs are created for each of the 50 input dimensions. A genetic algorithm then runs through countless potential filter settings using a JPL computer cluster, optimizing the tossed-out data's yield (proper vs. improper run removal) and the number of points tossed. This solution is robust to an arbitrary decision boundary within the data and avoids the global optimization problem of whole-dataset fitting using LDA or decision trees. It filters out runs that would not have produced useful CO2 values to save needless computation. This would be an algorithmic preprocessing improvement to any computationally expensive system.
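A toy sketch of the threshold-pair genome idea follows (random data and the fitness function are illustrative; this is not the JPL cluster code):

```python
# Toy sketch of the filter idea: each genome is a (left, right) threshold
# pair per dimension; points falling outside any [left, right] interval
# are rejected before the expensive retrieval run. The data, dimensions,
# and fitness below are illustrative, not the JPL implementation.
import numpy as np

rng = np.random.default_rng(1)
DIMS = 5                                  # 50 in the real system
X = rng.normal(size=(1000, DIMS))         # toy sounding features
good = rng.random(1000) < 0.7             # which runs would succeed

def fitness(genome):
    lo, hi = genome[:, 0], genome[:, 1]
    keep = ((X >= lo) & (X <= hi)).all(axis=1)
    # reward keeping good runs while rejecting bad ones
    return (keep & good).sum() - (keep & ~good).sum()

pop = rng.normal(size=(40, DIMS, 2))
pop.sort(axis=2)                          # enforce left <= right
for gen in range(50):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-20:]]               # selection
    children = parents + rng.normal(scale=0.1, size=parents.shape)
    children.sort(axis=2)                                 # mutation
    pop = np.concatenate([parents, children])
print("best fitness:", max(fitness(g) for g in pop))
```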
Open-source meteor detection software for low-cost single-board computers
NASA Astrophysics Data System (ADS)
Vida, D.; Zubović, D.; Šegon, D.; Gural, P.; Cupec, R.
2016-01-01
This work aims to overcome the current price threshold of meteor stations which can sometimes deter meteor enthusiasts from owning one. In recent years small card-sized computers became widely available and are used for numerous applications. To utilize such computers for meteor work, software which can run on them is needed. In this paper we present a detailed description of newly-developed open-source software for fireball and meteor detection optimized for running on low-cost single board computers. Furthermore, an update on the development of automated open-source software which will handle video capture, fireball and meteor detection, astrometry and photometry is given.
How to Build an AppleSeed: A Parallel Macintosh Cluster for Numerically Intensive Computing
NASA Astrophysics Data System (ADS)
Decyk, V. K.; Dauger, D. E.
We have constructed a parallel cluster consisting of a mixture of Apple Macintosh G3 and G4 computers running the Mac OS, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. A subset of the MPI message-passing library was implemented in Fortran77 and C. This library enabled us to port code, without modification, from other parallel processors to the Macintosh cluster. Unlike Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkata, Manjunath Gorentla; Aderholdt, William F
The pre-exascale systems are expected to have a significant amount of hierarchical and heterogeneous on-node memory, and this trend of system architecture in extreme-scale systems is expected to continue into the exascale era. Along with hierarchical-heterogeneous memory, the system typically has a high-performing network and a compute accelerator. This system architecture is not only effective for running traditional High Performance Computing (HPC) applications (Big-Compute), but also for running data-intensive HPC applications and Big-Data applications. As a consequence, there is a growing desire to have a single system serve the needs of both Big-Compute and Big-Data applications. Though the system architecture supports the convergence of Big-Compute and Big-Data, the programming models and software layer have yet to evolve to support either hierarchical-heterogeneous memory systems or the convergence. This work presents a programming abstraction to address this problem. The programming abstraction is implemented as a software library and runs on pre-exascale and exascale systems supporting current and emerging system architectures. Using distributed data structures as a central concept, it provides (1) a simple, usable, and portable abstraction for hierarchical-heterogeneous memory and (2) a unified programming abstraction for Big-Compute and Big-Data applications.
User's guide to the Octopus computer network (the SHOC manual)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, C.; Thompson, D.; Whitten, G.
1977-07-18
This guide explains how to enter, run, and debug programs on the Octopus network. It briefly describes the network's operation, and directs the reader to other documents for further information. It stresses those service programs that will be most useful in the long run; ''quick'' methods that have little flexibility are not discussed. The Octopus timesharing network gives the user access to four CDC 7600 computers, two CDC STAR computers, and a broad array of peripheral equipment, from any of 800 or so remote terminals. 16 figures, 7 tables.
User's guide to the Octopus computer network (the SHOC manual)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, C.; Thompson, D.; Whitten, G.
1976-10-07
This guide explains how to enter, run, and debug programs on the Octopus network. It briefly describes the network's operation, and directs the reader to other documents for further information. It stresses those service programs that will be most useful in the long run; ''quick'' methods that have little flexibility are not discussed. The Octopus timesharing network gives the user access to four CDC 7600 computers, two CDC STAR computers, and a broad array of peripheral equipment, from any of 800 or so remote terminals. 8 figures, 4 tables.
User's guide to the Octopus computer network (the SHOC manual)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, C.; Thompson, D.; Whitten, G.
1975-06-02
This guide explains how to enter, run, and debug programs on the Octopus network. It briefly describes the network's operation, and directs the reader to other documents for further information. It stresses those service programs that will be most useful in the long run; ''quick'' methods that have little flexibility are not discussed. The Octopus timesharing network gives the user access to four CDC 7600 computers and a broad array of peripheral equipment, from any of 800 remote terminals. Octopus will soon include the Laboratory's STAR-100 computers. 9 figures, 5 tables. (auth)
Massively parallel quantum computer simulator
NASA Astrophysics Data System (ADS)
De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.
2007-01-01
We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700 and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as a benchmark for testing high-end parallel computers.
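The core of such a simulator is repeatedly applying small unitaries to a state vector of 2^n complex amplitudes, which is what dominates memory and communication at 36 qubits; a single-process sketch in Python/NumPy (the MPI distribution of the state vector is not shown):

```python
# Minimal single-node sketch of universal quantum-computer simulation:
# the state of n qubits is a vector of 2**n complex amplitudes, and a
# one-qubit gate mixes pairs of amplitudes that differ only in the
# target bit. Distribution of the state over MPI ranks is not shown.
import numpy as np

def apply_1q_gate(state, gate, target, n):
    """Apply a 2x2 unitary to qubit `target` of an n-qubit state."""
    psi = state.reshape([2] * n)            # one axis per qubit
    psi = np.moveaxis(psi, target, 0)       # bring target axis to front
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, target).reshape(-1)

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                              # |000>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for q in range(n):                          # uniform superposition
    state = apply_1q_gate(state, H, q, n)
print(np.round(state, 3))                   # eight amplitudes of 1/sqrt(8)
```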
JAX Colony Management System (JCMS): an extensible colony and phenotype data management system.
Donnelly, Chuck J; McFarland, Mike; Ames, Abigail; Sundberg, Beth; Springer, Dave; Blauth, Peter; Bult, Carol J
2010-04-01
The Jackson Laboratory Colony Management System (JCMS) is a software application for managing data and information related to research mouse colonies, associated biospecimens, and experimental protocols. JCMS runs directly on computers that run one of the PC Windows operating systems, but can be accessed via web browser interfaces from any computer running a Windows, Macintosh, or Linux operating system. JCMS can be configured for a single user or multiple users in small- to medium-size work groups. The target audience for JCMS includes laboratory technicians, animal colony managers, and principal investigators. The application provides operational support for colony management and experimental workflows, sample and data tracking through transaction-based data entry forms, and date-driven work reports. Flexible query forms allow researchers to retrieve database records based on user-defined criteria. Recent advances in handheld computers with integrated barcode readers, middleware technologies, web browsers, and wireless networks add to the utility of JCMS by allowing real-time access to the database from any networked computer.
The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code.
Kunkel, Susanne; Schenck, Wolfram
2017-01-01
NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
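A minimal sketch of the dry-run idea in generic Python (not NEST's implementation): a single process is told a virtual process count and replaces communication with no-ops, so the memory and runtime of the rank-local work can be measured on a laptop.

```python
# Generic sketch of a dry-run mode: one process pretends to be rank 0 of
# `virtual_procs` ranks. It builds and simulates only its own share of
# the network and skips communication, so per-rank memory use and
# runtime can be profiled without a supercomputer allocation.
# Illustrative only; not NEST's actual implementation.

class DryRunComm:
    def __init__(self, virtual_procs, rank=0):
        self.size, self.rank = virtual_procs, rank

    def exchange(self, spikes):
        return []                # the communication step becomes a no-op

def simulate(n_neurons, comm, steps=100):
    # round-robin distribution: this rank owns every size-th neuron
    local = [i for i in range(n_neurons) if i % comm.size == comm.rank]
    for _ in range(steps):
        spikes = [i for i in local if hash(i) % 50 == 0]  # toy dynamics
        comm.exchange(spikes)    # would be an MPI all-gather in a real run
    return len(local)

print("local neurons:", simulate(10_000, DryRunComm(virtual_procs=1024)))
```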
ATLAS@Home: Harnessing Volunteer Computing for HEP
NASA Astrophysics Data System (ADS)
Adam-Bourdarios, C.; Cameron, D.; Filipčič, A.; Lancon, E.; Wu, W.; ATLAS Collaboration
2015-12-01
A recent common theme among HEP computing is exploitation of opportunistic resources in order to provide the maximum statistics possible for Monte Carlo simulation. Volunteer computing has been used over the last few years in many other scientific fields and by CERN itself to run simulations of the LHC beams. The ATLAS@Home project was started to allow volunteers to run simulations of collisions in the ATLAS detector. So far many thousands of members of the public have signed up to contribute their spare CPU cycles for ATLAS, and there is potential for volunteer computing to provide a significant fraction of ATLAS computing resources. Here we describe the design of the project, the lessons learned so far and the future plans.
New developments in FeynCalc 9.0
NASA Astrophysics Data System (ADS)
Shtabovenko, Vladyslav; Mertig, Rolf; Orellana, Frederik
2016-10-01
In this note we report on the new version of FEYNCALC, a MATHEMATICA package for symbolic semi-automatic evaluation of Feynman diagrams and algebraic expressions in quantum field theory. The main features of version 9.0 are: improved tensor reduction and partial fractioning of loop integrals, new functions for using FEYNCALC together with tools for reduction of scalar loop integrals using integration-by-parts (IBP) identities, better interface to FEYNARTS and support for SU(N) generators with explicit fundamental indices.
Processing digital images and calculation of beam emittance (pepper-pot method for the Krion source)
NASA Astrophysics Data System (ADS)
Alexandrov, V. S.; Donets, E. E.; Nyukhalova, E. V.; Kaminsky, A. K.; Sedykh, S. N.; Tuzikov, A. V.; Philippov, A. V.
2016-12-01
Programs based on Wolfram Mathematica and Origin software for the pre-processing of photographs of beam images on the mask are described. The angles of rotation around the axis and in the vertical plane are taken into account in the generation of the file with image coordinates. Results of the emittance calculation by the Pep_emit program, written in Visual Basic, using the generated file in the test mode are presented.
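The geometric correction mentioned amounts to applying rotation and tilt corrections to the extracted spot coordinates before the emittance calculation; a sketch in Python (the angles and the cosine tilt correction are illustrative assumptions; the original pre-processing uses Wolfram Mathematica and Origin):

```python
# Sketch of the coordinate correction described above: rotate extracted
# pepper-pot spot coordinates by an in-plane angle about the beam axis,
# then undo vertical foreshortening for a tilt in the vertical plane,
# before writing the coordinate file used by the emittance code.
# Angles and the cosine correction are hypothetical.
import numpy as np

def correct_coords(xy, roll_deg, tilt_deg):
    """xy: (N, 2) array of spot coordinates from the mask image."""
    a = np.radians(roll_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])    # rotation about the axis
    out = xy @ R.T
    out[:, 1] /= np.cos(np.radians(tilt_deg))  # undo vertical foreshortening
    return out

spots = np.array([[1.0, 0.0], [0.0, 2.0]])     # toy spot positions (mm)
print(correct_coords(spots, roll_deg=3.0, tilt_deg=5.0))
```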
Self-stress control of real civil engineering tensegrity structures
NASA Astrophysics Data System (ADS)
Kłosowska, Joanna; Obara, Paulina; Gilewski, Wojciech
2018-01-01
The paper examines the impact of the self-stress level on the behaviour of tensegrity truss structures. Displacements of real civil engineering tensegrity structures are analysed. The full-scale tensegrity tower Warnow Tower, which consists of six Simplex trusses, is considered in this paper. Three models consisting of one, two and six modules are analysed. The analysis is performed by second- and third-order theory. Mathematica software and the Sofistik program are applied in the analysis.
Algorithms to evaluate multiple sums for loop computations
NASA Astrophysics Data System (ADS)
Anzai, C.; Sumino, Y.
2013-03-01
We present algorithms to evaluate two types of multiple sums, which appear in higher-order loop computations. We consider expansions of generalized hypergeometric-type sums,
$$\sum_{n_1,\dots,n_N} \frac{\Gamma(\mathbf{a}_1\cdot\mathbf{n}+c_1)\,\Gamma(\mathbf{a}_2\cdot\mathbf{n}+c_2)\cdots\Gamma(\mathbf{a}_P\cdot\mathbf{n}+c_P)}{\Gamma(\mathbf{b}_1\cdot\mathbf{n}+d_1)\,\Gamma(\mathbf{b}_2\cdot\mathbf{n}+d_2)\cdots\Gamma(\mathbf{b}_Q\cdot\mathbf{n}+d_Q)}\; x_1^{n_1}\cdots x_N^{n_N}$$
with $\mathbf{a}_i\cdot\mathbf{n} = \sum_{j=1}^N a_{ij}n_j$, etc., in a small parameter $\epsilon$ around rational values of the $c_i$, $d_i$'s. A Type I sum corresponds to the case where, in the limit $\epsilon \to 0$, the summand reduces to a rational function of the $n_j$'s times $x_1^{n_1}\cdots x_N^{n_N}$; the $c_i$, $d_i$'s can depend on an external integer index. A Type II sum is a double sum ($N = 2$), where the $c_i$, $d_i$'s are half-integers or integers as $\epsilon \to 0$ and $x_i = 1$; we consider some specific cases where at most six $\Gamma$ functions remain in the limit $\epsilon \to 0$. The algorithms enable evaluation of arbitrary expansion coefficients in $\epsilon$ in terms of Z-sums and multiple polylogarithms (generalized multiple zeta values). We also present applications of these algorithms. In particular, Type I sums can be used to generate a new class of relations among generalized multiple zeta values. We provide a Mathematica package, in which these algorithms are implemented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
March-Leuba, S.; Jansen, J.F.; Kress, R.L.
1992-08-01
A new program package, Symbolic Manipulator Laboratory (SML), for the automatic generation of both kinematic and static manipulator models in symbolic form is presented. Critical design parameters may be identified and optimized using symbolic models as shown in the sample application presented for the Future Armor Rearm System (FARS) arm. The computer-aided development of the symbolic models yields equations with reduced numerical complexity. Important considerations have been placed on the simplification of the closed form solutions and on user friendly operation. The main emphasis of this research is the development of a methodology which is implemented in a computer program capable of generating symbolic kinematic and static force models of manipulators. The fact that the models are obtained trigonometrically reduced is among the most significant results of this work and the most difficult to implement. Mathematica, a commercial program that allows symbolic manipulation, is used to implement the program package. SML is written such that the user can change any of the subroutines or create new ones easily. To assist the user, an on-line help has been written to make SML a user friendly package. Some sample applications are presented. The design and optimization of the 5-degrees-of-freedom (DOF) FARS manipulator using SML is discussed. Finally, the kinematic and static models of two different 7-DOF manipulators are calculated symbolically.
Understanding the Performance and Potential of Cloud Computing for Scientific Applications
Sadooghi, Iman; Martin, Jesus Hernandez; Li, Tonglin; ...
2015-02-19
Commercial clouds bring a great opportunity to the scientific computing area. Scientific applications usually require significant resources, however not all scientists have access to sufficient high-end computing systems, many of which can be found in the Top500 list. Cloud Computing has gained the attention of scientists as a competitive resource to run HPC applications at a potentially lower cost. But as a different infrastructure, it is unclear whether clouds are capable of running scientific applications with a reasonable performance per money spent. This work studies the performance of public clouds and places this performance in context to price. We evaluate the raw performance of different services of the AWS cloud in terms of the basic resources, such as compute, memory, network and I/O. We also evaluate the performance of scientific applications running in the cloud. This paper aims to assess the ability of the cloud to perform well, as well as to evaluate the cost of the cloud running scientific applications. We developed a full set of metrics and conducted a comprehensive performance evaluation over the Amazon cloud. We evaluated EC2, S3, EBS and DynamoDB among the many Amazon AWS services. We evaluated the memory sub-system performance with CacheBench, the network performance with iperf, processor and network performance with the HPL benchmark application, and shared storage with NFS and PVFS in addition to S3. We also evaluated a real scientific computing application through the Swift parallel scripting system at scale. Armed with both detailed benchmarks to gauge expected performance and a detailed monetary cost analysis, we expect this paper will be a recipe cookbook for scientists to help them decide where to deploy and run their scientific applications between public clouds, private clouds, or hybrid clouds.
NASA Technical Reports Server (NTRS)
Sen, Syamal K.; Shaykhian, Gholam Ali
2011-01-01
MatLab(TradeMark) (MATrix LABoratory) is a numerical computation and simulation tool used by thousands of scientists and engineers in many countries. MatLab performs purely numerical calculations and can be used as a glorified calculator or as an interpreted programming language; its real strength is in matrix manipulations. Computer algebra functionality is available within the MatLab environment through the symbolic toolbox; this feature is similar to the computer algebra capabilities provided by Maple or Mathematica for calculating with mathematical equations using symbolic operations. In its interpreted programming language form (command interface), MatLab is similar to well-known programming languages such as C/C++; it supports data structures and cell arrays and allows classes to be defined for object-oriented programming. As such, MatLab is equipped with most of the essential constructs of a higher-level programming language. MatLab is packaged with an editor and debugging functionality useful for analyzing large MatLab programs and finding errors. We believe there are many ways to approach real-world problems; prescribed methods that ensure sound solutions are incorporated in the design and analysis of data processing and visualization can help engineers and scientists gain wider insight into the actual implementation of their experiments. This presentation focuses on the data processing and visualization aspects of engineering and scientific applications. Specifically, it discusses methods and techniques for intermediate-level data processing covering engineering and scientific problems. MatLab programming techniques will be discussed, including reading various data file formats to produce customized publication-quality graphics, importing engineering and/or scientific data, organizing data in tabular format, exporting data for use by other software programs such as Microsoft Excel, and data presentation and visualization.
RSTensorFlow: GPU Enabled TensorFlow for Deep Learning on Commodity Android Devices
Alzantot, Moustafa; Wang, Yingnan; Ren, Zhengshuang; Srivastava, Mani B.
2018-01-01
Mobile devices have become an essential part of our daily lives. By virtue of both their increasing computing power and the recent progress made in AI, mobile devices have evolved to act as intelligent assistants in many tasks rather than a mere way of making phone calls. However, popular and commonly used tools and frameworks for machine intelligence still lack the ability to make proper use of the available heterogeneous computing resources on mobile devices. In this paper, we study the benefits of utilizing the heterogeneous (CPU and GPU) computing resources available on commodity Android devices while running deep learning models. We leveraged the heterogeneous computing framework RenderScript to accelerate the execution of deep learning models on commodity Android devices. Our system is implemented as an extension to the popular open-source framework TensorFlow. By integrating our acceleration framework tightly into TensorFlow, machine learning engineers can now easily benefit from the heterogeneous computing resources on mobile devices without the need for any extra tools. We evaluate our system on different Android phone models to study the trade-offs of running different neural network operations on the GPU. We also compare the performance of running different model architectures, such as convolutional and recurrent neural networks, on the CPU only versus using heterogeneous computing resources. Our results show that GPUs on the phones are capable of offering a substantial performance gain in matrix multiplication. Therefore, models that involve multiplication of large matrices can run much faster (approx. 3 times faster in our experiments) due to GPU support. PMID:29629431
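A minimal sketch of the benchmarking idea, assuming only that dense matrix products of increasing size are timed; the paper's actual measurements used RenderScript kernels on Android GPUs, which this desktop NumPy stand-in does not reproduce.

    # Minimal sketch of the matmul benchmark idea: time dense matrix products
    # of increasing size. (The paper measured RenderScript on Android GPUs;
    # NumPy on a desktop CPU is used here only to illustrate the method.)
    import time
    import numpy as np

    for n in (256, 512, 1024, 2048):
        a = np.random.rand(n, n).astype(np.float32)
        b = np.random.rand(n, n).astype(np.float32)
        t0 = time.perf_counter()
        c = a @ b                        # the operation under test
        dt = time.perf_counter() - t0
        print(f"n={n:5d}  {dt*1e3:8.1f} ms  {2*n**3/dt/1e9:6.1f} GFLOP/s")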
Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units
NASA Astrophysics Data System (ADS)
Kemal, Jonathan Yashar
For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.
Computing shifts to monitor ATLAS distributed computing infrastructure and operations
NASA Astrophysics Data System (ADS)
Adam, C.; Barberis, D.; Crépé-Renaudin, S.; De, K.; Fassi, F.; Stradling, A.; Svatos, M.; Vartapetian, A.; Wolters, H.
2017-10-01
The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run 2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts' workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run 1, this task was accomplished by a person from the expert team called the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run 2. The CRC position was proposed to cover some of the AMOD's former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help with the training of future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing ADC in relevant meetings. The CRC also facilitates communication between the ADC expert team and the other ADC shifters. These include the Distributed Analysis Support Team (DAST), which is the first point of contact for addressing all distributed analysis questions, and the ATLAS Distributed Computing Shifters (ADCoS), who check and report problems in central services, sites, Tier-0 export, data transfers and production tasks. Finally, the CRC looks at the level of ADC activities on a weekly or monthly timescale to ensure that ADC resources are used efficiently.
NASA Technical Reports Server (NTRS)
Chawner, David M.; Gomez, Ray J.
2010-01-01
In the Applied Aerosciences and CFD branch at Johnson Space Center, computational simulations are run that face many challenges, two of which are the ability to customize software for specialized needs and the need to run simulations as fast as possible. Many different tools are used for running these simulations, and each one has its own pros and cons. Once these simulations are run, software is needed that is capable of visualizing the results in an appealing manner. Some of this software is open source, meaning that anyone can edit the source code, make modifications, and distribute it to all other users in a future release. This is very useful, especially in this branch where many different tools are being used. File readers can be written to load any file format into a program, to ease the bridging from one tool to another. Programming such a reader requires knowledge of the file format being read as well as the equations necessary to obtain the derived values after loading. When these CFD simulations are run, extremely large files are loaded and values are calculated. These simulations usually take a few hours to complete, even on the fastest machines. Graphics processing units (GPUs) are usually used to render graphics on computers; however, in recent years GPUs have been used for more generic applications because of their speed. Applications run on GPUs have been known to run up to forty times faster than they would on normal central processing units (CPUs). If these CFD programs were extended to run on GPUs, the amount of time they require to complete would be much less. This would allow more simulations to be run in the same amount of time and possibly allow more complex computations to be performed.
NASA Astrophysics Data System (ADS)
Pasik, Tomasz; van der Meij, Raymond
2017-12-01
This article presents an efficient search method for representative circular and unconstrained slip surfaces using a tailored genetic algorithm. Searches for unconstrained slip planes with rigid equilibrium methods are as yet uncommon in engineering practice, and few publications regarding truly free slip planes exist. The proposed method is an effective procedure resulting from the right combination of initial population type, selection, crossover and mutation methods; it needs little computational effort to find the optimum, unconstrained slip plane. The methodology described in this paper is implemented using Mathematica. The implementation, along with further explanations, is fully presented so the results can be reproduced. Sample slope stability calculations are performed for four cases, along with a detailed interpretation of the results. Two cases are compared with analyses described in earlier publications; the remaining two are practical slope stability analyses of dikes in the Netherlands. These four cases show the benefits of analyzing slope stability with a rigid equilibrium method combined with a genetic algorithm. The paper concludes by describing the possibilities and limitations of using a genetic algorithm in the context of the slope stability problem.
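A schematic sketch of the kind of genetic-algorithm loop described, with an initial population, truncation selection, one-point crossover, and per-gene mutation. The fitness function is a placeholder, standing in for the factor of safety that a rigid equilibrium method would return for the slip surface encoded by the genes; the paper's Mathematica implementation is not reproduced here.

    # Schematic GA loop for slip-surface search (illustrative only).
    import random

    def fitness(genes):
        # Placeholder: in the paper this would be the factor of safety of the
        # slip surface encoded by `genes`, computed by a rigid equilibrium method.
        return sum(g * g for g in genes) + 1.0

    def evolve(pop_size=40, n_genes=8, generations=200, mut_rate=0.1):
        pop = [[random.uniform(-1.0, 1.0) for _ in range(n_genes)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)                  # lower factor of safety = fitter
            parents = pop[:pop_size // 2]          # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_genes) # one-point crossover
                child = a[:cut] + b[cut:]
                for i in range(n_genes):           # per-gene Gaussian mutation
                    if random.random() < mut_rate:
                        child[i] += random.gauss(0.0, 0.1)
                children.append(child)
            pop = parents + children
        return min(pop, key=fitness)

    best = evolve()
    print(fitness(best))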
Hayat, T; Farooq, S; Alsaedi, A
2017-04-01
The primary objective of the present analysis is to model the peristalsis of a copper-water nanoliquid in a curved channel in the presence of first-order velocity and thermal slip conditions. Mixed convection, viscous dissipation and heat generation/absorption are also accounted for. The mathematical formulation is simplified under the assumptions of small Reynolds number and large wavelength. A regular perturbation technique is employed to find series solutions of the resulting equations for small Brinkman number. The final expressions for pressure gradient, pressure rise, stream function, velocity and temperature are obtained and discussed through graphs. Mathematica software is utilized to compute the solution of the system of equations and to plot the graphical results. The results indicate that inserting 30% copper nanoparticles into the base fluid (water) reduces the velocity and temperature by almost 3% and 40%, respectively, and that the size of the trapped bolus reduces by almost 20% with the insertion of 20% copper nanoparticles. Velocity and temperature are decreasing functions of the nanoparticle volume fraction, while the temperature rises when the heat generation parameter and Brinkman number are enhanced.
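To illustrate the regular perturbation technique in miniature, here is a toy SymPy example that expands the solution of a model algebraic problem in powers of a small parameter eps (standing in for the Brinkman number); it is not the paper's system of equations.

    # Toy regular-perturbation expansion in a small parameter eps.
    import sympy as sp

    eps = sp.symbols('eps')
    u0, u1, u2 = sp.symbols('u0 u1 u2')

    # Model problem: u = 1 + eps*u**2, expanded as u = u0 + eps*u1 + eps**2*u2.
    u = u0 + eps*u1 + eps**2*u2
    residual = sp.expand(u - 1 - eps*u**2)

    # Collect and solve order by order in eps.
    sol = {}
    for order, unknown in [(0, u0), (1, u1), (2, u2)]:
        eq = residual.coeff(eps, order).subs(sol)
        sol[unknown] = sp.solve(eq, unknown)[0]
    print(sol)   # {u0: 1, u1: 1, u2: 2}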
Soule, Pat LeRoy
1978-01-01
Water-surface profiles of the 25-, 50-, and 100-year recurrence interval discharges have been computed for all streams and reaches of channels in Fairfax County, Virginia, having a drainage area greater than 1 square mile, except for Dogue Creek, Little Hunting Creek, and that portion of Cameron Run above Lake Barcroft. Maps having a 2-foot contour interval and a horizontal scale of 1 inch equals 100 feet were used as the base on which flood boundaries were delineated for the 25-, 50-, and 100-year floods to be expected in each basin under ultimate development conditions. This report is one of a series and presents a discussion of the techniques employed in computing discharges and profiles, as well as the flood profiles and maps on which flood boundaries have been delineated for the Occoquan River and its tributaries within Fairfax County and those streams on Mason Neck within Fairfax County tributary to the Potomac River. (Woodard-USGS)
ACON: a multipurpose production controller for plasma physics codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snell, C.
1983-01-01
ACON is a BCON controller designed to run large production codes on the CTSS Cray-1 or the LTSS 7600 computers. ACON can also be operated interactively, with input from the user's terminal. The controller can run one code or a sequence of up to ten codes during the same job. Options are available to get and save Mass storage files, to perform Historian file updating operations, to compile and load source files, and to send out print and film files. Special features include the ability to retry after Mass failures, backup options for saving files, startup messages for the various codes, and the ability to reserve specified amounts of computer time after successive code runs. ACON's flexibility and power make it useful for running a number of different production codes.
Experimental Realization of High-Efficiency Counterfactual Computation.
Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng
2015-08-21
Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency was limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to frequent projection by the environment, while the computation result can be revealed by the final detection. A counterfactual efficiency of up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.
Running of scalar spectral index in multi-field inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Jinn-Ouk, E-mail: jinn-ouk.gong@apctp.org
We compute the running of the scalar spectral index in general multi-field slow-roll inflation. By incorporating explicit momentum dependence at the moment of horizon crossing, we can find the running straightforwardly. At the same time, we can distinguish the contributions from the quasi de Sitter background and the super-horizon evolution of the field fluctuations.
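For reference, with the usual conventions the running is the logarithmic derivative of the spectral index, entering a common parametrization of the curvature power spectrum (standard definitions, not specific to this paper):

    \alpha_s \equiv \frac{d n_s}{d \ln k}, \qquad
    \mathcal{P}_\zeta(k) = A_s \left(\frac{k}{k_*}\right)^{\,n_s - 1 + \frac{1}{2}\alpha_s \ln(k/k_*)}

where k_* is a pivot scale and A_s the amplitude at that scale.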
NASA Astrophysics Data System (ADS)
Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.
2015-12-01
For many environmental systems, model runtimes have remained very long as more capable computers have been used to add more processes and finer time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by the number of model runs, divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal the consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing these challenges include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods; both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of the fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit related to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and the surrogate models require that the solution of interest be a smooth function of the parameters of interest. How does the information obtained from the local methods typical of (2) compare with that from the globally averaged methods typical of (3) for typical systems? The discussion will use examples of the response of the Greenland glacier to global warming and surface water and groundwater modeling.
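The demand statement in the abstract can be written compactly (notation ours, not the authors'):

    \text{computational demand} = \frac{t_{\mathrm{run}} \times N_{\mathrm{runs}}}{P}

where t_run is the execution time of one model run, N_runs the number of runs required by the analysis, and P the available parallelization. With t_run measured in hours and the 50-100 original-model runs cited above, the appeal of frugal methods and surrogates is evident.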
Program Processes Thermocouple Readings
NASA Technical Reports Server (NTRS)
Quave, Christine A.; Nail, William, III
1995-01-01
The Digital Signal Processor for Thermocouples (DART) computer program implements a precise and fast method of converting voltage to temperature for large-temperature-range thermocouple applications. It is written using LabVIEW software. DART is available only as object code for use on Macintosh II FX or higher-series computers running System 7.0 or later, and on IBM PC-series and compatible computers running Microsoft Windows 3.1. The Macintosh version of DART (SSC-00032) requires LabVIEW 2.2.1 or 3.0 for execution. The IBM PC version (SSC-00031) requires LabVIEW 3.0 for Windows 3.1. LabVIEW is a software product of National Instruments and is not included with the program.
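Thermocouple voltage-to-temperature conversion is commonly done with piecewise inverse polynomials (e.g., the NIST ITS-90 forms). The sketch below shows that general form only; the coefficients are placeholders, not DART's calibration data.

    # Sketch of the usual thermocouple conversion: temperature from measured
    # voltage via an inverse polynomial T = sum(d_i * v**i). The coefficients
    # are placeholders; real work would use the NIST ITS-90 table for the
    # thermocouple type and voltage range in question.
    D = [0.0, 2.5e-2, 1.0e-7, -3.0e-11]   # placeholder d_i (T in C, v in uV)

    def microvolts_to_celsius(v_uv, coeffs=D):
        t, power = 0.0, 1.0
        for d in coeffs:                  # Horner-free evaluation for clarity
            t += d * power
            power *= v_uv
        return t

    print(microvolts_to_celsius(1000.0))  # example call for a 1 mV reading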
NASA Technical Reports Server (NTRS)
Mcenulty, R. E.
1977-01-01
The G189A simulation of the Shuttle Orbiter ECLSS was upgraded. All simulation library versions and simulation models were converted from the EXEC2 to the EXEC8 computer system, and a new program, G189PL, was added to the combination master program library. The program permits the post-plotting of up to 100 frames of plot data over any time interval of a G189 simulation run. The overlay structure of the G189A simulations was restructured to conserve computer core requirements and to minimize run time requirements.
INHYD: Computer code for intraply hybrid composite design. A users manual
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Sinclair, J. H.
1983-01-01
A computer program (INHYD) was developed for intraply hybrid composite design, and a users manual for INHYD is presented. INHYD embodies several composite micromechanics theories, intraply hybrid composite theories, and an integrated hygrothermomechanical theory. INHYD can be run in both interactive and batch modes. It has considerable flexibility and capability, which the user can exercise through several options. These options are demonstrated through appropriate INHYD runs in the manual.
Topology Optimization for Reducing Additive Manufacturing Processing Distortions
2017-12-01
...features that curl or warp under thermal load and are subsequently struck by the recoater blade/roller. Support structures act to wick heat away and... was run for 150 iterations. The material properties for all examples were Young's modulus E = 1 GPa, Poisson's ratio ν = 0.25, and thermal expansion... the element-birth model is significantly more computationally expensive for a full optimization run. Consider the computational complexity of a...
MindModeling@Home . . . and Anywhere Else You Have Idle Processors
2009-12-01
...was SETI@Home. It was established in 1999 for the purpose of demonstrating the utility of "distributed grid computing" by providing a mechanism for... the public imagination, and SETI@Home remains the longest-running and one of the most popular volunteer computing projects in the world. This... pursuits. Most of them, including SETI@Home, run on a software architecture called the Berkeley Open Infrastructure for Network Computing (BOINC). Some of...
NASA Astrophysics Data System (ADS)
Zhiying, Chen; Ping, Zhou
2017-11-01
Considering the computational precision and efficiency of robust optimization for complex mechanical assembly relationships such as turbine blade-tip radial running clearance, a hierarchical response surface robust optimization algorithm is proposed. The distributed collaborative response surface method is used to generate an assembly-system-level approximation model of the overall parameters and the blade-tip clearance; a set of samples of design parameters and objective response mean and/or standard deviation is then generated using the system approximation model and design-of-experiment methods. Finally, a new response surface approximation model is constructed from those samples, and this approximation model is used for the robust optimization process. The analysis results demonstrate that the proposed method can dramatically reduce the computational cost while ensuring computational precision. The presented research offers an effective way to perform robust optimization design of turbine blade-tip radial running clearance.
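A minimal sketch of the response-surface idea in one dimension: sample an expensive model at a few design points, fit a cheap polynomial surrogate, and optimize on the surrogate. The robust variant described above would fit surfaces to the response mean and/or standard deviation; the stand-in model below is hypothetical, not a clearance analysis.

    # Minimal response-surface sketch: fit a quadratic surrogate to sampled
    # (design, response) pairs, then optimize cheaply on the surrogate.
    import numpy as np

    rng = np.random.default_rng(0)

    def expensive_model(x):                  # stand-in for the assembly analysis
        return (x - 0.3)**2 + 0.05*rng.standard_normal()

    xs = np.linspace(-1.0, 1.0, 15)          # design of experiments (1-D here)
    ys = np.array([expensive_model(x) for x in xs])

    coeffs = np.polyfit(xs, ys, 2)           # quadratic response surface
    surrogate = np.poly1d(coeffs)

    grid = np.linspace(-1.0, 1.0, 1001)      # optimization on the cheap surrogate
    x_opt = grid[np.argmin(surrogate(grid))]
    print(x_opt)                             # close to the true optimum 0.3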
Implementing Parquet equations using HPX
NASA Astrophysics Data System (ADS)
Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark
A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for solutions to larger simulations of the parquet approximation for a system with impurities. The relevance of the parquet equations depends upon the ability to solve systems that require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from the stability of the solutions, necessitate running on large distributed systems. As computational resources trend towards the exascale and the limitations arising from computational resources vanish, the efficiency of large-scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations, which require the transfer of large amounts of data, should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897, with additional support from the Louisiana Board of Regents.
DualSPHysics: A numerical tool to simulate real breakwaters
NASA Astrophysics Data System (ADS)
Zhang, Feng; Crespo, Alejandro; Altomare, Corrado; Domínguez, José; Marzeddu, Andrea; Shang, Shao-ping; Gómez-Gesteira, Moncho
2018-02-01
The open-source code DualSPHysics is used in this work to compute the wave run-up on an existing dike on the Chinese coast using realistic dimensions, bathymetry and wave conditions. The GPU computing power of DualSPHysics allows simulating real engineering problems that involve complex geometries with high resolution in a reasonable computational time. The code is first validated by comparing the numerical free-surface elevation, the wave orbital velocities and the time series of the run-up with physical data from a wave flume. Those experiments include a smooth dike and an armored dike with two layers of cubic blocks. After validation, the code is applied to a real case to obtain the wave run-up under different incident wave conditions. In order to simulate the real open sea, spurious reflections from the wavemaker are removed by using an active wave absorption technique.
Prediction of sound radiated from different practical jet engine inlets
NASA Technical Reports Server (NTRS)
Zinn, B. T.; Meyer, W. L.
1980-01-01
Existing computer codes for calculating the far-field radiation patterns surrounding various practical jet engine inlet configurations under different excitation conditions were upgraded. The computer codes were refined and expanded so that they are now computationally more efficient by a factor of about three and are capable of producing accurate results up to nondimensional wave numbers of twenty. Computer programs were also developed to help generate accurate geometrical representations of the inlets to be investigated; these data are required as input for the programs that calculate the sound fields. The new geometry-generating program considerably reduces the time required to generate the input data, which was one of the most time-consuming steps in the process. The results of sample runs using the NASA-Lewis QCSEE inlet are presented, and comparisons of run times and accuracy are made between the old and upgraded computer codes. The overall accuracy of the computations is determined by comparing the results with simple source solutions.
NASA Astrophysics Data System (ADS)
Gerjuoy, Edward
2005-06-01
The security of messages encoded via the widely used RSA public key encryption system rests on the enormous computational effort required to find the prime factors of a large number N using classical (conventional) computers. In 1994 Peter Shor showed that for sufficiently large N, a quantum computer could perform the factoring with much less computational effort. This paper endeavors to explain, in a fashion comprehensible to the nonexpert, the RSA encryption protocol; the various quantum computer manipulations constituting the Shor algorithm; how the Shor algorithm performs the factoring; and the precise sense in which a quantum computer employing Shor's algorithm can be said to accomplish the factoring of very large numbers with less computational effort than a classical computer. It is made apparent that factoring N generally requires many successive runs of the algorithm. Our analysis reveals that the probability of achieving a successful factorization on a single run is about twice as large as commonly quoted in the literature.
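The classical reduction underlying Shor's algorithm can be demonstrated with a toy sketch: once the order r of a modulo N is known (the step the quantum computer accelerates), and r is even with a^(r/2) not congruent to -1 (mod N), the factors follow from greatest common divisors. Here the order is found by brute force, which is exactly what is infeasible classically for large N.

    # Toy classical illustration of the reduction Shor's algorithm relies on.
    from math import gcd

    def order(a, n):                  # brute force; the quantum computer's job
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    N = 15
    for a in (2, 7, 8, 11, 13):
        if gcd(a, N) != 1:
            continue
        r = order(a, N)
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            p = gcd(pow(a, r // 2) - 1, N)
            q = gcd(pow(a, r // 2) + 1, N)
            print(f"a={a:2d}  r={r}  factors: {p} x {q}")

As the abstract notes, a random choice of a does not always satisfy these conditions, which is one reason several runs of the algorithm are generally required.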
Programming the social computer.
Robertson, David; Giunchiglia, Fausto
2013-03-28
The aim of 'programming the global computer' was identified by Milner and others as one of the grand challenges of computing research. At the time this phrase was coined, it was natural to assume that this objective might be achieved primarily through extending programming and specification languages. The Internet, however, has brought with it a different style of computation that (although harnessing variants of traditional programming languages) operates in a style different to those with which we are familiar. The 'computer' on which we are running these computations is a social computer in the sense that many of the elementary functions of the computations it runs are performed by humans, and successful execution of a program often depends on properties of the human society over which the program operates. These sorts of programs are not programmed in a traditional way and may have to be understood in a way that is different from the traditional view of programming. This shift in perspective raises new challenges for the science of the Web and for computing in general.
Myers, E W; Mount, D W
1986-01-01
We describe a program which may be used to find approximate matches to a short predefined DNA sequence in a larger target DNA sequence. The program predicts the usefulness of specific DNA probes and sequencing primers and finds nearly identical sequences that might represent the same regulatory signal. The program is written in the C programming language and will run on virtually any computer system with a C compiler, such as the IBM/PC and other computers running under the MS/DOS and UNIX operating systems. The program has been integrated into an existing software package for the IBM personal computer (see article by Mount and Conrad, this volume). Some examples of its use are given. PMID:3753785
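A minimal dynamic-programming sketch of the approximate-matching task described (illustrative only; this is not the 1986 C program): report end positions in a target sequence where the probe matches with at most k differences (substitutions, insertions, deletions).

    # Semi-global edit-distance scan: find approximate matches of `pattern`
    # anywhere in `text` with at most k differences.
    def approx_matches(pattern, text, k):
        m = len(pattern)
        col = list(range(m + 1))              # distances against empty text prefix
        hits = []
        for j, t in enumerate(text, start=1):
            prev_diag, col[0] = col[0], 0     # a match may start at any position
            for i in range(1, m + 1):
                cur = min(col[i] + 1,                         # deletion
                          col[i - 1] + 1,                     # insertion
                          prev_diag + (pattern[i - 1] != t))  # (mis)match
                prev_diag, col[i] = col[i], cur
            if col[m] <= k:
                hits.append(j)                # 1-based end position of a match
        return hits

    print(approx_matches("GATTACA", "ACGATTTACAGG", 1))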
AGIS: Evolution of Distributed Computing information system for ATLAS
NASA Astrophysics Data System (ADS)
Anisenkov, A.; Di Girolamo, A.; Alandes, M.; Karavakis, E.
2015-12-01
ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabyte-scale data operations. It has evolved after the first period of LHC data taking (Run-1) in order to cope with the new challenges of the upcoming Run-2. In this paper we describe the evolution and recent developments of the ATLAS Grid Information System (AGIS), developed in order to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.
Jaschob, Daniel; Riffle, Michael
2012-07-30
Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
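The client-driven design can be sketched as a polling loop: the worker asks the server for work, runs it, and posts the result, which is why workers can sit behind firewalls and why load balancing is inherent. The endpoint URL and JSON fields below are hypothetical, not JobCenter's actual protocol.

    # Sketch of a client-driven worker: poll for a job, run it, post the result.
    import json
    import subprocess
    import time
    import urllib.request

    SERVER = "http://example.org/jobcenter"      # hypothetical endpoint

    def poll_once():
        with urllib.request.urlopen(f"{SERVER}/next-job") as resp:
            job = json.load(resp)                # e.g. {"id": 7, "cmd": [...]}
        if not job:
            return                               # nothing to do right now
        result = subprocess.run(job["cmd"], capture_output=True, text=True)
        body = json.dumps({"id": job["id"], "rc": result.returncode}).encode()
        req = urllib.request.Request(f"{SERVER}/result", data=body,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

    while True:                                  # idle workers simply ask for
        poll_once()                              # more work: inherent balancing
        time.sleep(10)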
CPU SIM: A Computer Simulator for Use in an Introductory Computer Organization-Architecture Class.
ERIC Educational Resources Information Center
Skrein, Dale
1994-01-01
CPU SIM, an interactive low-level computer simulation package that runs on the Macintosh computer, is described. The program is designed for instructional use in the first or second year of undergraduate computer science, to teach various features of typical computer organization through hands-on exercises. (MSE)
Flame-Vortex Studies to Quantify Markstein Numbers Needed to Model Flame Extinction Limits
NASA Technical Reports Server (NTRS)
Driscoll, James F.; Feikema, Douglas A.
2003-01-01
This work has quantified a database of Markstein numbers for unsteady flames; future work will quantify a database of flame extinction limits for unsteady conditions. Unsteady extinction limits have not been documented previously; both a stretch rate and a residence time must be measured, since extinction requires that the stretch rate be sufficiently large for a sufficiently long residence time. The Markstein number Ma was measured for an inwardly-propagating flame (IPF) that is negatively stretched under microgravity conditions. Computations also were performed using RUN-1DL to explain the measurements. The Markstein number of an inwardly-propagating flame, for both the microgravity experiment and the computations, is significantly larger than that of an outwardly-propagating flame (OPF). The computed profiles of the various species within the flame suggest reasons: computed hydrogen concentrations build up ahead of the IPF but not the OPF. Understanding was gained by running the computations for both simplified and full-chemistry conditions. To explain the experimental findings, numerical simulations of both inwardly and outwardly propagating spherical flames (with complex chemistry) were generated using the RUN-1DL code, which includes 16 species and 46 reactions.
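For orientation, the Markstein number Ma enters the standard linear stretch correction to the laminar flame speed (a textbook relation, not specific to this report):

    \frac{S_L}{S_L^0} = 1 - \mathrm{Ma}\,\mathrm{Ka}

where S_L^0 is the unstretched flame speed and Ka is the Karlovitz number (the dimensionless stretch rate). Extinction analyses then combine such a stretch measure with a residence time, as noted above.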
Design and performance of the virtualization platform for offline computing on the ATLAS TDAQ Farm
NASA Astrophysics Data System (ADS)
Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Twomey, M. S.; Zaytsev, A.
2014-06-01
With the LHC collider at CERN currently going through the period of Long Shutdown 1, there is an opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is suitable for running Monte Carlo (MC) production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of the design and deployment of a virtualized platform running on this computing resource and of its use to run large groups of CernVM-based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to guarantee the security and the usability of the ATLAS private network and to minimize interference with TDAQ's usage of the farm. OpenStack has been chosen to provide a cloud management layer. The experience gained in the last 3.5 months shows that the use of the TDAQ farm for the MC simulation contributes to the ATLAS data processing at the level of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.
Characterization of fission gas bubbles in irradiated U-10Mo fuel
Casella, Andrew M.; Burkes, Douglas E.; MacFarlan, Paul J.; ...
2017-06-06
A simple, repeatable method for characterization of fission gas bubbles in irradiated U-Mo fuels has been developed. This method involves mechanical potting and polishing of samples along with examination with a scanning electron microscope located outside of a hot cell. The commercially available software packages CellProfiler, MATLAB, and Mathematica are used to segment and analyze the captured images. The results are compared and contrasted. Finally, baseline methods for fission gas bubble characterization are suggested for consideration and further development.
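A minimal segmentation-and-measurement sketch in Python with scikit-image, standing in for the CellProfiler/MATLAB/Mathematica pipelines compared in the paper: threshold the image, label connected regions, and collect per-bubble areas.

    # Minimal bubble-segmentation sketch: threshold an SEM-style image,
    # label connected regions, and report per-region areas.
    import numpy as np
    from skimage import filters, measure, morphology

    def bubble_stats(image):                   # image: 2-D grayscale ndarray
        thresh = filters.threshold_otsu(image) # global Otsu threshold
        mask = image > thresh                  # bubbles assumed brighter
        mask = morphology.remove_small_objects(mask, min_size=20)
        labels = measure.label(mask)
        return [r.area for r in measure.regionprops(labels)]

    image = np.random.rand(256, 256)           # placeholder for a real SEM image
    print(len(bubble_stats(image)), "regions")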
A Numerical Method for Predicting Rayleigh Surface Wave Velocity in Anisotropic Crystals (Postprint)
2017-09-05
...generalized versions of the equations are very difficult to derive, even in symbolic math languages such as Mathematica. As a result, the equations are...
A Highly Functional Decision Paradigm Based on Nonlinear Adaptive Genetic Algorithm
1997-10-07
...significant speedup. Subject terms: Network Topology Optimization, Mathlink, Mathematica Plug-In, GA Route Optimizer, DSP... operations per second; 2.4 Gbytes/second sustainable on-chip data transfer rate; 400 Mb/s off-chip peak transfer rate; layer-to-layer interconnection...
Dill: an algorithm and a symbolic software package for doing classical supersymmetry calculations
NASA Astrophysics Data System (ADS)
Lučić, Vladan
1995-11-01
An algorithm is presented that formalizes the different steps in a classical supersymmetry (SUSY) calculation. Based on the algorithm, Dill, a symbolic software package that can perform the calculations, is developed in the Mathematica programming language. While the algorithm is quite general, the package is created for the 4-D, N = 1 model. Nevertheless, with little modification, the package could be used for other SUSY models. The package has been tested and some of the results are presented.
NASA Astrophysics Data System (ADS)
Pandya, Raaghav; Raja, Hammad; Enriquez-Torres, Delfino; Serey-Roman, Maria Ignacia; Hassebo, Yasser; Marciniak, Małgorzata
2018-02-01
The purpose of this research is to mathematically analyze the cylindrical shapes of flexible solar panels and compare their efficiency to that of flat panels. The efficiency is defined as the flux density, the ratio of the mathematical flux to the surface area. In addition, we describe the trajectory of the Sun at specific locations: the North Pole, the Equator, and a geostationary satellite above the Equator. The calculations were performed with the software packages Maple, Mathematica, and MATLAB.
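The flux-density metric can be checked numerically for a simple case: a half-cylindrical panel under unit-intensity sunlight arriving along -z. A flat panel facing the sun has flux density 1; the half cylinder comes out at 2/pi, about 0.637 (assumptions ours; the paper's geometries and sun trajectories are more elaborate).

    # Numeric check of flux density (flux / surface area) for a half cylinder
    # of radius r and length L under unit sunlight along -z.
    import numpy as np

    r, L, n = 1.0, 1.0, 100000
    theta = np.linspace(0.0, np.pi, n)       # half cylinder, z > 0 side
    dA = r * L * (theta[1] - theta[0])       # strip area per theta step
    cos_incidence = np.sin(theta)            # surface normal vs sun direction
    flux = np.sum(cos_incidence * dA)        # ~ 2*r*L, the projected area
    area = np.pi * r * L
    print(flux / area, 2/np.pi)              # both ~ 0.637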
Simulation of a Diode Pumped Alkali Laser; a Three Level Numerical Approach
2010-03-01
The model will be developed to aid in the research and design of new DPAL systems. A DPAL is a relatively new type of laser which relies on laser... DPAL system to the fidelity required to perform testing and investigation of new systems without the creation of an experimental apparatus. Hence, to...
Scalable load balancing for massively parallel distributed Monte Carlo particle transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Brien, M. J.; Brantley, P. S.; Joy, K. I.
2013-07-01
In order to run computer simulations efficiently on massively parallel computers with hundreds of thousands or millions of processors, care must be taken that the calculation is load balanced across the processors. Examining the workload of every processor leads to an unscalable algorithm, with run time at least as large as O(N), where N is the number of processors. We present a scalable load balancing algorithm, with run time O(log(N)), that involves iterated processor-pair-wise balancing steps, ultimately leading to a globally balanced workload. We demonstrate scalability of the algorithm up to 2 million processors on the Sequoia supercomputer at Lawrence Livermore National Laboratory.
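A serial sketch of the iterated pair-wise balancing idea (our reconstruction of a generic dimension-exchange scheme, not the authors' code): in round d each processor pairs with its partner across bit d of its rank, and the pair splits its combined workload evenly; after log2(N) such rounds the load is globally balanced to within rounding.

    # Serial simulation of processor-pair-wise (dimension-exchange) balancing.
    import random

    N = 16                                    # number of processors (power of 2)
    load = [random.randint(0, 100) for _ in range(N)]
    print("before:", load)

    rounds = N.bit_length() - 1               # log2(N)
    for d in range(rounds):
        for p in range(N):
            q = p ^ (1 << d)                  # partner across bit d
            if p < q:                         # each pair balances once
                total = load[p] + load[q]
                load[p], load[q] = total // 2, total - total // 2
    print("after: ", load)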
Performance of a supercharged direct-injection stratified-charge rotary combustion engine
NASA Technical Reports Server (NTRS)
Bartrand, Timothy A.; Willis, Edward A.
1990-01-01
A zero-dimensional thermodynamic performance computer model for direct-injection stratified-charge rotary combustion engines was modified and run for a single rotor supercharged engine. Operating conditions for the computer runs were a single boost pressure and a matrix of speeds, loads and engine materials. A representative engine map is presented showing the predicted range of efficient operation. After discussion of the engine map, a number of engine features are analyzed individually. These features are: heat transfer and the influence insulating materials have on engine performance and exhaust energy; intake manifold pressure oscillations and interactions with the combustion chamber; and performance losses and seal friction. Finally, code running times and convergence data are presented.
Decrease in Ground-Run Distance of Small Airplanes by Applying Electrically-Driven Wheels
NASA Astrophysics Data System (ADS)
Kobayashi, Hiroshi; Nishizawa, Akira
A new takeoff method for small airplanes was proposed. The ground-roll performance of an airplane driven by electrically-powered wheels was studied experimentally and computationally. The experiments verified that the ground-run distance was cut in half by combining the powered wheels with the propeller, without any increase in energy consumption during the ground roll. The computational analysis showed that the ground-run distance of the wheel-driven aircraft was independent of motor power once the motor's capability exceeded the friction limit between the tires and the ground. Furthermore, the distance was minimized when the angle of attack was set so that the wing generated negative lift.
Coupled circuit numerical analysis of eddy currents in an open MRI system.
Akram, Md Shahadat Hossain; Terada, Yasuhiko; Keiichiro, Ishi; Kose, Katsumi
2014-08-01
We performed a new coupled circuit numerical simulation of eddy currents in an open compact magnetic resonance imaging (MRI) system. Following the coupled circuit approach, the conducting structures were divided into subdomains along the length (or width) and the thickness, and by implementing coupled circuit concepts we simulated the transient responses of eddy currents for subdomains in different locations. We implemented the Eigen matrix technique to solve the network of coupled differential equations in order to speed up our simulation program. To compute the coupling relations between the biplanar gradient coil and any other conducting structure, we implemented the solid-angle form of Ampere's law; we also calculated the solid angle in three dimensions to compute inductive couplings in any subdomain of the conducting structures. Details of the temporal and spatial distribution of the eddy currents were then used in the secondary magnetic field calculation via the Biot-Savart law. On a desktop computer (programming platform: Wolfram Mathematica 8.0; processor: Intel(R) Core(TM)2 Duo E7500 @ 2.93 GHz; OS: Windows 7 Professional; memory (RAM): 4.00 GB), it took less than 3 min to simulate the entire calculation of eddy currents and fields, and approximately 6 min for the X-gradient coil. The results are given in the time-space domain for both the direct and the cross-terms of the eddy current magnetic fields generated by the Z-gradient coil. We also conducted free induction decay (FID) experiments on the eddy fields using a nuclear magnetic resonance (NMR) probe to verify our simulation results, and the simulation results were found to be in good agreement with the experimental results. In this study we also conducted simulations of the transient and spatial responses of the secondary magnetic field induced by the X-gradient coil. Our approach is fast and has much less computational complexity than conventional electromagnetic numerical simulation methods.
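A generic numeric Biot-Savart sketch, since that law is how the simulated eddy currents are turned into secondary fields: the field at a point from a current loop discretized into straight segments, B = (mu0*I/4pi) * sum(dl x r / |r|^3). The geometry here is a simple unit loop, not the paper's gradient-coil elements.

    # Biot-Savart field of a discretized circular current loop.
    import numpy as np

    MU0 = 4e-7 * np.pi

    def loop_field(point, radius=1.0, current=1.0, segments=720):
        phi = np.linspace(0.0, 2.0*np.pi, segments, endpoint=False)
        pts = np.stack([radius*np.cos(phi), radius*np.sin(phi),
                        np.zeros_like(phi)], axis=1)
        dl = np.roll(pts, -1, axis=0) - pts          # segment vectors
        mid = 0.5*(pts + np.roll(pts, -1, axis=0))   # segment midpoints
        r = point - mid
        rnorm = np.linalg.norm(r, axis=1, keepdims=True)
        dB = MU0*current/(4.0*np.pi) * np.cross(dl, r) / rnorm**3
        return dB.sum(axis=0)

    # Field at the loop centre; the analytic value is mu0*I/(2*R) in z.
    print(loop_field(np.array([0.0, 0.0, 0.0])), MU0/2.0)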
NASADIG - NASA DEVICE INDEPENDENT GRAPHICS LIBRARY (AMDAHL VERSION)
NASA Technical Reports Server (NTRS)
Rogers, J. E.
1994-01-01
The NASA Device Independent Graphics Library, NASADIG, can be used with many computer-based engineering and management applications. The library gives the user the opportunity to translate data into effective graphic displays for presentation. The software offers many features which allow the user flexibility in creating graphics. These include two-dimensional plots, subplot projections in 3D-space, surface contour line plots, and surface contour color-shaded plots. Routines for three-dimensional plotting, wireframe surface plots, surface plots with hidden line removal, and surface contour line plots are provided. Other features include polar and spherical coordinate plotting, world map plotting utilizing either cylindrical equidistant or Lambert equal area projection, plot translation, plot rotation, plot blowup, splines and polynomial interpolation, area blanking control, multiple log/linear axes, legends and text control, curve thickness control, and multiple text fonts (18 regular, 4 bold). NASADIG contains several groups of subroutines. Included are subroutines for plot area and axis definition; text set-up and display; area blanking; line style set-up, interpolation, and plotting; color shading and pattern control; legend, text block, and character control; device initialization; mixed alphabets setting; and other useful functions. The usefulness of many routines is dependent on the prior definition of basic parameters. The program's control structure uses a serial-level construct with each routine restricted for activation at some prescribed level(s) of problem definition. NASADIG provides the following output device drivers: Selanar 100XL, VECTOR Move/Draw ASCII and PostScript files, Tektronix 40xx, 41xx, and 4510 Rasterizer, DEC VT-240 (4014 mode), IBM AT/PC compatible with SmartTerm 240 emulator, HP Lasergrafix Film Recorder, QMS 800/1200, DEC LN03+ Laserprinters, and HP LaserJet (Series III). NASADIG is written in FORTRAN and is available for several platforms. NASADIG 5.7 is available for DEC VAX series computers running VMS 5.0 or later (MSC-21801), Cray X-MP and Y-MP series computers running UNICOS (COS-10049), and Amdahl 5990 mainframe computers running UTS (COS-10050). NASADIG 5.1 is available for UNIX-based operating systems (MSC-22001). The UNIX version has been successfully implemented on Sun4 series computers running SunOS, SGI IRIS computers running IRIX, Hewlett Packard 9000 computers running HP-UX, and Convex computers running Convex OS (MSC-22001). The standard distribution medium for MSC-21801 is a set of two 6250 BPI 9-track magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format. The standard distribution medium for COS-10049 and COS-10050 is a 6250 BPI 9-track magnetic tape in UNIX tar format. Other distribution media and formats may be available upon request. The standard distribution medium for MSC-22001 is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. Alternate distribution media and formats are available upon request. With minor modification, the UNIX source code can be ported to other platforms including IBM PC/AT series computers and compatibles. NASADIG is also available bundled with TRASYS, the Thermal Radiation Analysis System (COS-10026, DEC VAX version; COS-10040, CRAY version).
Multiple running speed signals in medial entorhinal cortex
Hinman, James R.; Brandon, Mark P.; Climer, Jason R.; Chapman, G. William; Hasselmo, Michael E.
2016-01-01
Grid cells in medial entorhinal cortex (MEC) can be modeled using oscillatory interference or attractor dynamic mechanisms that perform path integration, a computation requiring information about running direction and speed. The two classes of computational models often use either an oscillatory frequency or a firing rate that increases as a function of running speed. Yet it is currently not known whether these are two manifestations of the same speed signal or dissociable signals with potentially different anatomical substrates. We examined coding of running speed in MEC and identified these two speed signals to be independent of each other within individual neurons. The medial septum (MS) is strongly linked to locomotor behavior and removal of MS input resulted in strengthening of the firing rate speed signal, while decreasing the strength of the oscillatory speed signal. Thus two speed signals are present in MEC that are differentially affected by disrupted MS input. PMID:27427460
Running SINDA '85/FLUINT interactive on the VAX
NASA Technical Reports Server (NTRS)
Simmonds, Boris
1992-01-01
Computer software used as an engineering tool is typically run in three modes: batch, demand, and interactive. The first two are the most popular in the SINDA world. The third is less popular, probably because users lack access to the command procedure files for running SINDA '85, or lack familiarity with the SINDA '85 execution process (pre-processing, processing, compilation, linking, execution, and all of the file assignment, creation, deletion, and de-assignment). Interactive is the mode that makes thermal analysis with SINDA '85 a real-time design tool. This paper explains a command procedure, consisting of the minimum modifications required to an existing demand command procedure, that is sufficient to run SINDA '85 on the VAX in interactive mode. A sample problem is presented to exercise the procedure and to illustrate additional programming capabilities available in SINDA '85. Following the same guidelines, the process can be extended to other computer platforms on which SINDA '85 resides.
Multi-GPGPU Tsunami simulation at Toyama-bay
NASA Astrophysics Data System (ADS)
Furuyama, Shoichi; Ueda, Yuki
2017-07-01
Accelerated multi-General-Purpose Graphics Processing Unit (GPGPU) computation of tsunami run-up was achieved over a wide area (the whole of Toyama Bay in Japan) by means of a faster computation technique. Toyama Bay has active faults on the sea bed, so there is a high probability of earthquakes, and of tsunami waves in the event of a huge earthquake; predicting the tsunami run-up area is therefore important for reducing the harm the disaster causes to residents. However, the simulation is a very hard task because of the computer resources it demands. A resolution on the order of several meters is required for a run-up simulation, because artificial structures on the ground such as roads, buildings, and houses are very small; at the same time, a huge area must be simulated. In the Toyama Bay case the area is 42 [km] × 15 [km]. When 5 [m] × 5 [m] computational cells are used for the simulation, over 26,000,000 computational cells are generated. A normal desktop CPU computer took about 10 hours for the calculation. Reducing this calculation time is an important problem for an immediate tsunami run-up prediction system, which would in turn help protect residents of the coastal region. This study reduced the calculation time by using a multi-GPGPU system equipped with six NVIDIA Tesla K20X cards, with InfiniBand network connections between computer nodes via the MVAPICH library. As a result, a 5.16 times faster calculation was achieved on six GPUs than in the one-GPU case, an 86% parallel efficiency relative to linear speedup.
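The grid size and efficiency figures quoted above follow from simple arithmetic, as the sketch below shows. Note that a bare 42 km × 15 km rectangle at 5 m resolution gives 25.2 million cells, so the quoted figure of over 26 million presumably includes area beyond that rectangle.

```python
# Back-of-envelope check of the quoted grid size and multi-GPU efficiency.
domain_x_m, domain_y_m, cell_m = 42_000, 15_000, 5

n_cells = (domain_x_m // cell_m) * (domain_y_m // cell_m)
print(f"{n_cells:,} cells")        # 25,200,000 for the bare rectangle

speedup_on_6_gpus = 5.16           # reported speedup vs a single GPU
efficiency = speedup_on_6_gpus / 6
print(f"{efficiency:.0%} parallel efficiency")  # 86%
```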
An analysis of running skyline load path.
Ward W. Carson; Charles N. Mann
1971-01-01
This paper is intended for those who wish to prepare an algorithm to determine the load path of a running skyline. The mathematics of a simplified approach to this running skyline design problem are presented. The approach employs assumptions which reduce the complexity of the problem to the point where it can be solved on desk-top computers of limited capacities. The...
Job Priorities on Peregrine | High-Performance Computing | NREL
… allocation when run with qos=high. Requesting a Node Reservation: if you are doing work that requires real… helps the scheduler more efficiently plan resources for larger jobs. When projects reach their allocation limit, jobs associated with those projects will run at very low priority, which will ensure that these jobs run only when…
Running High-Throughput Jobs on Peregrine | High-Performance Computing |
unique name (using "name=") and usse the task name to create a unique output file name. For runs on and how many tasks to give to each worker at a time using the NITRO_COORD_OPTIONS environment . Finally, you start Nitro by executing launch_nitro.sh. Sample Nitro job script To run a job using the
AlgoRun: a Docker-based packaging system for platform-agnostic implemented algorithms.
Hosny, Abdelrahman; Vera-Licona, Paola; Laubenbacher, Reinhard; Favre, Thibauld
2016-08-01
There is a growing need in bioinformatics for easy-to-use software implementations of algorithms that are usable across platforms. At the same time, reproducibility of computational results is critical and often a challenge, due to source code changes over time and to dependencies. The approach introduced in this paper addresses both of these needs with AlgoRun, a dedicated packaging system for implemented algorithms, using Docker technology. Implemented algorithms, packaged with AlgoRun, can be executed through a user-friendly interface directly from a web browser or via a standardized RESTful web API to allow easy integration into more complex workflows. The packaged algorithm includes the entire software execution environment, thereby eliminating the common problem of software dependencies and the irreproducibility of computations over time. AlgoRun-packaged algorithms can be published on http://algorun.org, a centralized searchable directory of existing AlgoRun-packaged algorithms. AlgoRun is available at http://algorun.org, and the source code is available under the GPL license at https://github.com/algorun. Contact: laubenbacher@uchc.edu. Supplementary data are available at Bioinformatics online.
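As an illustration of the execution model, a client could invoke a locally running AlgoRun container over HTTP roughly as follows. The port, endpoint path, and field name here are invented for illustration; consult the AlgoRun documentation for the actual API contract.

```python
import requests

# Hypothetical endpoint of a locally running AlgoRun container.
ALGORUN_URL = "http://localhost:8765/v1/run"

payload = {"input": ">seq1\nACGTACGT"}  # algorithm-specific input data
response = requests.post(ALGORUN_URL, data=payload, timeout=600)
response.raise_for_status()
print(response.text)                    # the packaged algorithm's output
```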
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenough, Jeffrey A.; de Supinski, Bronis R.; Yates, Robert K.
2005-04-25
We describe the performance of the block-structured Adaptive Mesh Refinement (AMR) code Raptor on the 32k node IBM BlueGene/L computer. This machine represents a significant step forward towards petascale computing. As such, it presents Raptor with many challenges for utilizing the hardware efficiently. In terms of performance, Raptor shows excellent weak and strong scaling when running in single level mode (no adaptivity). Hardware performance monitors show Raptor achieves an aggregate performance of 3.0 Tflops in the main integration kernel on the 32k system. Results from preliminary AMR runs on a prototype astrophysical problem demonstrate the efficiency of the current software when running at large scale. The BG/L system is enabling a physics problem to be considered that represents a factor of 64 increase in overall size compared to the largest ones of this type computed to date. Finally, we provide a description of the development work currently underway to address our inefficiencies.
Machine-learning the string landscape
NASA Astrophysics Data System (ADS)
He, Yang-Hui
2017-11-01
We propose a paradigm for applying machine learning to the various databases which have emerged in the study of the string landscape. In particular, we establish neural networks as both classifiers and predictors and train them with a host of available data ranging from Calabi-Yau manifolds and vector bundles to quiver representations for gauge theories, using a novel framework of recasting geometrical and physical data as pixelated images. We find that even a relatively simple neural network can learn many significant quantities to astounding accuracy in a matter of minutes and can also predict hitherto unencountered results, thereby rendering the paradigm a valuable tool in physics as well as pure mathematics. Of course, this paradigm is useful not only to physicists but also to mathematicians; for instance, could our NN be trained well enough to approximate bundle cohomology calculations? This, and a host of other examples, we will now examine. Methodology: Neural networks are known for their complexity, usually involving a complicated directed graph, each node of which is a "perceptron" (an activation function imitating a neuron), among which many arrows encode input/output. Throughout this letter, we will use a rather simple multi-layer perceptron (MLP) consisting of 5 layers, three of which are hidden, with activation functions typically of the form of a logistic sigmoid or a hyperbolic tangent. The input layer is a linear layer of 100 to 1000 nodes, recognizing a tensor (as we will soon see, algebro-geometric objects such as Calabi-Yau manifolds or polytopes are generically configurations of integer tensors), and the output layer is a summation layer giving a number corresponding to a Hodge number, to the rank of a cohomology group, etc. Such an MLP can be implemented, for instance, in the latest versions of Wolfram Mathematica. With 500-1000 training rounds, the running time is merely about 5-20 minutes on an ordinary laptop. It is reassuring and pleasantly surprising that even such a relatively simple NN can achieve the level of accuracy shortly to be presented. This letter is a companion summary of the longer paper [42], where the interested reader can find more details of the computations and the data.
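As a rough illustration of the architecture described, namely a linear input layer, three hidden layers with tanh activations, and a summation-style output, here is a self-contained numpy sketch trained on synthetic integer tensors. All sizes, data, and hyperparameters are invented, and the paper's own experiments use Mathematica's built-in machinery rather than hand-rolled code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: integer "configuration" vectors in, one number out.
n_in, hidden, n_samples = 100, 64, 512
X = rng.integers(-5, 6, size=(n_samples, n_in)).astype(float)
w_true = rng.normal(size=n_in)
y = np.tanh(X @ w_true / n_in) + 0.01 * rng.normal(size=n_samples)

# Linear input, three tanh hidden layers, linear (summation) output.
sizes = [n_in, hidden, hidden, hidden, 1]
Ws = [rng.normal(scale=1/np.sqrt(m), size=(m, n)) for m, n in zip(sizes, sizes[1:])]
bs = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    acts = [x]
    for i, (W, b) in enumerate(zip(Ws, bs)):
        z = acts[-1] @ W + b
        acts.append(np.tanh(z) if i < len(Ws) - 1 else z)  # linear output layer
    return acts

lr = 1e-2
for epoch in range(500):                                   # "training rounds"
    acts = forward(X)
    delta = (acts[-1][:, 0] - y)[:, None] / n_samples      # d(MSE)/d(output)
    for i in reversed(range(len(Ws))):
        grad_W, grad_b = acts[i].T @ delta, delta.sum(axis=0)
        if i > 0:
            delta = (delta @ Ws[i].T) * (1 - acts[i] ** 2)  # tanh derivative
        Ws[i] -= lr * grad_W
        bs[i] -= lr * grad_b

print("final MSE:", float(np.mean((forward(X)[-1][:, 0] - y) ** 2)))
```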
ASDIR-II. Volume I. User Manual
1975-12-01
normally the most significant part of the overall aircraft IR signature. The radiance is directly dependent upon the geometric view factors, a set… factors as punched card output in a view factor computer run. For the view factor computer run IB49 through 53 and all IDS input A, from IDS-2 to IDS-6… may be excluded from the input string if the program execution is requested to stop after punching the view factors. Inputs required for punching…
Feasibility of Virtual Machine and Cloud Computing Technologies for High Performance Computing
2014-05-01
…Red Hat Enterprise Linux; SaaS, software as a service; VM, virtual machine; vNUMA, virtual non-uniform memory access; WRF, weather research and forecasting… previously mentioned in Chapter I Section B1 of this paper, which is used to run the weather research and forecasting (WRF) model in their experiments… against a VMware virtualization solution of WRF. The experiment consisted of running WRF in a standard configuration between the D-VTM and VMware while…
The Air Force Geophysics Laboratory Standalone Data Acquisition System: A Functional Description.
1980-10-09
the board are a buffer for the RUN/HALT front panel switch and a retriggerable one-shot multivibrator. This latter circuit senses the SRUN pulse train… recording on the data tapes, and providing the master timing source for data acquisition. An Electronic Research Company (ERC) model 2446 digital… the computer is fed to a retriggerable one-shot multivibrator on the board. (SRUN consists of a pulse train that is present when the computer is running…
Improved Algorithms Speed It Up for Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hazi, A
2005-09-20
Huge computers, huge codes, complex problems to solve. The longer it takes to run a code, the more it costs. One way to speed things up and save time and money is through hardware improvements--faster processors, different system designs, bigger computers. But another side of supercomputing can reap savings in time and speed: software improvements to make codes--particularly the mathematical algorithms that form them--run faster and more efficiently. Speed up math? Is that really possible? According to Livermore physicist Eugene Brooks, the answer is a resounding yes. ''Sure, you get great speed-ups by improving hardware,'' says Brooks, the deputy leader for Computational Physics in N Division, which is part of Livermore's Physics and Advanced Technologies (PAT) Directorate. ''But the real bonus comes on the software side, where improvements in software can lead to orders of magnitude improvement in run times.'' Brooks knows whereof he speaks. Working with Laboratory physicist Abraham Szoeke and others, he has been instrumental in devising ways to shrink the running time of what has, historically, been a tough computational nut to crack: radiation transport codes based on the statistical or Monte Carlo method of calculation. And Brooks is not the only one. Others around the Laboratory, including physicists Andrew Williamson, Randolph Hood, and Jeff Grossman, have come up with innovative ways to speed up Monte Carlo calculations using pure mathematics.
Hari, Pradip; Ko, Kevin; Koukoumidis, Emmanouil; Kremer, Ulrich; Martonosi, Margaret; Ottoni, Desiree; Peh, Li-Shiuan; Zhang, Pei
2008-10-28
Increasingly, spatial awareness plays a central role in many distributed and mobile computing applications. Spatially aware applications rely on information about the geographical position of compute devices and their supported services in order to support novel functionality. While many spatial application drivers already exist in mobile and distributed computing, very little systems research has explored how best to program these applications, to express their spatial and temporal constraints, and to allow efficient implementations on highly dynamic real-world platforms. This paper proposes the SARANA system architecture, which includes language and run-time system support for spatially aware and resource-aware applications. SARANA allows users to express spatial regions of interest, as well as trade-offs between quality of result (QoR), latency and cost. The goal is to produce applications that use resources efficiently and that can be run on diverse resource-constrained platforms ranging from laptops to personal digital assistants and to smart phones. SARANA's run-time system manages QoR and cost trade-offs dynamically by tracking resource availability and locations, brokering usage/pricing agreements and migrating programs to nodes accordingly. A resource cost model permeates the SARANA system layers, permitting users to express their resource needs and QoR expectations in units that make sense to them. Although we are still early in the system development, initial versions have been demonstrated on a nine-node system prototype.
Lee, Jae H.; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T.; Seo, Youngho
2014-01-01
The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum likelihood expectation maximization (MLEM) as used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge of computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal of eventually making it usable in a clinical setting. PMID:27081299
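For reference, the serial textbook form of the MLEM update that such an implementation parallelizes is the multiplicative rule x ← x · (Aᵀ(y / Ax)) / (Aᵀ1). A minimal non-distributed sketch on synthetic data:

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Serial MLEM: A maps image x to expected projections, y holds counts."""
    x = np.ones(A.shape[1])                 # flat non-negative start
    sens = A.sum(axis=0)                    # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                        # forward projection
        ratio = y / np.maximum(proj, eps)   # measured vs modeled counts
        x *= (A.T @ ratio) / np.maximum(sens, eps)  # multiplicative update
    return x

rng = np.random.default_rng(1)
A = rng.random((40, 16))                    # toy system matrix
x_true = rng.random(16) * 10
y = rng.poisson(A @ x_true)                 # Poisson projection data
print(mlem(A, y).round(2))
```

In the Spark/GraphX version, the forward and back projections become, in effect, distributed sparse linear-algebra operations over the cluster.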
The engineering design integration (EDIN) system. [digital computer program complex
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Reiners, S. J.
1974-01-01
A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.
NASA Astrophysics Data System (ADS)
Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.
2003-12-01
Modern laptop computers, and personal computers, can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this doesn't mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based, computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or written for an operating environment, that is unsupported on the local computer, (b) software libraries or utilities required to execute a program are not available on the user's computer, (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer, (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program, and (e) the computational program may require current information that is not available to all client computers. Until recently, distributed, server-based, computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and they executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform- and language-independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC Community, we have implemented several computational and utility programs using a Web Service architecture. We have hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
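To illustrate the invocation pattern (not the actual SCEC/CME interface: the endpoint, namespace, and operation names below are invented), a client-side call to a coordinate-conversion Web Service could look like this:

```python
import requests

# Hypothetical SOAP endpoint; the real service details are not given above.
ENDPOINT = "http://services.example.org/CoordConversion"
envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <LatLonToUTM xmlns="http://example.org/coord">
      <lat>34.05</lat><lon>-118.25</lon>
    </LatLonToUTM>
  </soap:Body>
</soap:Envelope>"""

resp = requests.post(ENDPOINT, data=envelope,
                     headers={"Content-Type": "text/xml; charset=utf-8"})
resp.raise_for_status()
print(resp.text)   # SOAP response carrying the UTM coordinates as XML
```

The XML envelope also illustrates the inefficiency noted above: every numeric parameter is serialized to text and parsed back on each call.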
Using Avizo Software on the Peregrine System | High-Performance Computing |
Avizo can be run remotely from the Peregrine visualization node. First, launch a TurboVNC remote desktop. Then, from a terminal in that remote desktop: % module load avizo % vglrun avizo. Running locally: Avizo can…
A Computing Infrastructure for Supporting Climate Studies
NASA Astrophysics Data System (ADS)
Yang, C.; Bambacus, M.; Freeman, S. M.; Huang, Q.; Li, J.; Sun, M.; Xu, C.; Wojcik, G. S.; Cahalan, R. F.; NASA Climate @ Home Project Team
2011-12-01
Climate change is one of the major challenges facing us on Earth in the 21st century. Scientists build many models to simulate past climate and predict climate change over the next decades or century. Most of the models are at low resolution, with some targeting high resolution in linkage to practical climate change preparedness. To calibrate and validate the models, millions of model runs are needed to find the best simulation and configuration. This paper introduces the NASA effort on the Climate@Home project to build a supercomputer based on advanced computing technologies, such as cloud computing, grid computing, and others. The Climate@Home computing infrastructure includes several aspects: 1) a cloud computing platform is utilized to manage potential spikes in access to the centralized components, such as the grid computing server for dispatching model runs and collecting results; 2) a grid computing engine is developed based on MapReduce to dispatch models and model configurations, and to collect simulation results and contribution statistics; 3) a portal serves as the entry point for the project, providing management, sharing, and data exploration for end users; 4) scientists can access customized tools to configure model runs and visualize model results; 5) the public can follow Twitter and Facebook to get the latest news about the project. This paper will introduce the latest progress of the project and demonstrate the operational system during the AGU Fall Meeting. It will also discuss how this technology can become a trailblazer for other climate studies and relevant sciences. It will share how the challenges in computation and software integration were solved.
Zhang, Yong; Otani, Akihito; Maginn, Edward J
2015-08-11
Equilibrium molecular dynamics is often used in conjunction with a Green-Kubo integral of the pressure tensor autocorrelation function to compute the shear viscosity of fluids. This approach is computationally expensive and is subject to a large amount of variability because the plateau region of the Green-Kubo integral is difficult to identify unambiguously. Here, we propose a time decomposition approach for computing the shear viscosity using the Green-Kubo formalism. Instead of one long trajectory, multiple independent trajectories are run and the Green-Kubo relation is applied to each trajectory. The averaged running integral as a function of time is fit to a double-exponential function with a weighting function derived from the standard deviation of the running integrals. Such a weighting function minimizes the uncertainty of the estimated shear viscosity and provides an objective means of estimating the viscosity. While the formal Green-Kubo integral requires an integration to infinite time, we suggest an integration cutoff time t_cut, which can be determined by the relative values of the running integral and the corresponding standard deviation. This approach for computing the shear viscosity can be easily automated and used in computational screening studies where human judgment and intervention in the data analysis are impractical. The method has been applied to the calculation of the shear viscosity of a relatively low-viscosity liquid, ethanol, and a relatively high-viscosity ionic liquid, 1-n-butyl-3-methylimidazolium bis(trifluoromethane-sulfonyl)imide ([BMIM][Tf2N]), over a range of temperatures. These test cases show that the method is robust and yields reproducible and reliable shear viscosity values.
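The time-decomposition idea can be sketched as follows on synthetic data; the saturating double-exponential below is of the kind described in the abstract, but the exact parameterization and weighting used in the paper may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, a, alpha, tau1, tau2):
    """Saturating double exponential; its t->inf plateau is the estimate."""
    return (a * alpha * tau1 * (1 - np.exp(-t / tau1))
            + a * (1 - alpha) * tau2 * (1 - np.exp(-t / tau2)))

rng = np.random.default_rng(2)
t = np.linspace(0.01, 50.0, 500)
true = double_exp(t, 1.0, 0.6, 2.0, 15.0)
# Eight synthetic "independent trajectories": noise grows with time,
# mimicking the behavior of real Green-Kubo running integrals.
runs = np.array([true + rng.normal(scale=0.02 * np.sqrt(t)) for _ in range(8)])

mean = runs.mean(axis=0)           # averaged running integral
std = runs.std(axis=0, ddof=1)     # used to weight the fit

popt, _ = curve_fit(double_exp, t, mean, p0=(1.0, 0.5, 1.0, 10.0),
                    sigma=std, absolute_sigma=True, maxfev=10000)
a, alpha, tau1, tau2 = popt
eta_est = a * (alpha * tau1 + (1 - alpha) * tau2)   # plateau value
print("estimated plateau (viscosity in reduced units):", eta_est)
```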
Shared address collectives using counter mechanisms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blocksome, Michael; Dozsa, Gabor; Gooding, Thomas M
A shared address space on a compute node stores data received from a network and data to transmit to the network. The shared address space includes an application buffer that can be directly operated upon by a plurality of processes, for instance, running on different cores on the compute node. A shared counter is used for one or more of: signaling arrival of the data across the plurality of processes running on the compute node, signaling completion of an operation performed by one or more of the plurality of processes, obtaining reservation slots by one or more of the plurality of processes, or combinations thereof.
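A userspace analogue of the mechanism, illustrative only and not the patented hardware implementation, can be sketched with a shared application buffer and a shared arrival counter:

```python
from multiprocessing import Process, Value, shared_memory
import numpy as np

N_PROCS = 4

def worker(rank, counter, shm_name):
    shm = shared_memory.SharedMemory(name=shm_name)
    buf = np.ndarray((N_PROCS,), dtype=np.float64, buffer=shm.buf)
    buf[rank] = float(rank)          # write into the shared application buffer
    with counter.get_lock():
        counter.value += 1           # signal arrival of this contribution
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=N_PROCS * 8)
    counter = Value("i", 0)
    procs = [Process(target=worker, args=(r, counter, shm.name))
             for r in range(N_PROCS)]
    for p in procs: p.start()
    for p in procs: p.join()
    assert counter.value == N_PROCS  # all contributions have arrived
    buf = np.ndarray((N_PROCS,), dtype=np.float64, buffer=shm.buf)
    print(buf)                       # [0. 1. 2. 3.]
    shm.close(); shm.unlink()
```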
Crew appliance computer program manual, volume 1
NASA Technical Reports Server (NTRS)
Russell, D. J.
1975-01-01
Trade studies of numerous appliance concepts for advanced spacecraft galley, personal hygiene, housekeeping, and other areas were made to determine which best satisfy the space shuttle orbiter and modular space station mission requirements. Analytical models of selected appliance concepts not currently included in the G-189A Generalized Environmental/Thermal Control and Life Support Systems (ETCLSS) Computer Program subroutine library were developed. The new appliance subroutines are given along with complete analytical model descriptions, solution methods, user's input instructions, and validation run results. The appliance components modeled were integrated with G-189A ETCLSS models for shuttle orbiter and modular space station, and results from computer runs of these systems are presented.
NASA Astrophysics Data System (ADS)
Steiger, Damian S.; Haener, Thomas; Troyer, Matthias
Quantum computers promise to transform our notions of computation by offering a completely new paradigm. A high level quantum programming language and optimizing compilers are essential components to achieve scalable quantum computation. In order to address this, we introduce the ProjectQ software framework - an open source effort to support both theorists and experimentalists by providing intuitive tools to implement and run quantum algorithms. Here, we present our ProjectQ quantum compiler, which compiles a quantum algorithm from our high-level Python-embedded language down to low-level quantum gates available on the target system. We demonstrate how this compiler can be used to control actual hardware and to run high-performance simulations.
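A minimal example in the spirit of the framework's published documentation (details may vary between versions) compiles and runs a Bell-state circuit on the default simulator backend:

```python
from projectq import MainEngine
from projectq.ops import All, CNOT, H, Measure

eng = MainEngine()                  # defaults to a simulator backend
qubits = eng.allocate_qureg(2)
H | qubits[0]                       # superposition on the first qubit
CNOT | (qubits[0], qubits[1])       # entangle into a Bell state
All(Measure) | qubits
eng.flush()                         # send the circuit through the compiler
print([int(q) for q in qubits])     # 00 or 11, each with probability 1/2
```

Swapping the backend, for example to one targeting real hardware, leaves the algorithm code unchanged, which is the point of the compiler stack described above.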
PHREEQCI; a graphical user interface for the geochemical computer program PHREEQC
Charlton, Scott R.; Macklin, Clifford L.; Parkhurst, David L.
1997-01-01
PhreeqcI is a Windows-based graphical user interface for the geochemical computer program PHREEQC. PhreeqcI provides the capability to generate and edit input data files, run simulations, and view text files containing simulation results, all within the framework of a single interface. PHREEQC is a multipurpose geochemical program that can perform speciation, inverse, reaction-path, and 1D advective reaction-transport modeling. Interactive access to all of the capabilities of PHREEQC is available with PhreeqcI. The interface is written in Visual Basic and will run on personal computers under the Windows 3.1, Windows 95, and Windows NT operating systems.
Local rollback for fault-tolerance in parallel computing systems
Blumrich, Matthias A [Yorktown Heights, NY; Chen, Dong [Yorktown Heights, NY; Gara, Alan [Yorktown Heights, NY; Giampapa, Mark E [Yorktown Heights, NY; Heidelberger, Philip [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Steinmacher-Burow, Burkhard [Boeblingen, DE; Sugavanam, Krishnan [Yorktown Heights, NY
2012-01-24
A control logic device performs a local rollback in a parallel super computing system. The super computing system includes at least one cache memory device. The control logic device determines a local rollback interval. The control logic device runs at least one instruction in the local rollback interval. The control logic device evaluates whether an unrecoverable condition occurs while running the at least one instruction during the local rollback interval. The control logic device checks whether an error occurs during the local rollback. The control logic device restarts the local rollback interval if the error occurs and the unrecoverable condition does not occur during the local rollback interval.
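The claimed control flow reads roughly as follows in schematic form; every name here is an invented stand-in, not the patented hardware logic.

```python
import random

def execute(interval):
    """Toy executor: occasionally reports a recoverable soft error.
    Returns (error_detected, unrecoverable_condition)."""
    return random.random() < 0.2, False

def run_with_local_rollback(work_items, interval_len, max_retries=10):
    i = 0
    while i < len(work_items):
        # Stand-in for checkpointing state (e.g. via the cache) at the
        # start of a local rollback interval.
        for attempt in range(max_retries):
            error, unrecoverable = execute(work_items[i:i + interval_len])
            if unrecoverable:
                raise RuntimeError("fall back to global checkpoint/restart")
            if not error:
                break                # interval committed
            # error detected but recoverable: restart the same interval locally
        i += interval_len

random.seed(3)
run_with_local_rollback(list(range(100)), interval_len=8)
print("completed using local rollbacks only")
```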
[Groupamatic 360 C1 and automated blood donor processing in a transfusion center].
Guimbretiere, J; Toscer, M; Harousseau, H
1978-03-01
Automation of the donor management flow path is controlled by: a three-slip "port-a-punch" card; the Groupamatic unit, with results sorted out on punched paper tape; and the management computer, connected off-line to the Groupamatic. Data tracking at blood collection time is done by punching a card, with the donor card used as a master card. The Groupamatic performs: standard blood grouping, with one run for registered donors and two runs for new donors; phenotyping, with two runs; and screening for irregular antibodies. The management computer checks the correlation between the data of the two runs, or between the data of a single run and that of the previous file. It updates the data resident in the central file and prints out: the controls of the different blood groups for the red cell panel, the listing of error messages, the listing of emergency call-ups, the listing of collected blood units on arrival at the blood center (with quantitative and qualitative information such as number of blood units collected, donor addresses, etc.), statistics, donor cards, and diplomas.
Abusive User Policy | High-Performance Computing | NREL
…below. First incident: the user's ability to run new jobs or store new data will be suspended temporarily; once the user has acknowledged and participated in a remedy, the ability to run new jobs or store new data will be restored. Second incident: suspend running new jobs or storing new data, and terminate jobs if necessary. The system and…
PNNL streamlines energy-guzzling computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckman, Mary T.; Marquez, Andres
In a room the size of a garage, two rows of six-foot-tall racks holding supercomputer hard drives sit back-to-back. Thin tubes and wires snake off the hard drives, slithering into the corners. Stepping between the rows, a rush of heat whips around you -- the air from fans blowing off processing heat. But walk farther in, between the next racks of hard drives, and the temperature drops noticeably. These drives are being cooled by a non-conducting liquid that runs right over the hardworking processors. The liquid carries the heat away in tubes, saving the air a few degrees. This is the Energy Smart Data Center at Pacific Northwest National Laboratory. The bigger, faster, and meatier supercomputers get, the more energy they consume. PNNL's Andres Marquez has developed this test bed to learn how to train the behemoths in energy efficiency. The work will help supercomputers perform better as well. Processors have to keep cool or suffer from "thermal throttling," says Marquez. "That's the performance threshold where the computer is too hot to run well. That threshold is an industry secret." The center at EMSL, DOE's national scientific user facility at PNNL, harbors several ways of experimenting with energy usage. For example, the room's air conditioning is isolated from the rest of EMSL -- pipes running beneath the floor carry temperature-controlled water through heat exchangers to cooling towers outside. "We can test whether it's more energy efficient to cool directly on the processing chips or out in the water tower," says Marquez. The hard drives feed energy and temperature data to a network server running specially designed software that controls and monitors the data center. To test the center's limits, the team runs the processors flat out -- not only on carefully controlled test programs in the Energy Smart computers, but also on real-world software from other EMSL research, such as regional weather forecasting models. Marquez's group is also developing "power aware computing," where the computer programs themselves perform calculations more energy efficiently. Maybe once computers get smart about energy, they'll have tips for their users.
A Menu-Driven Interface to Unix-Based Resources
Evans, Elizabeth A.
1989-01-01
Unix has often been overlooked in the past as a viable operating system for anyone other than computer scientists. Its terseness, non-mnemonic nature of the commands, and the lack of user-friendly software to run under it are but a few of the user-related reasons which have been cited. It is, nevertheless, the operating system of choice in many cases. This paper describes a menu-driven interface to Unix which provides user-friendlier access to the software resources available on the computers running under Unix.
The Impact of Typhoons on the Ocean in the Pacific (ITOP) Field and Data Management Support
2011-12-16
in October of 2009 to develop effective sampling strategies for 2010. EOL/Computing, Data and Software Facility (CDS) supported the ITOP Dry Run… measurement strategies necessitated a dry run experiment in October of 2009 to develop effective sampling strategies for 2010. EOL/Computing, Data and… contains products from 21 September through 31 October 2009. The catalog remains accessible at EOL at the above-mentioned URL. The products listed by…
Building Computer-Based Experiments in Psychology without Programming Skills.
Ruisoto, Pablo; Bellido, Alberto; Ruiz, Javier; Juanes, Juan A
2016-06-01
Research in Psychology usually requires building and running experiments. Although this task has traditionally required scripting, recent computer tools based on graphical interfaces offer new opportunities in this field for researchers without programming skills. The purpose of this study is to illustrate and provide a comparative overview of two of the main free, open-source "point and click" software packages for building and running experiments in Psychology: PsychoPy and OpenSesame. Recommendations for their potential use are further discussed.
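For comparison with the graphical route, the scripting route is itself compact. A minimal PsychoPy script following the library's documented API shows a stimulus and waits for a keypress (PsychoPy also ships the Builder GUI, so no scripting is strictly required):

```python
from psychopy import visual, core, event

win = visual.Window(size=(800, 600), color="black")
stim = visual.TextStim(win, text="Press any key", color="white")

stim.draw()
win.flip()               # display the stimulus
keys = event.waitKeys()  # block until a response
print("response:", keys)

win.close()
core.quit()
```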
Interoperability...NMCI and Beyond
2001-05-31
wireless. "On The Road": pagers, cell phones, palm-size PDAs, two-way pagers, hand-held computing devices, laptop computers, two-way radios… combat capability… [Chart: cost per hour: Electric Power $0.20; NMCI Seat $1.38; First Run Movie $4.00; Cell Phone Air Time $11.00; Fed. Civilian Salary (mean) $23.80; F/A-18 Flying Hour $1,134.00] DSN…
NASA Astrophysics Data System (ADS)
Flores-Bustamante, Mario C.; Rosete-Aguilar, Martha; Calixto, Sergio
2016-03-01
A lens containing a liquid medium and having at least one elastic membrane as one of its components is known as an elastic membrane lens (EML). The elastic membrane may have a constant or a variable thickness. The optical properties of the EML change when the profile of its elastic membrane(s) is modified. EMLs formed of constant-thickness elastic membranes have been studied extensively; however, information on EMLs using elastic membranes of variable thickness is limited. In this work, we present simulation results for the mechanical and optical behavior of two EMLs with variable-thickness (convex-plane) membranes. The profiles of their surfaces were modified by increases in the volume of the liquid medium. The model of the convex-plane membranes, as well as the simulation of their mechanical behavior, was built in Solidworks® software, and the surface points of the deformed elastic lens were obtained. Experimental stress-strain data, obtained from a simple tensile test of silicone rubber according to the ASTM D638 norm, were used in the simulation. Algebraic expressions (the Schwarzschild formula, with up to four deformation coefficients, in a cylindrical coordinate system (r, z)) for the meridional profiles of the first and second surfaces of the deformed convex-plane membranes were obtained using the results from Solidworks® and a program in the software Mathematica®. The optical performance of the EML was obtained by simulation using the software OSLO® and the algebraic expressions obtained in Mathematica®.
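The Schwarzschild expansion referred to above is commonly written as the even-asphere profile below; this is a standard form with four deformation coefficients, and the paper's exact parameterization in (r, z) may differ.

```latex
z(r) = \frac{c\,r^{2}}{1 + \sqrt{1 - (1+k)\,c^{2} r^{2}}}
       + A_{4} r^{4} + A_{6} r^{6} + A_{8} r^{8} + A_{10} r^{10}
```

Here c is the vertex curvature, k the conic constant, and A4, A6, A8, A10 the deformation coefficients fitted to the deformed surface points.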
Barlough, J E; Jacobson, R H; Downing, D R; Lynch, T J; Scott, F W
1987-01-01
The computer-assisted, kinetics-based enzyme-linked immunosorbent assay for coronavirus antibodies in cats was calibrated to the conventional indirect immunofluorescence assay by linear regression analysis and computerized interpolation (generation of "immunofluorescence assay-equivalent" titers). Procedures were developed for normalization and standardization of kinetics-based enzyme-linked immunosorbent assay results through incorporation of five different control sera of predetermined ("expected") titer in daily runs. When used with such sera and with computer assistance, the kinetics-based enzyme-linked immunosorbent assay minimized both within-run and between-run variability while allowing also for efficient data reduction and statistical analysis and reporting of results. PMID:3032390
Simple Simulation Algorithms and Sample Applications
NASA Astrophysics Data System (ADS)
Kröger, Martin
This section offers basic recipes and sample applications which allow the reader to immediately start his/her own simulation project on topics we dealt with in this book. Concerning molecular dynamics and Monte Carlo simulation there are, of course, several useful books already available which describe the 'art of simulation' [141, 156, 256] in an exhaustive way. The reason we print some simple codes is that we skipped algorithmic details in the foregoing chapters. Simulations are always performed using dimensionless numbers, and all dimensional quantities can be expressed in terms of reduced units, cf. Sect. 4.3 for conventional Lennard-Jones units. In this chapter, we concentrate on the necessary, and skip anything more sophisticated. The codes have been used in classrooms, are obviously open for modifications and extensions, and offer not only an executable but all necessary formulas for doing simulations in the correct (which is often essential) order. The overall spirit is as follows: codes are short, run without changes, demonstrate the main principle in a modular fashion, and are thus deliberately left open with regard to efficiency issues and extensions. Algorithms are presented in the Matlab™ language, which is mostly directly portable to programming languages like Fortran, C, or Mathematica™. For an introduction we refer to [423]. Additional commands needed to visualize the results are given in the figure title for each application. Simulation codes, in a less modular fashion, are also available online at www.complexfluids.ethz.ch. Functions are shared over sections; for that reason we begin with an alphabetic list of all (non-builtin) functions in this chapter.
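In the same spirit as the book's Matlab snippets, here is a Python analogue of the most basic ingredient, the Lennard-Jones pair potential and force magnitude in reduced units (sigma = epsilon = 1):

```python
import numpy as np

def lj_potential(r):
    """U*(r*) = 4 (r*^-12 - r*^-6) in reduced (Lennard-Jones) units."""
    inv6 = r ** -6
    return 4.0 * (inv6 ** 2 - inv6)

def lj_force(r):
    """F* = -dU*/dr* = 24 (2 r*^-12 - r*^-6) / r*."""
    inv6 = r ** -6
    return 24.0 * (2.0 * inv6 ** 2 - inv6) / r

r = np.linspace(0.9, 2.5, 5)
print(lj_potential(r))
print(lj_force(r))
print(lj_potential(2 ** (1 / 6)))   # minimum: U* = -1 at r* = 2^(1/6)
```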
NASA Technical Reports Server (NTRS)
Cole, Bjorn; Chung, Seung
2012-01-01
One of the challenges of systems engineering is working multidisciplinary problems in a cohesive manner. When planning analysis of these problems, system engineers must trade time and cost against analysis quality and quantity. The quality often correlates with greater run time in multidisciplinary models, and the quantity is associated with the number of alternatives that can be analyzed. The trade-off is due to the resource-intensive process of creating a cohesive multidisciplinary systems model and analysis. Furthermore, reuse or extension of the models used in one stage of a product life cycle for another is a major challenge. Recent developments have enabled a much less resource-intensive and more rigorous approach than hand-written translation scripts between multidisciplinary models and their analyses. The key is to work from a core systems model defined in a MOF-based language such as SysML and to leverage the emerging tool ecosystem, such as Query/View/Transformation (QVT), from the OMG community. SysML was designed to model multidisciplinary systems. The QVT standard was designed to transform SysML models into other models, including those leveraged by engineering analyses. The Europa Habitability Mission (EHM) team has begun to exploit these capabilities. In one case, a Matlab/Simulink model is generated on the fly from a system description for power analysis written in SysML. In a more general case, symbolic analysis (supported by Wolfram Mathematica) is coordinated by data objects transformed from the systems model, enabling extremely flexible and powerful design exploration and analytical investigations of expected system performance.
2012-01-01
Background Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud”) and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/. PMID:22846423
Changes in running kinematics, kinetics, and spring-mass behavior over a 24-h run.
Morin, Jean-Benoît; Samozino, Pierre; Millet, Guillaume Y
2011-05-01
This study investigated the changes in running mechanics and spring-mass behavior over a 24-h treadmill run (24TR). Kinematics, kinetics, and spring-mass characteristics of the running step were assessed in 10 experienced ultralong-distance runners before, every 2 h, and after a 24TR using an instrumented treadmill dynamometer. These measurements were performed at 10 km/h, and mechanical parameters were sampled at 1000 Hz for 10 consecutive steps. Contact and aerial times were determined from ground reaction force (GRF) signals and used to compute step frequency. Maximal GRF, loading rate, downward displacement of the center of mass, and leg length change during the support phase were determined and used to compute both vertical and leg stiffness. Subjects' running pattern and spring-mass behavior significantly changed over the 24TR with a 4.9% higher step frequency on average (because of a significantly 4.5% shorter contact time), a lower maximal GRF (by 4.4% on average), a 13.0% lower leg length change during contact, and an increase in both leg and vertical stiffness (+9.9% and +8.6% on average, respectively). Most of these changes were significant from the early phase of the 24TR (fourth to sixth hour of running) and could be speculated as contributing to an overall limitation of the potentially harmful consequences of such a long-duration run on subjects' musculoskeletal system. During a 24TR, the changes in running mechanics and spring-mass behavior show a clear shift toward a higher oscillating frequency and stiffness, along with lower GRF and leg length change (hence a reduced overall eccentric load) during the support phase of running.
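The derived quantities named above follow from the standard spring-mass definitions: step frequency is the reciprocal of contact plus aerial time, and the stiffnesses divide peak vertical force by the vertical and leg-length displacements. A sketch with invented but plausible values for running near 10 km/h:

```python
# Invented illustrative values; the study measured these per subject.
t_contact, t_aerial = 0.30, 0.12   # s
f_max = 1500.0                     # peak vertical GRF, N
delta_z = 0.055                    # downward displacement of the CoM, m
delta_leg = 0.14                   # leg length change during contact, m

step_frequency = 1.0 / (t_contact + t_aerial)   # Hz
k_vert = f_max / delta_z                        # vertical stiffness, N/m
k_leg = f_max / delta_leg                       # leg stiffness, N/m

print(f"step frequency: {step_frequency:.2f} Hz")   # 2.38 Hz
print(f"vertical stiffness: {k_vert:.0f} N/m")      # 27273 N/m
print(f"leg stiffness: {k_leg:.0f} N/m")            # 10714 N/m
```

A shorter contact time raises step frequency, and a smaller leg-length change raises leg stiffness, which matches the direction of the changes the study reports over the 24-h run.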
ERIC Educational Resources Information Center
Shade, Daniel D.
1994-01-01
Provides advice and suggestions for educators or parents who are trying to decide what type of computer to buy to run the latest computer software for children. Suggests that purchasers should buy a computer with as large a hard drive as possible, at least 10 megabytes of RAM, and a CD-ROM drive. (MDM)
Use of UNIX in large online processor farms
NASA Astrophysics Data System (ADS)
Biel, Joseph R.
1990-08-01
There has been a recent rapid increase in the power of RISC computers running the UNIX operating system. Fermilab has begun to make use of these computers in the next generation of offline computer farms. It is also planning to use such computers in online computer farms. Issues involved in constructing online UNIX farms are discussed.
Optimizing a mobile robot control system using GPU acceleration
NASA Astrophysics Data System (ADS)
Tuck, Nat; McGuinness, Michael; Martin, Fred
2012-01-01
This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.
NASA Technical Reports Server (NTRS)
Meyer, Donald; Uchenik, Igor
2007-01-01
The PPC750 Performance Monitor (Perfmon) is a computer program that helps the user to assess the performance characteristics of application programs running under the Wind River VxWorks real-time operating system on a PPC750 computer. Perfmon generates a user-friendly interface and collects performance data by use of performance registers provided by the PPC750 architecture. It processes and presents run-time statistics on a per-task basis over a repeating time interval (typically, several seconds or minutes) specified by the user. When the Perfmon software module is loaded with the user's software modules, it is available for use through Perfmon commands, without any modification of the user's code and at negligible performance penalty. Per-task run-time performance data made available by Perfmon include percentage time, number of instructions executed per unit time, dispatch ratio, stack high water mark, and level-1 instruction and data cache miss rates. The performance data are written to a file specified by the user or to the serial port of the computer.
Volunteer Computing Experience with ATLAS@Home
NASA Astrophysics Data System (ADS)
Adam-Bourdarios, C.; Bianchi, R.; Cameron, D.; Filipčič, A.; Isacchini, G.; Lançon, E.; Wu, W.;
2017-10-01
ATLAS@Home is a volunteer computing project which allows the public to contribute to computing for the ATLAS experiment through their home or office computers. The project has grown continuously since its creation in mid-2014 and now counts almost 100,000 volunteers. The combined volunteers' resources make up a sizeable fraction of overall resources for ATLAS simulation. This paper takes stock of the experience gained so far and describes the next steps in the evolution of the project. These improvements include running natively on Linux to ease deployment on, for example, university clusters; using multiple cores inside one task to reduce the memory requirements; and running different types of workload, such as event generation. In addition to technical details, the success of ATLAS@Home as an outreach tool is evaluated.
MPI_XSTAR: MPI-based Parallelization of the XSTAR Photoionization Program
NASA Astrophysics Data System (ADS)
Danehkar, Ashkbiz; Nowak, Michael A.; Lee, Julia C.; Smith, Randall K.
2018-02-01
We describe a program for the parallel implementation of multiple runs of XSTAR, a photoionization code that is used to predict the physical properties of an ionized gas from its emission and/or absorption lines. The parallelization program, called MPI_XSTAR, has been developed and implemented in the C++ language by using the Message Passing Interface (MPI) protocol, a conventional standard of parallel computing. We have benchmarked parallel multiprocessing executions of XSTAR, using MPI_XSTAR, against a serial execution of XSTAR, in terms of parallelization speedup and computing resource efficiency. Our experience indicates that the parallel execution runs significantly faster than the serial execution; however, the efficiency in terms of computing resource usage decreases as the number of processors increases.
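The underlying pattern, distributing many independent serial runs across MPI ranks, can be sketched with mpi4py as follows; the parameter grid and round-robin assignment are illustrative, not MPI_XSTAR's actual scheduling.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# One (hypothetical) parameter set per XSTAR invocation.
param_grid = [{"column_density": 10 ** (20 + 0.2 * i)} for i in range(40)]

# Round-robin assignment of runs to ranks; each run is serial internally.
my_runs = []
for i in range(rank, len(param_grid), size):
    # A real driver would launch the xstar executable here, e.g. with
    # subprocess.run(...); this sketch just records the assignment.
    my_runs.append(i)

all_runs = comm.gather(my_runs, root=0)
if rank == 0:
    total = sum(len(chunk) for chunk in all_runs)
    print(f"completed {total} runs across {size} processes")
```

One common cause of the efficiency loss noted above is load imbalance in such embarrassingly parallel drivers: the longest-running task in each round leaves other ranks idle, so speedup typically grows sublinearly with processor count.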
ATLAS Distributed Computing Experience and Performance During the LHC Run-2
NASA Astrophysics Data System (ADS)
Filipčič, A.;
2017-10-01
ATLAS Distributed Computing during LHC Run-1 was challenged by steadily increasing computing, storage and network requirements. In addition, the complexity of processing task workflows and their associated data management requirements led to a new paradigm in the ATLAS computing model for Run-2, accompanied by extensive evolution and redesign of the workflow and data management systems. The new systems were put into production at the end of 2014, and gained robustness and maturity during 2015 data taking. ProdSys2, the new request and task interface; JEDI, the dynamic job execution engine developed as an extension to PanDA; and Rucio, the new data management system, form the core of Run-2 ATLAS distributed computing engine. One of the big changes for Run-2 was the adoption of the Derivation Framework, which moves the chaotic CPU and data intensive part of the user analysis into the centrally organized train production, delivering derived AOD datasets to user groups for final analysis. The effectiveness of the new model was demonstrated through the delivery of analysis datasets to users just one week after data taking, by completing the calibration loop, Tier-0 processing and train production steps promptly. The great flexibility of the new system also makes it possible to execute part of the Tier-0 processing on the grid when Tier-0 resources experience a backlog during high data-taking periods. The introduction of the data lifetime model, where each dataset is assigned a finite lifetime (with extensions possible for frequently accessed data), was made possible by Rucio. Thanks to this the storage crises experienced in Run-1 have not reappeared during Run-2. In addition, the distinction between Tier-1 and Tier-2 disk storage, now largely artificial given the quality of Tier-2 resources and their networking, has been removed through the introduction of dynamic ATLAS clouds that group the storage endpoint nucleus and its close-by execution satellite sites. All stable ATLAS sites are now able to store unique or primary copies of the datasets. ATLAS Distributed Computing is further evolving to speed up request processing by introducing network awareness, using machine learning and optimisation of the latencies during the execution of the full chain of tasks. The Event Service, a new workflow and job execution engine, is designed around check-pointing at the level of event processing to use opportunistic resources more efficiently. ATLAS has been extensively exploring possibilities of using computing resources extending beyond conventional grid sites in the WLCG fabric to deliver as many computing cycles as possible and thereby enhance the significance of the Monte-Carlo samples to deliver better physics results. The exploitation of opportunistic resources was at an early stage throughout 2015, at the level of 10% of the total ATLAS computing power, but in the next few years it is expected to deliver much more. In addition, demonstrating the ability to use an opportunistic resource can lead to securing ATLAS allocations on the facility, hence the importance of this work goes beyond merely the initial CPU cycles gained. In this paper, we give an overview and compare the performance, development effort, flexibility and robustness of the various approaches.
Analytic Modeling of the Hydrodynamic, Thermal, and Structural Behavior of Foil Thrust Bearings
NASA Technical Reports Server (NTRS)
Bruckner, Robert J.; DellaCorte, Christopher; Prahl, Joseph M.
2005-01-01
A simulation and modeling effort is conducted on gas foil thrust bearings. A foil bearing is a self-acting hydrodynamic device capable of separating the stationary and rotating components of rotating machinery by a film of air or other gaseous lubricant. Although simple in appearance, these bearings have proven to be complicated devices to analyze. They are sensitive to fluid-structure interaction, use a compressible gas as a lubricant, may not be in the fully continuum range of fluid mechanics, and operate in the range where viscous heat generation is significant. These factors make the simulation and modeling task challenging. The Reynolds equation, with the addition of Knudsen number effects due to thin film thicknesses, is used to simulate the hydrodynamics. The energy equation is manipulated to simulate the temperature field of the lubricant film and, combined with the ideal gas relationship, provides the density field input to the Reynolds equation. Heat transfer between the lubricant and the surroundings is also modeled. The structural deformations of the bearing are modeled with a single partial differential equation. The equation models the top foil as a thin, bending-dominated membrane whose deflections are governed by the biharmonic equation. A linear superposition of the hydrodynamic load and the compliant foundation reaction is included. The stiffness of the compliant foundation is modeled as a distributed stiffness that supports the top foil. The system of governing equations is solved numerically by a computer program written in the Mathematica computing environment. Representative calculations and comparisons with experimental results are included for a generation I gas foil thrust bearing.
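For reference, a standard isothermal form of the compressible Reynolds equation that underlies such gas-bearing models is shown below, written in Cartesian coordinates and without the Knudsen-number corrections the paper adds:

```latex
\frac{\partial}{\partial x}\!\left( p h^{3} \frac{\partial p}{\partial x} \right)
+ \frac{\partial}{\partial y}\!\left( p h^{3} \frac{\partial p}{\partial y} \right)
= 6 \mu U \,\frac{\partial (p h)}{\partial x}
+ 12 \mu \,\frac{\partial (p h)}{\partial t}
```

Here p is the film pressure, h the film thickness, mu the gas viscosity, and U the runner surface speed; the paper couples this to an energy equation and to the biharmonic top-foil deflection equation described above.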
Power Analysis of an Enterprise Wireless Communication Architecture
2017-09-01
easily plug a satellite-based communication module into the enterprise processor when needed. Once plugged in, it automatically runs the corresponding… reduce the SWaP by using a single processing/computing module to run user applications and to implement waveform algorithms. This approach would… general purpose processor (GPP) technology improved enough to allow a wide variety of waveforms to run in the GPP, thus giving rise to the SDR (Brannon 2004). Today's…
Data intensive computing at Sandia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Andrew T.
2010-09-01
Data-Intensive Computing is parallel computing in which algorithms and software are designed around efficient access and traversal of a data set, and in which hardware requirements are dictated by data size as much as by desired run times, usually distilling compact results from massive data.
SFU Hacking for Non-Hackers v. 1.005
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carter, David James
The program provides a limited virtual environment for exploring some concepts of computer hacking. It simulates a simple computer system with intentional vulnerabilities, allowing the user to issue commands and observe their results. It does not affect the computer on which it runs.
Virtual network computing: cross-platform remote display and collaboration software.
Konerding, D E
1999-04-01
VNC (Virtual Network Computing) is a computer program written to address the problem of cross-platform remote desktop/application display. VNC uses a client/server model in which an image of the server's desktop is transmitted to the client and displayed. The client collects mouse and keyboard input from the user and transmits it back to the server. The VNC client and server can run on Windows 95/98/NT, MacOS, and Unix (including Linux) operating systems. VNC is multi-user on Unix machines (any number of servers can be run, and they are unrelated to the primary display of the computer), while it is effectively single-user on Macintosh and Windows machines (only one server can be run, displaying the contents of the primary display of the server). The VNC server can be configured to allow more than one client to connect at a time, effectively allowing collaboration through the shared desktop. I describe the function of VNC, provide details of installation, describe how it achieves its goal, and evaluate the use of VNC for molecular modelling. VNC is an extremely useful tool for collaboration, instruction, software development, and debugging of graphical programs with remote users.
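For orientation, the RFB protocol underlying VNC opens with a plain-text version exchange before the security and framebuffer phases. A minimal Python sketch of that opening handshake (host and port are placeholders; error handling omitted):

    import socket

    # The server announces its protocol version as a 12-byte ASCII string,
    # e.g. "RFB 003.008\n"; the client replies with the version it will use.
    HOST, PORT = "vnc.example.org", 5900   # placeholder address; 5900 is display :0

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        server_version = sock.recv(12).decode("ascii")
        print("server speaks:", server_version.strip())
        sock.sendall(server_version.encode("ascii"))   # accept the server's version
        # The security handshake and framebuffer setup would follow here.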
Injecting Artificial Memory Errors Into a Running Computer Program
NASA Technical Reports Server (NTRS)
Bornstein, Benjamin J.; Granat, Robert A.; Wagstaff, Kiri L.
2008-01-01
Single-event upsets (SEUs) or bitflips are computer memory errors caused by radiation. BITFLIPS (Basic Instrumentation Tool for Fault Localized Injection of Probabilistic SEUs) is a computer program that deliberately injects SEUs into another computer program, while the latter is running, for the purpose of evaluating the fault tolerance of that program. BITFLIPS was written as a plug-in extension of the open-source Valgrind debugging and profiling software. BITFLIPS can inject SEUs into any program that can be run on the Linux operating system, without needing to modify the program's source code. Further, if the original program source code is available, BITFLIPS offers fine-grained control over exactly when and which areas of memory (as specified via program variables) will be subjected to SEUs. The rate of injection of SEUs is controlled by specifying either a fault probability or a fault rate based on memory size and radiation exposure time, in units of SEUs per byte per second. BITFLIPS can also log each SEU that it injects and, if program source code is available, report the magnitude of the effect of the SEU on a floating-point value or other program variable.
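The fault-injection idea is easy to illustrate outside Valgrind. A toy Python sketch (not BITFLIPS itself) that flips bits in a buffer according to a fault rate given in SEUs per byte per second:

    import random

    def inject_seus(buf: bytearray, rate: float, exposure_s: float) -> int:
        """Flip one random bit in each byte with probability rate * exposure_s
        (rate in SEUs per byte per second). Returns the number of upsets."""
        p = rate * exposure_s
        upsets = 0
        for i in range(len(buf)):
            if random.random() < p:
                buf[i] ^= 1 << random.randrange(8)   # flip one random bit
                upsets += 1
        return upsets

    memory = bytearray(1024)                 # 1 KiB of zeroed 'memory'
    print(inject_seus(memory, rate=1e-3, exposure_s=2.0), "SEUs injected")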
Recent Performance Results of VPIC on Trinity
NASA Astrophysics Data System (ADS)
Nystrom, W. D.; Bergen, B.; Bird, R. F.; Bowers, K. J.; Daughton, W. S.; Guo, F.; Le, A.; Li, H.; Nam, H.; Pang, X.; Stark, D. J.; Rust, W. N., III; Yin, L.; Albright, B. J.
2017-10-01
Trinity is a new DOE compute resource now in production at Los Alamos National Laboratory. Trinity has several new and unique features, including two compute partitions, one with dual-socket Intel Haswell Xeon compute nodes and one with Intel Knights Landing (KNL) Xeon Phi compute nodes; use of on-package high-bandwidth memory (HBM) on the KNL nodes; the ability to configure KNL nodes, with respect to HBM mode and on-die network topology, in a variety of operational modes at run time; and use of solid-state storage via burst-buffer technology to reduce the time required to perform I/O. An effort is in progress to optimize VPIC on Trinity by taking advantage of these new architectural features. Results will be presented on the performance of VPIC on the Haswell and KNL partitions, for single-node runs and for runs at scale. Results include the use of burst buffers at scale to optimize I/O, a comparison of strategies for using MPI and threads, the performance benefits of using HBM, and the effectiveness of using intrinsics for vectorization. Work performed under the auspices of the U.S. Dept. of Energy by Los Alamos National Security, LLC, Los Alamos National Laboratory, under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kendon, Viv
2014-12-04
Quantum versions of random walks have diverse applications that are motivating experimental implementations as well as theoretical studies. Recent results showing quantum walks are “universal for quantum computation” relate to algorithms, to be run on quantum computers. We consider whether an experimental implementation of a quantum walk could provide useful computation before we have a universal quantum computer.
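As a concrete illustration of the dynamics at issue (a textbook discrete-time Hadamard walk on a line, not Kendon's specific proposal), the fast spreading that distinguishes quantum from classical walks can be seen in a short classical simulation:

    import numpy as np

    steps, size = 50, 101                         # walker starts at the centre
    amp = np.zeros((size, 2), dtype=complex)      # position x coin amplitudes
    amp[size // 2, 0] = 1.0
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard coin

    for _ in range(steps):
        amp = amp @ H.T                           # toss the coin at every site
        shifted = np.zeros_like(amp)
        shifted[1:, 0] = amp[:-1, 0]              # coin state 0 steps right
        shifted[:-1, 1] = amp[1:, 1]              # coin state 1 steps left
        amp = shifted

    prob = (np.abs(amp) ** 2).sum(axis=1)
    x = np.arange(size) - size // 2
    # The spread grows ~linearly with steps; a classical walk grows ~sqrt(steps).
    print("std dev after 50 steps:", np.sqrt((prob * x**2).sum()))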
29 CFR 102.111 - Time computation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 2 2010-07-01 2010-07-01 false Time computation. 102.111 Section 102.111 Labor Regulations... Papers § 102.111 Time computation. (a) In computing any period of time prescribed or allowed by these rules, the day of the act, event, or default after which the designated period of time begins to run is...
29 CFR 102.111 - Time computation.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 2 2014-07-01 2014-07-01 false Time computation. 102.111 Section 102.111 Labor Regulations... Papers § 102.111 Time computation. (a) In computing any period of time prescribed or allowed by these rules, the day of the act, event, or default after which the designated period of time begins to run is...
29 CFR 102.111 - Time computation.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 29 Labor 2 2012-07-01 2012-07-01 false Time computation. 102.111 Section 102.111 Labor Regulations... Papers § 102.111 Time computation. (a) In computing any period of time prescribed or allowed by these rules, the day of the act, event, or default after which the designated period of time begins to run is...
29 CFR 102.111 - Time computation.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 29 Labor 2 2013-07-01 2013-07-01 false Time computation. 102.111 Section 102.111 Labor Regulations... Papers § 102.111 Time computation. (a) In computing any period of time prescribed or allowed by these rules, the day of the act, event, or default after which the designated period of time begins to run is...
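The rule stated in these sections (exclude the day of the act; if the last day falls on a Saturday, Sunday, or holiday, the period runs until the next day that is none of those) translates directly into code. A sketch, assuming a caller-supplied holiday set:

    from datetime import date, timedelta

    def period_ends(act_day: date, days: int, holidays: set) -> date:
        """Last day of a filing period: the day of the act is excluded, and a
        last day falling on a weekend or holiday rolls to the next valid day."""
        end = act_day + timedelta(days=days)           # day of the act not counted
        while end.weekday() >= 5 or end in holidays:   # 5 = Saturday, 6 = Sunday
            end += timedelta(days=1)
        return end

    # 10-day period triggered on July 3, 2014: the 10th day is Sunday July 13,
    # so the period runs until Monday July 14.
    print(period_ends(date(2014, 7, 3), 10, {date(2014, 7, 4)}))   # 2014-07-14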
Fast methods to numerically integrate the Reynolds equation for gas fluid films
NASA Technical Reports Server (NTRS)
Dimofte, Florin
1992-01-01
The alternating direction implicit (ADI) method is adopted, modified, and applied to the Reynolds equation for thin, gas fluid films. An efficient code is developed to predict both the steady-state and dynamic performance of an aerodynamic journal bearing. An alternative approach is shown for hybrid journal gas bearings by using Liebmann's iterative solution (LIS) for elliptic partial differential equations. The results are compared with known design criteria from experimental data. The developed methods show good accuracy and very short computer running times in comparison with methods based on matrix inversion. The computer codes need a small amount of memory and can be run on either personal computers or mainframe systems.
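Liebmann's iterative solution mentioned above is essentially Gauss-Seidel iteration applied to an elliptic equation. A minimal Python sketch for the 2-D Laplace equation on a square grid (illustrative only, not the bearing code):

    import numpy as np

    def liebmann_laplace(n=50, tol=1e-6, max_iter=10_000):
        """Solve Laplace's equation on the unit square with u = 1 on the top
        edge and u = 0 elsewhere, by Liebmann (Gauss-Seidel) sweeps."""
        u = np.zeros((n, n))
        u[0, :] = 1.0                              # top boundary condition
        for _ in range(max_iter):
            diff = 0.0
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    new = 0.25 * (u[i+1, j] + u[i-1, j] + u[i, j+1] + u[i, j-1])
                    diff = max(diff, abs(new - u[i, j]))
                    u[i, j] = new                  # in-place update: Gauss-Seidel
            if diff < tol:
                break
        return u

    print("centre value:", liebmann_laplace()[25, 25])   # ~0.25 by symmetry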
New insights into faster computation of uncertainties
NASA Astrophysics Data System (ADS)
Bhattacharya, Atreyee
2012-11-01
Heavy computation power, lengthy simulations, and an exhaustive number of model runs—often these seem like the only statistical tools that scientists have at their disposal when computing uncertainties associated with predictions, particularly in cases of environmental processes such as groundwater movement. However, calculation of uncertainties need not be as lengthy, a new study shows. Comparing two approaches—the classical Bayesian “credible interval” and a less commonly used regression-based “confidence interval” method—Lu et al. show that for many practical purposes both methods provide similar estimates of uncertainties. The advantage of the regression method is that it demands 10-1000 model runs, whereas the classical Bayesian approach requires 10,000 to millions of model runs.
Non-volatile memory for checkpoint storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blumrich, Matthias A.; Chen, Dong; Cipolla, Thomas M.
A system, method and computer program product for supporting system-initiated checkpoints in high-performance parallel computing systems and storing of checkpoint data to a non-volatile memory storage device. The system and method generate selective control signals to perform checkpointing of system-related data in the presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity. In one embodiment, the non-volatile memory is a pluggable flash memory card.
Integration of High-Performance Computing into Cloud Computing Services
NASA Astrophysics Data System (ADS)
Vouk, Mladen A.; Sills, Eric; Dreher, Patrick
High-Performance Computing (HPC) projects span a spectrum of computer hardware implementations, ranging from peta-flop supercomputers and high-end tera-flop facilities running a variety of operating systems and applications to mid-range and smaller computational clusters used for HPC application development, pilot runs, and prototype staging clusters. What they all have in common is that they operate as stand-alone systems rather than as scalable, shared, user-reconfigurable resources. The advent of cloud computing has changed the traditional HPC implementation. In this article, we discuss a very successful production-level architecture and policy framework for supporting HPC services within a more general cloud computing infrastructure. This integrated environment, called the Virtual Computing Lab (VCL), has been operating at NC State since fall 2004. Nearly 8,500,000 HPC CPU-hours were delivered by this environment to NC State faculty and students during 2009. In addition, we present and discuss operational data showing that integration of HPC and non-HPC (or general VCL) services in a cloud can substantially reduce the cost of delivering cloud services (down to cents per CPU hour).
Control of the TSU 2-m automatic telescope
NASA Astrophysics Data System (ADS)
Eaton, Joel A.; Williamson, Michael H.
2004-09-01
Tennessee State University is operating a 2-m automatic telescope for high-dispersion spectroscopy. The alt-azimuth telescope is fiber-coupled to a conventional echelle spectrograph with two resolutions (R=30,000 and 70,000). We control this instrument with four computers running Linux and communicating over ethernet through the UDP protocol. A computer physically located on the telescope handles the acquisition and tracking of stars. We avoid the need for real-time programming in this application by periodically latching the positions of the axes in a commercial motion controller and the time in a GPS receiver. A second (spectrograph) computer sets up the spectrograph and runs its CCD, a third (roof) computer controls the roll-off roof and front flap of the telescope enclosure, and the fourth (executive) computer makes decisions about which stars to observe and when to close the observatory for bad weather. The only human intervention in the telescope's operation involves changing the observing program, copying data back to TSU, and running quality-control checks on the data. It has been running reliably in this completely automatic, unattended mode for more than a year, with all day-to-day administration carried out over the Internet. To support automatic operation, we have written a number of useful tools to predict and analyze what the telescope does. These include a simulator that predicts roughly how the telescope will operate on a given night, a quality-control program to parse logfiles from the telescope and identify problems, and a rescheduling program that calculates new priorities to keep the frequency of observation for the various stars roughly as desired. We have also set up a database to keep track of the tens of thousands of spectra we expect to get each year.
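The UDP message passing between the four machines can be sketched with standard sockets. The addresses, port, and message format below are illustrative, not the observatory's actual protocol; each half runs as a separate process on a separate machine:

    import socket

    def telescope_side(port=5000):
        rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        rx.bind(("", port))
        data, addr = rx.recvfrom(1024)        # wait for a command datagram
        rx.sendto(b"ACK " + data, addr)       # acknowledge to the sender

    def executive_side(host="192.0.2.10", port=5000):
        tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        tx.settimeout(2.0)                    # UDP is lossy: time out and retry
        tx.sendto(b"OBSERVE HR7462 R=30000", (host, port))
        reply, _ = tx.recvfrom(1024)
        print(reply.decode())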
Implementation of an Intelligent Control System
1992-05-01
therefore implemented in a portable equipment rack. The controls computer consists of a microcomputer running a real-time operating system, interface ... circuit boards are mounted in an industry-standard Multibus I chassis. The microcomputer runs the iRMX real-time operating system. This operating system
10 CFR 205.5 - Computation of time.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., event, or default from which the designated period of time begins to run is not to be included. The last... holiday in which event the period runs until the end of the next day that is neither a Saturday, Sunday... be added to the prescribed period. ...
10 CFR 205.5 - Computation of time.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., event, or default from which the designated period of time begins to run is not to be included. The last... holiday in which event the period runs until the end of the next day that is neither a Saturday, Sunday... be added to the prescribed period. ...
10 CFR 205.5 - Computation of time.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., event, or default from which the designated period of time begins to run is not to be included. The last... holiday in which event the period runs until the end of the next day that is neither a Saturday, Sunday... be added to the prescribed period. ...
10 CFR 205.5 - Computation of time.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., event, or default from which the designated period of time begins to run is not to be included. The last... holiday in which event the period runs until the end of the next day that is neither a Saturday, Sunday... be added to the prescribed period. ...
10 CFR 205.5 - Computation of time.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., event, or default from which the designated period of time begins to run is not to be included. The last... holiday in which event the period runs until the end of the next day that is neither a Saturday, Sunday... be added to the prescribed period. ...
DOT National Transportation Integrated Search
1995-09-05
The Run-Off-Road Collision Avoidance Using IVHS Countermeasures program aims to address the single-vehicle crash problem through the application of technology to prevent and/or reduce the severity of these crashes. This report documents the RORSIM comput...
Investigation of Storage Options for Scientific Computing on Grid and Cloud Facilities
NASA Astrophysics Data System (ADS)
Garzoglio, Gabriele
2012-12-01
In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on “bare metal” nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.
Investigation of storage options for scientific computing on Grid and Cloud facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzoglio, Gabriele
In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on bare metal nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.
Pedretti, Kevin
2008-11-18
A compute processor allocator architecture for allocating compute processors to run applications in a multiple processor computing apparatus is distributed among a subset of processors within the computing apparatus. Each processor of the subset includes a compute processor allocator. The compute processor allocators can share a common database of information pertinent to compute processor allocation. A communication path permits retrieval of information from the database independently of the compute processor allocators.
Algorithmic transformation of multi-loop master integrals to a canonical basis with CANONICA
NASA Astrophysics Data System (ADS)
Meyer, Christoph
2018-01-01
The integration of differential equations for Feynman integrals can be greatly facilitated by using a canonical basis. This paper presents the Mathematica package CANONICA, which implements a recently developed algorithm to automate the transformation to a canonical basis. This represents the first publicly available implementation suitable for differential equations depending on multiple scales. In addition to the presentation of the package, this paper extends the description of some aspects of the algorithm, including a proof of the uniqueness of canonical forms up to constant transformations.
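For reference, the canonical ('epsilon') form that such transformations target is, schematically (standard notation from the literature, not necessarily CANONICA's own),

    d\vec{f}(\vec{x}, \epsilon) = \epsilon \left( \sum_{k} \tilde{A}_{k}\, \mathrm{d} \log \alpha_{k}(\vec{x}) \right) \vec{f}(\vec{x}, \epsilon),

where the dependence on the dimensional regulator ε factorizes, the Ã_k are constant matrices, and the α_k(x) are algebraic functions of the kinematic invariants (the 'letters' of the alphabet).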
NASA Astrophysics Data System (ADS)
Looney, Craig W.
2009-10-01
Wolfram|Alpha (http://www.wolframalpha.com/), a free internet-based mathematical engine released earlier this year, represents an orders-of-magnitude advance in the mathematical power freely available on the web, without money, passwords, or downloads. Wolfram|Alpha is based on Mathematica, so it can plot functions, take derivatives, solve systems of equations, perform symbolic and numerical integration, and more. These capabilities (especially plotting and integration) will be explored in the context of topics covered in upper-level undergraduate physics courses.
NASA Astrophysics Data System (ADS)
Szidarovszky, Tamás; Jono, Maho; Yamanouchi, Kaoru
2018-07-01
A user-friendly, cross-platform software package called the Laser-Induced Molecular Alignment and Orientation simulator (LIMAO) has been developed. The program can be used to simulate, within the rigid-rotor approximation, the rotational dynamics of gas-phase molecules induced by linearly polarized intense laser fields at a given temperature. The software is implemented in the Java and Mathematica programming languages. The primary aim of LIMAO is to aid experimental scientists in predicting and analyzing experimental data representing laser-induced spatial alignment and orientation of molecules.
Regarding on the prototype solutions for the nonlinear fractional-order biological population model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baskonus, Haci Mehmet, E-mail: hmbaskonus@gmail.com; Bulut, Hasan
2016-06-08
In this study, we present a newly extended method, called the Improved Bernoulli Sub-Equation Function Method, based on the Bernoulli Sub-ODE method. The proposed analytical scheme is described step by step. We obtain some new analytical solutions to the nonlinear fractional-order biological population model using this technique. Two- and three-dimensional surfaces of the analytical solutions have been drawn with Wolfram Mathematica 9. Finally, we conclude by summarizing the important results obtained in this study.
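The Bernoulli sub-ODE at the heart of such methods is, in commonly used notation (assumed here, not quoted from the paper),

    F'(\xi) = b\, F(\xi) + d\, F^{M}(\xi), \qquad M \geq 2, \; b \neq 0,

a Bernoulli equation with known closed-form solutions; a finite series ansatz u(ξ) = Σ_i a_i F^i(ξ) is substituted into the travelling-wave reduction of the model equation, and the coefficients a_i, b, d are fixed by balancing powers of F.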
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riley, Nathan; Geissel, Matthias; Lewis, Sean M
2015-03-01
The data described in this document consist of image files of shadowgraphs of astrophysically relevant laser driven blast waves. Supporting files include Mathematica notebooks containing design calculations, tabulated experimental data and notes, and relevant publications from the open research literature. The data was obtained on the Z-Beamlet laser from July to September 2014. Selected images and calculations will be published as part of a PhD dissertation and in associated publications in the open research literature, with Sandia credited as appropriate. The authors are not aware of any restrictions that could affect the release of the data.
Methodology of Numerical Optimization for Orbital Parameters of Binary Systems
NASA Astrophysics Data System (ADS)
Araya, I.; Curé, M.
2010-02-01
The use of a numerical maximization (or minimization) method in optimization processes yields a large number of candidate solutions. We can therefore find the global maximum or minimum of the problem, but only with a suitable methodology. To obtain the globally optimal values, we use the genetic algorithm PIKAIA (P. Charbonneau) and four other algorithms implemented in Mathematica. We demonstrate that orbital parameters of binary systems published in some papers, derived from radial velocity measurements, are local minima instead of global ones.
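For context, the model typically fitted in such radial-velocity problems is (standard notation, assumed here)

    v_{r}(t) = \gamma + K \left[ \cos\left( \nu(t) + \omega \right) + e \cos\omega \right],

where γ is the systemic velocity, K the semi-amplitude, e the eccentricity, ω the argument of periastron, and the true anomaly ν(t) follows from Kepler's equation M = E - e sin E. The resulting χ² surface is multi-modal in the period and eccentricity, which is why local optimizers so easily return the local minima the authors warn about.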
BelleII@home: Integrate volunteer computing resources into DIRAC in a secure way
NASA Astrophysics Data System (ADS)
Wu, Wenjing; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo; Kan, Wenxiao; Urquijo, Phillip
2017-10-01
The exploitation of volunteer computing resources has become popular in the HEP computing community because of the huge amount of potential computing power it provides. Recent HEP experiments have used grid middleware to organize their services and resources; however, the middleware relies heavily on X.509 authentication, which is at odds with the untrusted nature of volunteer computing resources. One big challenge in utilizing volunteer computing resources is therefore how to integrate them into the grid middleware in a secure way. The DIRAC interware, commonly used as the major component of the grid computing infrastructure for several HEP experiments, poses an even bigger challenge, as its pilot is more closely coupled to operations requiring X.509 authentication than the pilot implementations of its peer grid interware. The Belle II experiment is a B-factory experiment at KEK, and it uses DIRAC for its distributed computing. In the BelleII@home project, in order to integrate volunteer computing resources into the Belle II distributed computing platform in a secure way, we adopted a new approach that detaches the payload execution from the Belle II DIRAC pilot, a customized pilot that pulls and processes jobs from the Belle II distributed computing platform, so that the payload can run on volunteer computers without requiring any X.509 authentication. In this approach we developed a gateway service, running on a trusted server, which handles all the operations requiring X.509 authentication. So far, we have developed and deployed a prototype of BelleII@home and tested its full workflow, which proves the feasibility of the approach. The approach can also be applied to HPC systems whose worker nodes do not have the outbound connectivity needed to interact with the DIRAC system.
Visualizing Interstellar's Wormhole
NASA Astrophysics Data System (ADS)
James, Oliver; von Tunzelmann, Eugénie; Franklin, Paul; Thorne, Kip S.
2015-06-01
Christopher Nolan's science fiction movie Interstellar offers a variety of opportunities for students in elementary courses on general relativity theory. This paper describes such opportunities, including: (i) At the motivational level, the manner in which elementary relativity concepts underlie the wormhole visualizations seen in the movie; (ii) At the briefest computational level, instructive calculations with simple but intriguing wormhole metrics, including, e.g., constructing embedding diagrams for the three-parameter wormhole that was used by our visual effects team and Christopher Nolan in scoping out possible wormhole geometries for the movie; (iii) Combining the proper reference frame of a camera with solutions of the geodesic equation, to construct a light-ray-tracing map backward in time from a camera's local sky to a wormhole's two celestial spheres; (iv) Implementing this map, for example, in Mathematica, Maple or Matlab, and using that implementation to construct images of what a camera sees when near or inside a wormhole; (v) With the student's implementation, exploring how the wormhole's three parameters influence what the camera sees—which is precisely how Christopher Nolan, using our implementation, chose the parameters for Interstellar's wormhole; (vi) Using the student's implementation, exploring the wormhole's Einstein ring and particularly the peculiar motions of star images near the ring, and exploring what it looks like to travel through a wormhole.
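For reference, the three-parameter ('Dneg') wormhole used in the visualizations is spherically symmetric, with a line element of the form

    ds^{2} = -c^{2} dt^{2} + d\ell^{2} + r(\ell)^{2} \left( d\theta^{2} + \sin^{2}\theta \, d\phi^{2} \right),

where ℓ is the proper radial distance through the wormhole (negative on one side, positive on the other) and r(ℓ) is a shape function controlled by the three parameters (throat radius, length, and lensing width) that students vary when reproducing the imagery.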
Reaction-diffusion systems in natural sciences and new technology transfer
NASA Astrophysics Data System (ADS)
Keller, André A.
2012-12-01
Diffusion mechanisms in the natural sciences and in innovation management involve partial differential equations (PDEs), owing to their spatio-temporal dimensions. Functional semi-discretized PDEs (with lattice spatial structures or time delays) may be even better adapted to real-world problems. In the modeling process, PDEs can also formalize behaviors, such as the logistic growth of populations with migration, and the adopters' dynamics of new products in innovation models. In biology, these events are related to variations in the environment, population densities and overcrowding, and the migration and spreading of humans, animals, plants, and other cells and organisms. In chemical reactions, molecules of different species interact locally and diffuse. In the management of new technologies, the diffusion processes of innovations in the marketplace (e.g., the mobile phone) are a major subject. These innovation diffusion models refer mainly to epidemic models. This contribution introduces the modeling process using PDEs and reviews the essential features of the dynamics and control in biological, chemical, and new-technology transfer applications. The paper is essentially user-oriented, with basic nonlinear evolution equations, delay PDEs, several analytical and numerical solution methods, different solutions, and the use of mathematical packages, notebooks, and codes. The computations are carried out using the software Wolfram Mathematica® 7 and C++ codes.
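A canonical example of the PDEs discussed above, combining logistic growth with diffusion, is the Fisher-KPP equation u_t = D u_xx + r u (1 - u). A minimal explicit finite-difference sketch (all parameters illustrative):

    import numpy as np

    # Fisher-KPP: logistic reaction plus diffusion; a travelling front forms
    # and advances at speed ~ 2*sqrt(D*r).
    D, r = 1.0, 1.0
    nx, dx, dt, steps = 200, 0.5, 0.05, 500    # dt < dx**2 / (2*D) for stability
    u = np.zeros(nx)
    u[:10] = 1.0                               # population seeded at the left end

    for _ in range(steps):
        lap = (np.roll(u, -1) - 2*u + np.roll(u, 1)) / dx**2
        u += dt * (D * lap + r * u * (1 - u))
        u[0], u[-1] = u[1], u[-2]              # approximate zero-flux boundaries

    print("front position ~", dx * np.argmax(u < 0.5))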
Hübler, Merla J; Buchman, Timothy G
2008-02-01
Objective: To model the effects of system connectedness on recovery of dysfunctional tissues. Design: One-dimensional elementary cellular automata models with small-world features, where the center input for a few cells comes not from itself but, with a given probability, from another cell. This probability represents the connectivity of the network. The long-range connections are chosen randomly to survey the potential influences of distant information flowing into a local region. Setting: MATLAB and Mathematica computing environments. Patients: None. Interventions: None. Measurements and Main Results: We determined the recovery rate of the entropy after perturbing a uniformly dormant system. We observed that the recovery of normal activity after perturbation of a dormant system had the characteristics of an epidemic. Moreover, we found that the rate of recovery to normal steady-state activity increased rapidly even for small amounts of long-range connectivity. Findings obtained through numerical simulation were verified through analytical solutions. Conclusions: This study links our hypothesis that multiple organ dysfunction syndromes represent recoupling failure with a mathematical model showing the contribution of such coupling to reactivation of dormant systems. The implication is that strategies aimed not at target tissues or target organs, but rather at restoring the quality and quantity of interconnections across those tissues and organs, may be a novel therapeutic strategy.
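The model described can be sketched directly. A toy Python version of an elementary CA whose center input is rewired, with some probability, to a random distant cell (rule number and parameters are illustrative, not the paper's):

    import random

    def small_world_ca(n=200, rule=50, p_rewire=0.05, steps=100, seed=1):
        """Elementary CA in which, with probability p_rewire, a cell's center
        input is taken from a randomly chosen cell instead of itself."""
        rng = random.Random(seed)
        table = [(rule >> i) & 1 for i in range(8)]        # Wolfram rule table
        center = [rng.randrange(n) if rng.random() < p_rewire else i
                  for i in range(n)]                       # fixed long-range wiring
        state = [rng.randint(0, 1) for _ in range(n)]
        for _ in range(steps):
            state = [table[(state[(i - 1) % n] << 2)
                           | (state[center[i]] << 1)
                           | state[(i + 1) % n]]
                     for i in range(n)]
        return state

    print(sum(small_world_ca()), "active cells after 100 steps")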
Rotary Kiln Gasification of Solid Waste for Base Camps
2017-10-02
cup after full-day run. 3.3 Feedstock Handling System: Garbage bags containing waste feedstock are placed into feed bin FB-101. Ram feeder RF-102 ... Environmental Science and Technology using the Factory Talk SCADA software running on a laptop computer. A wireless Ethernet router that is located within the ... pyrolysis oil produced required consistent draining from the system during operation and became a liquid-waste disposal problem. A 5-hour test run could
Integration of Titan supercomputer at OLCF with ATLAS Production System
NASA Astrophysics Data System (ADS)
Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration
2017-10-01
The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 grid sites, future LHC data-taking runs will require more resources than grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we describe a project aimed at integrating the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan's batch queues and for local data management, with lightweight MPI wrappers to run single-node workloads in parallel on Titan's multi-core worker nodes. It provides for running standard ATLAS production jobs on unused (backfill) resources on Titan. The system has already allowed ATLAS to collect millions of core-hours per month on Titan and to execute hundreds of thousands of jobs, while simultaneously improving Titan's utilization efficiency. We discuss the details of the implementation, current experience with running the system, and future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
Simulation Framework for Intelligent Transportation Systems
DOT National Transportation Integrated Search
1996-10-01
A simulation framework has been developed for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System. The simulator is designed for running on parallel computers and distributed (networked) computer systems, but ca...
Prediction of sound radiation from different practical jet engine inlets
NASA Technical Reports Server (NTRS)
Zinn, B. T.; Meyer, W. L.
1982-01-01
The computer codes necessary for this study were developed and checked against exact solutions generated by the point-source method using the NASA Lewis QCSEE inlet geometry. These computer codes were used to predict the acoustic properties of the following five inlet configurations: the NASA Langley Bellmouth, the NASA Lewis JT15D-1 Ground Test Nacelle, and three finite hyperbolic inlets of 50, 70 and 90 degrees. Thirty-five computer runs were done for the NASA Langley Bellmouth. For each of these computer runs, the reflection coefficient at the duct exit plane was calculated, as was the far-field radiation pattern. These results are presented in both graphical and tabular form, with many of the results cross-plotted so that trends in the results versus cut-off ratio (wave number) and tangential mode number may be easily identified.
ERIC Educational Resources Information Center
Navarro, Aaron B.
1981-01-01
Presents a program in Level II BASIC for a TRS-80 computer that simulates a Turing machine and discusses the nature of the device. The program is run interactively and is designed to be used as an educational tool by computer science or mathematics students studying computational or automata theory. (MP)
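In the same spirit as the BASIC program described, a Turing machine simulator is compact in any language. A minimal Python sketch (the example machine increments a binary number; it is illustrative, not the TRS-80 program):

    # delta maps (state, symbol) -> (new_state, written_symbol, head_move).
    def run_tm(delta, tape, state="start", head=0, blank="_", max_steps=1000):
        cells = dict(enumerate(tape))
        for _ in range(max_steps):
            if state == "halt":
                break
            state, cells[head], move = delta[(state, cells.get(head, blank))]
            head += 1 if move == "R" else -1
        return "".join(cells.get(i, blank)
                       for i in range(min(cells), max(cells) + 1))

    inc = {  # increment a binary number; head starts on its last digit
        ("start", "1"): ("start", "0", "L"),   # carry propagates leftward
        ("start", "0"): ("halt",  "1", "R"),
        ("start", "_"): ("halt",  "1", "R"),
    }
    print(run_tm(inc, "1011", head=3))   # -> 1100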
10 CFR 2.1003 - Availability of material.
Code of Federal Regulations, 2013 CFR
2013-01-01
... its license application for a geologic repository, the NRC shall make available no later than thirty... privilege in § 2.1006, graphic-oriented documentary material that includes raw data, computer runs, computer... discrepancies; (ii) Gauge, meter and computer settings; (iii) Probe locations; (iv) Logging intervals and rates...
10 CFR 2.1003 - Availability of material.
Code of Federal Regulations, 2014 CFR
2014-01-01
... its license application for a geologic repository, the NRC shall make available no later than thirty... privilege in § 2.1006, graphic-oriented documentary material that includes raw data, computer runs, computer... discrepancies; (ii) Gauge, meter and computer settings; (iii) Probe locations; (iv) Logging intervals and rates...
Absentee Computations in a Multiple-Access Computer System.
require user interaction, and the user may therefore want to run these computations 'absentee' (or, user not present). A mechanism is presented which ... provides for the handling of absentee computations in a multiple-access computer system. The design is intended to be implementation-independent ... Some novel features of the system's design are: a user can switch computations from interactive to absentee (and vice versa), the system can
Exploiting CMS data popularity to model the evolution of data management for Run-2 and beyond
NASA Astrophysics Data System (ADS)
Bonacorsi, D.; Boccali, T.; Giordano, D.; Girone, M.; Neri, M.; Magini, N.; Kuznetsov, V.; Wildish, T.
2015-12-01
During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collision data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, and transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by large groups of distributed analysis communities exploiting the Worldwide LHC Computing Grid infrastructure and services. While efficient data placement strategies - together with optimal data redistribution and deletions on demand - have become the core of static versus dynamic data management projects, little effort has so far been invested in understanding the detailed data-access patterns which surfaced in Run-1. These patterns, if understood, can be used as input to simulations of computing models at the LHC, to optimise existing systems by tuning their behaviour, and to explore next-generation CPU/storage/network co-scheduling solutions. This is of great importance, given that the scale of the computing problem will increase far faster than the resources available to the experiments, for Run-2 and beyond. Studying data-access patterns involves the validation of the quality of the monitoring data collected on the "popularity" of each dataset, the analysis of the frequency and pattern of accesses to different datasets by analysis end-users, the exploration of different views of the popularity data (by physics activity, by region, by data type), the study of the evolution of Run-1 data exploitation over time, and the evaluation of the impact of different data placement and distribution choices on the available network and storage resources and on computing operations. This work presents some insights from studies on the popularity data from the CMS experiment. We present the properties of a range of physics analysis activities as seen by the data popularity, and make recommendations for how to tune the initial distribution of data in anticipation of how it will be used in Run-2 and beyond.
Federated data storage system prototype for LHC experiments and data intensive science
NASA Astrophysics Data System (ADS)
Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.
2017-10-01
The rapid increase of the data volume from the experiments running at the Large Hadron Collider (LHC) has prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and university clusters scattered over a large area aim at the task of uniting their resources for future productive work, at the same time giving an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture that integrates distributed storage resources for LHC experiments and other data-intensive science applications and provides access to data from heterogeneous computing facilities. Studies include the development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and university clusters within one national cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kinds of operations, such as read/write/transfer, and access via WAN from grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests, including real data processing and analysis workflows from the ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns, and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and to reformations of computing style, for instance how a bioinformatics program running on supercomputers can read and write data from the federated storage.
Vectorization of transport and diffusion computations on the CDC Cyber 205
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abu-Shumays, I.K.
1986-01-01
The development and testing of alternative numerical methods and computational algorithms specifically designed for the vectorization of transport and diffusion computations on a Control Data Corporation (CDC) Cyber 205 vector computer are described. Two solution methods for the discrete ordinates approximation to the transport equation are summarized and compared. Factors of 4 to 7 reduction in run times for certain large transport problems were achieved on a Cyber 205 as compared with run times on a CDC-7600. The solution of tridiagonal systems of linear equations, central to several efficient numerical methods for multidimensional diffusion computations and essential for fluid flow and other physics and engineering problems, is also dealt with. Among the methods tested, a combined odd-even cyclic reduction and modified Cholesky factorization algorithm for solving linear symmetric positive definite tridiagonal systems is found to be the most effective for these systems on a Cyber 205. For large tridiagonal systems, computation with this algorithm is an order of magnitude faster on a Cyber 205 than computation with the best algorithm for tridiagonal systems on a CDC-7600.
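The odd-even cyclic reduction scheme mentioned above vectorizes because each level eliminates half of the unknowns simultaneously. A compact Python/NumPy sketch for a tridiagonal system of size n = 2^k - 1 (a recursive illustration of the algorithm's structure, not the Cyber 205 code):

    import numpy as np

    def cyclic_reduction(a, b, c, d):
        """Solve A x = d for tridiagonal A with sub-diagonal a, diagonal b,
        super-diagonal c (a[0] = c[-1] = 0) and n = 2**k - 1 unknowns."""
        n = len(b)
        if n == 1:
            return d / b
        i = np.arange(1, n - 1, 2)                  # equations kept at this level
        alpha, gamma = -a[i] / b[i - 1], -c[i] / b[i + 1]
        x = np.zeros(n)
        x[i] = cyclic_reduction(alpha * a[i - 1],
                                b[i] + alpha * c[i - 1] + gamma * a[i + 1],
                                gamma * c[i + 1],
                                d[i] + alpha * d[i - 1] + gamma * d[i + 1])
        xp = np.concatenate(([0.0], x, [0.0]))      # pad so neighbours always exist
        j = np.arange(0, n, 2)                      # back-substitute eliminated rows
        x[j] = (d[j] - a[j] * xp[j] - c[j] * xp[j + 2]) / b[j]
        return x

    n = 2**5 - 1
    a = np.r_[0.0, np.full(n - 1, -1.0)]
    c = np.r_[np.full(n - 1, -1.0), 0.0]
    b, d = np.full(n, 2.0), np.ones(n)
    x = cyclic_reduction(a, b, c, d)
    A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    print(np.allclose(A @ x, d))   # True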
Simulation framework for intelligent transportation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewing, T.; Doss, E.; Hanebutte, U.
1996-10-01
A simulation framework has been developed for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS). The simulator is designed for running on parallel computers and distributed (networked) computer systems, but can run on standalone workstations for smaller simulations. The simulator currently models instrumented smart vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. Realistic modeling of variations of the posted driving speed is based on human-factors studies that take into consideration weather, road conditions, driver personality and behavior, and vehicle type. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on parallel computers, such as ANL's IBM SP-2, for large-scale problems. A novel feature of the approach is that vehicles are represented by autonomous computer processes which exchange messages with other processes. The vehicles have a behavior model which governs route selection and driving behavior, and can react to external traffic events much like real vehicles. With this approach, the simulation is scalable to take advantage of emerging massively parallel processor (MPP) systems.
Lanier, T.H.
1996-01-01
The 100-year flood plain was determined for Upper Three Runs, its tributaries, and the part of the Savannah River that borders the Savannah River Site. The results are provided in tabular and graphical formats. The 100-year flood-plain maps and flood profiles provide water-resource managers of the Savannah River Site with a technical basis for making flood-plain management decisions that could minimize future flood problems and provide a basis for designing and constructing drainage structures along roadways. A hydrologic analysis was made to estimate the 100-year recurrence-interval flow for Upper Three Runs and its tributaries. The analysis showed that the well-drained, sandy soils in the headwaters of Upper Three Runs reduce the high flows in the stream; therefore, the South Carolina upper Coastal Plain regional-rural-regression equation does not apply for Upper Three Runs. Consequently, a relation was established for 100-year recurrence-interval flow and drainage area using streamflow data from U.S. Geological Survey gaging stations on Upper Three Runs. This relation was used to compute 100-year recurrence-interval flows at selected points along the stream. The regional regression equations were applicable for the tributaries to Upper Three Runs, because the soil types in the drainage basins of the tributaries resemble those normally occurring in upper Coastal Plain basins. This was verified by analysis of the flood-frequency data collected from U.S. Geological Survey gaging station 02197342 on Fourmile Branch. Cross sections were surveyed throughout each reach, and other pertinent data such as flow resistance and land use were collected. The surveyed cross sections and computed 100-year recurrence-interval flows were used in a step-backwater model to compute the 100-year flood profile for Upper Three Runs and its tributaries. The profiles were used to delineate the 100-year flood plain on topographic maps. The Savannah River forms the southwestern border of the Savannah River Site. Data from previously published reports were used to delineate the 100-year flood plain for the Savannah River from the downstream site boundary at the mouth of Lower Three Runs at river mile 125 to the upstream site boundary at river mile 163.
Preventing Run-Time Bugs at Compile-Time Using Advanced C++
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neswold, Richard
When writing software, we develop algorithms that tell the computer what to do at run-time. Our solutions are easier to understand and debug when they are properly modeled using class hierarchies, enumerations, and a well-factored API. Unfortunately, even with these design tools, we end up having to debug our programs at run-time. Worse still, debugging an embedded system changes its dynamics, making it tough to find and fix concurrency issues. This paper describes techniques using C++ to detect run-time bugs *at compile time*. A concurrency library, developed at Fermilab, is used in the examples illustrating these techniques.
Navigating the Challenges of the Cloud
ERIC Educational Resources Information Center
Ovadia, Steven
2010-01-01
Cloud computing is increasingly popular in education. Cloud computing is "the delivery of computer services from vast warehouses of shared machines that enables companies and individuals to cut costs by handing over the running of their email, customer databases or accounting software to someone else, and then accessing it over the internet."…
Onboard Flow Sensing For Downwash Detection and Avoidance On Small Quadrotor Helicopters
2015-01-01
onboard computers, one for flight stabilization and a Linux computer for sensor integration and control calculations. The Linux computer runs Robot ... Hirokawa, D. Kubo, S. Suzuki, J. Meguro, and T. Suzuki. Small UAV for immediate hazard map generation. In AIAA Infotech@Aerospace Conf, May 2007.
5 CFR 841.109 - Computation of time.
Code of Federal Regulations, 2011 CFR
2011-01-01
....109 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS... Computation of time. In computing a period of time for filing documents, the day of the action or event after... included unless it is a Saturday, a Sunday, or a legal holiday; in this event, the period runs until the...
Implications of Windowing Techniques for CAI.
ERIC Educational Resources Information Center
Heines, Jesse M.; Grinstein, Georges G.
This paper discusses the use of a technique called windowing in computer assisted instruction to allow independent control of functional areas in complex CAI displays and simultaneous display of output from a running computer program and coordinated instructional material. Two obstacles to widespread use of CAI in computer science courses are…
More Colleges Eye outside Companies to Run Their Computer Operations.
ERIC Educational Resources Information Center
DeLoughry, Thomas J.
1993-01-01
Increasingly, budget pressures and rapid technological change are causing colleges to consider "outsourcing" for computer operations management, particularly for administrative purposes. Supporters see the trend as similar to hiring experts for other, ancillary services. Critics fear loss of control of the institution's vital computer systems.…
Representing, Running, and Revising Mental Models: A Computational Model
ERIC Educational Resources Information Center
Friedman, Scott; Forbus, Kenneth; Sherin, Bruce
2018-01-01
People use commonsense science knowledge to flexibly explain, predict, and manipulate the world around them, yet we lack computational models of how this commonsense science knowledge is represented, acquired, utilized, and revised. This is an important challenge for cognitive science: Building higher order computational models in this area will…
Computational Participation: Understanding Coding as an Extension of Literacy Instruction
ERIC Educational Resources Information Center
Burke, Quinn; O'Byrne, W. Ian; Kafai, Yasmin B.
2016-01-01
Understanding the computational concepts on which countless digital applications run offers learners the opportunity to no longer simply read such media but also become more discerning end users and potentially innovative "writers" of new media themselves. To think computationally--to solve problems, to design systems, and to process and…
Mobile Computer-Assisted-Instruction in Rural New Mexico.
ERIC Educational Resources Information Center
Gittinger, Jack D., Jr.
The University of New Mexico's three-year Computer Assisted Instruction Project established one mobile and five permanent laboratories offering remedial and vocational instruction in winter, 1984-85. Each laboratory has a Degem learning system with minicomputer, teacher terminal, and 32 student terminals. A Digital PDP-11 host computer runs the…
Quantum Statistical Mechanics on a Quantum Computer
NASA Astrophysics Data System (ADS)
Raedt, H. D.; Hams, A. H.; Michielsen, K.; Miyashita, S.; Saito, K.
We describe a quantum algorithm to compute the density of states and thermal equilibrium properties of quantum many-body systems. We present results obtained by running this algorithm on a software implementation of a 21-qubit quantum computer for the case of an antiferromagnetic Heisenberg model on triangular lattices of different size.
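A classical cross-check of the quantity such an algorithm targets is easy for very small systems. An exact-diagonalization sketch for the antiferromagnetic Heisenberg model on a single 3-spin triangle (the smallest triangular lattice; illustrative only):

    import numpy as np

    sx = np.array([[0, 1], [1, 0]]) / 2
    sy = np.array([[0, -1j], [1j, 0]]) / 2
    sz = np.array([[1, 0], [0, -1]]) / 2

    def op(single, site, n=3):
        """Embed a single-spin operator at the given site of an n-spin system."""
        out = np.array([[1.0]])
        for k in range(n):
            out = np.kron(out, single if k == site else np.eye(2))
        return out

    J, n = 1.0, 3                      # antiferromagnetic coupling, 3 sites
    H = sum(J * op(s, i) @ op(s, (i + 1) % n)
            for i in range(n) for s in (sx, sy, sz))
    E = np.linalg.eigvalsh(H)          # full spectrum -> density of states

    beta = 1.0
    w = np.exp(-beta * E)              # Boltzmann weights
    print("energies:", np.round(E, 3))                 # -0.75 (x4) and +0.75 (x4)
    print("<E> at beta=1:", float((E * w).sum() / w.sum()))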
A performance comparison of the Cray-2 and the Cray X-MP
NASA Technical Reports Server (NTRS)
Schmickley, Ronald; Bailey, David H.
1986-01-01
A suite of thirteen large Fortran benchmark codes was run on Cray-2 and Cray X-MP supercomputers. These codes were a mix of compute-intensive scientific application programs (mostly computational fluid dynamics) and some special vectorized computation exercise programs. For the general class of programs tested on the Cray-2, most of which were not specially tuned for speed, the floating-point operation rates varied, under a variety of system load configurations, from 40 percent up to 125 percent of X-MP performance rates. It is concluded that the Cray-2, in the original system configuration studied (without memory pseudo-banking), will run untuned Fortran code, on average, at about 70 percent of X-MP speeds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amerio, S.; Behari, S.; Boyd, J.
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have approximately 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 and beyond. To achieve this goal, we have implemented a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology and leverages resources available from currently-running experiments at Fermilab. Lastly, these efforts have also provided useful lessons in ensuring long-term data access for numerous experiments, and enable high-quality scientific output for years to come.
Visualization and Tracking of Parallel CFD Simulations
NASA Technical Reports Server (NTRS)
Vaziri, Arsi; Kremenetsky, Mark
1995-01-01
We describe a system for interactive visualization and tracking of a 3-D unsteady computational fluid dynamics (CFD) simulation on a parallel computer. CM/AVS, a distributed, parallel implementation of a visualization environment (AVS), runs on the CM-5 parallel supercomputer. A CFD solver is run as a CM/AVS module on the CM-5. Data communication between the solver, other parallel visualization modules, and a graphics workstation, which is running AVS, is handled by CM/AVS. Partitioning of the visualization task between the CM-5 and the workstation can be done interactively in the visual programming environment provided by AVS. Flow solver parameters can also be altered through programmable interactive widgets. This system partially removes the requirement of storing large solution files at frequent time steps, a characteristic of the traditional 'simulate → store → visualize' post-processing approach.
Data preservation at the Fermilab Tevatron
NASA Astrophysics Data System (ADS)
Amerio, S.; Behari, S.; Boyd, J.; Brochmann, M.; Culbertson, R.; Diesburg, M.; Freeman, J.; Garren, L.; Greenlee, H.; Herner, K.; Illingworth, R.; Jayatilaka, B.; Jonckheere, A.; Li, Q.; Naymola, S.; Oleynik, G.; Sakumoto, W.; Varnes, E.; Vellidis, C.; Watts, G.; White, S.
2017-04-01
The Fermilab Tevatron collider's data-taking run ended in September 2011, yielding a dataset with rich scientific potential. The CDF and D0 experiments each have approximately 9 PB of collider and simulated data stored on tape. A large computing infrastructure consisting of tape storage, disk cache, and distributed grid computing for physics analysis with the Tevatron data is present at Fermilab. The Fermilab Run II data preservation project intends to keep this analysis capability sustained through the year 2020 and beyond. To achieve this goal, we have implemented a system that utilizes virtualization, automated validation, and migration to new standards in both software and data storage technology and leverages resources available from currently-running experiments at Fermilab. These efforts have also provided useful lessons in ensuring long-term data access for numerous experiments, and enable high-quality scientific output for years to come.
CMSA: a heterogeneous CPU/GPU computing system for multiple similar RNA/DNA sequence alignment.
Chen, Xi; Wang, Chen; Tang, Shanjiang; Yu, Ce; Zou, Quan
2017-06-24
The multiple sequence alignment (MSA) is a classic and powerful technique for sequence analysis in bioinformatics. With the rapid growth of biological datasets, MSA parallelization becomes necessary to keep its running time at an acceptable level. Although there is a lot of work on MSA problems, existing approaches are either insufficient or contain implicit assumptions that limit their generality. First, the characteristics of users' sequences, including the sizes of the datasets and the lengths of the sequences, can be of arbitrary values and are generally unknown before submission, which previous work has unfortunately ignored. Second, the center star strategy is suited for aligning similar sequences, but its first stage, center sequence selection, is highly time-consuming and requires further optimization. Moreover, given a heterogeneous CPU/GPU platform, prior studies consider MSA parallelization on GPU devices only, leaving the CPUs idle during the computation. Co-run computation, however, can maximize the utilization of the computing resources by enabling workload computation on both the CPU and the GPU simultaneously. This paper presents CMSA, a robust and efficient MSA system for large-scale datasets on the heterogeneous CPU/GPU platform. It performs and optimizes multiple sequence alignment automatically for users' submitted sequences without any assumptions. CMSA adopts the co-run computation model so that both CPU and GPU devices are fully utilized. Moreover, CMSA proposes an improved center star strategy that reduces the time complexity of its center sequence selection process from O(mn^2) to O(mn). The experimental results show that CMSA achieves up to an 11× speedup and outperforms the state-of-the-art software. CMSA focuses on the alignment of multiple similar RNA/DNA sequences and proposes a novel bitmap-based algorithm to improve the center star strategy. We conclude that harvesting the high performance of modern GPUs is a promising approach to accelerating multiple sequence alignment, and that adopting the co-run computation model can significantly improve overall system utilization. The source code is available at https://github.com/wangvsa/CMSA .
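The center sequence selection being optimized can be stated in a few lines. A naive baseline (quadratic in the number of sequences, using Hamming distance on equal-length toy reads; not CMSA's bitmap algorithm):

    def center_sequence(seqs):
        """Classical center-star selection: return the sequence with the
        minimum summed distance to all others (toy Hamming-distance version)."""
        def dist(s, t):
            return sum(x != y for x, y in zip(s, t))
        totals = [sum(dist(s, t) for t in seqs) for s in seqs]
        return seqs[totals.index(min(totals))]

    reads = ["ACGTACGT", "ACGTACGA", "ACGAACGT", "TCGTACGT"]
    print(center_sequence(reads))   # the read closest to all the others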
Predictive simulation of gait at low gravity reveals skipping as the preferred locomotion strategy
Ackermann, Marko; van den Bogert, Antonie J.
2012-01-01
The investigation of gait strategies in low-gravity environments has recently gained momentum as manned missions to the Moon and to Mars are reconsidered. Although reports by astronauts of the Apollo missions indicate that alternative gait strategies might be favored on the Moon, computational simulations and experimental investigations have been almost exclusively limited to walking or running, the locomotion modes preferred under Earth's gravity. To investigate the gait strategies likely to be favored at low gravity, a series of predictive computational simulations of gait was performed using a physiological model of the musculoskeletal system, without assuming any particular type of gait. A computationally efficient optimization strategy was employed, allowing for multiple simulations. The results reveal skipping to be more efficient and less fatiguing than walking or running, and suggest the existence of a walk-skip rather than a walk-run transition at low gravity. These results are expected to serve as background for the design of experimental investigations of gait under simulated low gravity. PMID:22365845
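The paper's musculoskeletal optimization is far beyond a short example, but the underlying intuition can be illustrated with the classical Froude number: walking dynamics stay similar when Fr = v²/(gL) is held fixed, and the walk-run transition on Earth occurs near Fr ≈ 0.5, so the transition speed scales with √g and drops sharply at lunar gravity. A minimal sketch (the leg length is an assumed representative value):

```python
import math

FROUDE_TRANSITION = 0.5   # classic walk-run transition (dimensionless)
LEG_LENGTH = 0.9          # m, representative adult leg length (assumption)

GRAVITY = {"Earth": 9.81, "Mars": 3.71, "Moon": 1.62}  # m/s^2

for body, g in GRAVITY.items():
    v = math.sqrt(FROUDE_TRANSITION * g * LEG_LENGTH)
    print(f"{body}: pendular walking becomes impractical above ~{v:.2f} m/s")
```

On the Moon this threshold is roughly 40% of its Earth value, which is consistent with the paper's finding that gaits other than walking become attractive at low gravity.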
Simulation of a master-slave event set processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comfort, J.C.
1984-03-01
Event set manipulation may consume a considerable amount of the computation time spent in performing a discrete-event simulation. One way of minimizing this time is to allow event set processing to proceed in parallel with the remainder of the simulation computation. The paper describes a multiprocessor simulation computer in which all non-event-set processing is performed by the principal processor (called the host). Event set processing is coordinated by a front-end processor (the master) and actually performed by several other functionally identical processors (the slaves). A trace-driven simulation program modeling this system was constructed and run with trace output taken from two different simulation programs. Output from this simulation suggests that a significant reduction in run time may be realized by this approach. Sensitivity analysis was performed on the significant system parameters (number of slave processors, relative processor speeds, and interprocessor communication times). A comparison between actual and simulated run times for a one-processor system was used to assist in the validation of the simulation.
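The operations being offloaded to the master and slaves are the standard future-event-list primitives of discrete-event simulation: schedule an event at a future time and extract the earliest pending event. A minimal single-threaded sketch of that data structure, for orientation (the class and method names are ours, not from the paper):

```python
import heapq
import itertools

class EventSet:
    """Minimal future-event list: the schedule/next-event primitives that
    the paper's master-slave processor takes off the host's hands."""

    def __init__(self):
        self._heap = []
        self._tie = itertools.count()  # preserves FIFO order for ties

    def schedule(self, time: float, event) -> None:
        heapq.heappush(self._heap, (time, next(self._tie), event))

    def next_event(self):
        time, _, event = heapq.heappop(self._heap)
        return time, event

sim = EventSet()
sim.schedule(3.0, "depart")
sim.schedule(1.5, "arrive")
print(sim.next_event())   # (1.5, 'arrive')
```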
NASA Astrophysics Data System (ADS)
Calafiura, Paolo; Leggett, Charles; Seuster, Rolf; Tsulaia, Vakhtang; Van Gemmeren, Peter
2015-12-01
AthenaMP is a multi-process version of the ATLAS reconstruction, simulation and data analysis framework Athena. By leveraging Linux fork and copy-on-write mechanisms, it allows memory pages to be shared between event processors running on the same compute node with little to no change in the application code. Originally targeted at optimizing the memory footprint of reconstruction jobs, AthenaMP has demonstrated that it can reduce the memory usage of certain configurations of ATLAS production jobs by a factor of 2. AthenaMP has also evolved to become the parallel event-processing core of the recently developed ATLAS infrastructure for fine-grained event processing (Event Service), which allows AthenaMP to run inside massively parallel distributed applications on hundreds of compute nodes simultaneously. We present the architecture of AthenaMP, the various strategies it implements for scheduling workload to worker processes (for example, Shared Event Queue and Shared Distributor of Event Tokens), and its usage across the diverse ATLAS event-processing workloads on various computing resources: Grid, opportunistic resources and HPC.
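ATLAS code aside, the Shared Event Queue strategy can be illustrated with a toy analogue: load read-only data before forking, then let forked workers pull event tokens from a shared queue so that untouched memory pages stay shared via copy-on-write. A minimal sketch (the worker count and printed stand-in for event processing are placeholders):

```python
import multiprocessing as mp
import os

def worker(queue: mp.Queue) -> None:
    """Each forked worker pops event tokens until it sees the sentinel.
    Read-only data loaded before the fork stays in shared CoW pages."""
    while True:
        event = queue.get()
        if event is None:
            break
        print(f"pid {os.getpid()} processing event {event}")

if __name__ == "__main__":
    mp.set_start_method("fork")          # fork is what enables CoW sharing
    queue: mp.Queue = mp.Queue()
    workers = [mp.Process(target=worker, args=(queue,)) for _ in range(4)]
    for w in workers:
        w.start()
    for event in range(16):              # the shared event queue pattern
        queue.put(event)
    for _ in workers:
        queue.put(None)                  # one sentinel per worker
    for w in workers:
        w.join()
```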
Plancton: an opportunistic distributed computing project based on Docker containers
NASA Astrophysics Data System (ADS)
Concas, Matteo; Berzano, Dario; Bagnasco, Stefano; Lusso, Stefano; Masera, Massimo; Puccio, Maximiliano; Vallero, Sara
2017-10-01
The computing power of most modern commodity computers is far from being fully exploited by standard usage patterns. In this work we describe the development and setup of a virtual computing cluster based on Docker containers used as worker nodes. The facility is based on Plancton: a lightweight, fire-and-forget background service. Plancton spawns and controls a local pool of Docker containers on a host with free resources by constantly monitoring its CPU utilisation. It is designed to release the opportunistically allocated resources whenever the host user runs another demanding task, according to configurable policies; this is attained by killing a number of running containers. One of the advantages of a thin virtualization layer such as Linux containers is that containers can be started almost instantly upon request. We show how the fast start-up and disposal of containers enables an opportunistic cluster based on Plancton daemons without a central control node, where the spawned Docker containers behave as job pilots. Finally, we show how Plancton was configured to run up to 10 000 concurrent opportunistic jobs on the ALICE High-Level Trigger facility, giving a considerable management advantage compared to virtual machines.
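Plancton's actual policy engine and container configuration are not detailed in the abstract; the sketch below only illustrates the general pattern of an opportunistic daemon that spawns pilot containers when the host CPU is idle and reclaims resources when load rises. The thresholds, image name, and polling intervals are invented for illustration, and the script assumes the third-party psutil package and the Docker CLI are available:

```python
import subprocess
import time

import psutil  # third-party: pip install psutil

CPU_SPAWN_BELOW = 40.0   # % utilisation under which a container may be added (assumed policy)
CPU_KILL_ABOVE = 80.0    # % utilisation above which resources are reclaimed
IMAGE = "example/pilot"  # hypothetical job-pilot image

containers: list[str] = []

while True:  # fire-and-forget daemon loop
    load = psutil.cpu_percent(interval=5.0)
    if load < CPU_SPAWN_BELOW:
        cid = subprocess.check_output(
            ["docker", "run", "-d", "--rm", IMAGE]).decode().strip()
        containers.append(cid)
    elif load > CPU_KILL_ABOVE and containers:
        # The host user needs the CPU back: kill one pilot container
        subprocess.run(["docker", "kill", containers.pop()], check=False)
    time.sleep(10)
```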
1001 Ways to run AutoDock Vina for virtual screening
NASA Astrophysics Data System (ADS)
Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D.
2016-03-01
Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
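Points (1) and (2) can be made concrete with a small driver that docks one ligand per core while recording the random seed passed to each Vina invocation. The file layout and pool size are assumptions; the command-line flags shown (--receptor, --ligand, --out, --seed, --exhaustiveness, --cpu) are standard AutoDock Vina options:

```python
import multiprocessing as mp
import pathlib
import random
import subprocess

RECEPTOR = "receptor.pdbqt"                       # assumed input file
LIGANDS = sorted(pathlib.Path("ligands").glob("*.pdbqt"))

def dock(ligand: pathlib.Path) -> tuple[str, int]:
    seed = random.randrange(1, 2**31)             # capture seed for reproducibility
    subprocess.run([
        "vina", "--receptor", RECEPTOR,
        "--ligand", str(ligand),
        "--out", f"out/{ligand.stem}_docked.pdbqt",
        "--seed", str(seed),
        "--exhaustiveness", "8",
        "--cpu", "1",        # one core per ligand; the pool supplies the extra
    ], check=True)           # level of parallelization across ligands
    return ligand.name, seed

if __name__ == "__main__":
    pathlib.Path("out").mkdir(exist_ok=True)
    with mp.Pool(processes=mp.cpu_count()) as pool:
        for name, seed in pool.imap_unordered(dock, LIGANDS):
            print(name, seed)  # persist (ligand, seed) pairs to allow exact reruns
```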
Supersonic Retro-Propulsion Experimental Design for Computational Fluid Dynamics Model Validation
NASA Technical Reports Server (NTRS)
Berry, Scott A.; Laws, Christopher T.; Kleb, W. L.; Rhode, Matthew N.; Spells, Courtney; McCrea, Andrew C.; Truble, Kerry A.; Schauerhamer, Daniel G.; Oberkampf, William L.
2011-01-01
The development of supersonic retro-propulsion, an enabling technology for heavy-payload exploration missions to Mars, is the primary focus of the present paper. A new experimental model, intended to provide computational fluid dynamics model validation data, was recently designed for the Langley Research Center Unitary Plan Wind Tunnel Test Section 2. Pre-test computations were instrumental in sizing and refining the model over the Mach number range of 2.4 to 4.6 so that tunnel blockage and internal flow separation issues would be minimized. Based on the computational results, a 5-in diameter 70-deg sphere-cone forebody, which accommodates up to four 4:1 area-ratio nozzles, followed by a 10-in long cylindrical aftbody, was developed for this study. The model was designed to allow for a large number of surface pressure measurements on the forebody and aftbody. Supplemental data included high-speed Schlieren video and internal pressures and temperatures. The run matrix was developed to allow for the quantification of various sources of experimental uncertainty, such as random errors due to run-to-run variations and bias errors due to flow-field or model misalignments. Some preliminary results and observations from the test are presented, although detailed analyses of the data and uncertainties are still ongoing.
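As an illustration of how repeat runs in such a matrix quantify random error (the numbers below are invented, not test data), the run-to-run scatter of a pressure coefficient at one port can be summarized as a mean plus a Student-t coverage interval:

```python
import statistics

# Hypothetical Cp values from five repeat runs at one port and condition
repeat_runs = [0.912, 0.918, 0.905, 0.915, 0.910]

mean_cp = statistics.mean(repeat_runs)
sigma = statistics.stdev(repeat_runs)   # run-to-run (random) scatter
# Small-sample coverage uses a Student-t factor: t = 2.776 gives ~95%
# coverage for n = 5 repeats (4 degrees of freedom). This bounds the
# random uncertainty of a single run, not of the mean.
print(f"Cp = {mean_cp:.3f} +/- {2.776 * sigma:.3f} (random, ~95%)")
```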
NASA Technical Reports Server (NTRS)
Ibarreta, Alfonso F.; Driscoll, James F.; Feikema, Douglas A.; Salzman, Jack (Technical Monitor)
2001-01-01
The effect of flame stretch, composed of strain and curvature, plays a major role in the propagation of turbulent premixed flames. Although all forms of stretch (positive and negative) are present in turbulent conditions, little research has focused on the stretch due to curvature. The present study quantifies the Markstein number (which characterizes the sensitivity of the flame propagation speed to the imposed stretch rate) for an inwardly-propagating flame (IPF). This flame is of interest because it is negatively stretched and is subjected to curvature effects alone, without the competing effects of strain. In an extension of our previous work, microgravity experiments were run using a vortex-flame interaction to create a pocket of reactants surrounded by an IPF. Computations using the RUN-1DL code of Rogg were also performed in order to explain the measurements. It was found that the Markstein number of an inwardly-propagating flame, for both the microgravity experiment and the computations, is significantly larger than that of an outwardly-propagating flame. Further insight was gained by running the computations for the simplified (hypothetical) cases of one-step chemistry, unity Lewis number, and negligible heat release. The results provide additional evidence that the Markstein numbers associated with strain and curvature have different values.
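For orientation, the standard linear relation for weakly stretched premixed flames, which defines the Markstein number that the study measures; this is textbook background, not the paper's specific model (here S_L^0 is the unstretched laminar flame speed, κ the stretch rate, 𝓛 the Markstein length, and δ_f the flame thickness):

```latex
% Stretched flame speed departs linearly from the unstretched value:
S_L = S_L^0 - \mathcal{L}\,\kappa,
\qquad
\frac{S_L}{S_L^0} = 1 - \mathrm{Ma}\,\mathrm{Ka},
\qquad
\mathrm{Ma} = \frac{\mathcal{L}}{\delta_f},
\quad
\mathrm{Ka} = \frac{\kappa\,\delta_f}{S_L^0}
```

A larger Markstein number for the inwardly-propagating flame thus means a stronger reduction of flame speed per unit of (curvature-induced) stretch.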
Jian Yang; Hong S. He; Stephen R. Shifley; Frank R. Thompson; Yangjian Zhang
2011-01-01
Although forest landscape models (FLMs) have benefited greatly from ongoing advances in computer technology and software engineering, computing capacity remains a bottleneck in the design and development of FLMs. Computer memory overhead and run-time efficiency are the primary limiting factors when applying forest landscape models to simulate large landscapes with fine...
Grid site availability evaluation and monitoring at CMS
Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; ...
2017-10-01
The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute resources ranging from hundreds to well over ten thousand computing cores, and storage from tens of TBytes to tens of PBytes. In such a large computing setup, scheduled and unscheduled outages occur continually and must not significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC, the site evaluation and monitoring system is being overhauled to enable faster detection of and reaction to failures and a more dynamic handling of computing resources. Furthermore, enhancements are planned to better distinguish site issues from central service issues and to make evaluations more transparent and informative to site support staff.
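As a toy illustration of such an availability evaluation (the probe names and the 80% threshold are invented, not CMS policy), one can mark a probe interval good only if every critical test passed and report the fraction of good intervals per site:

```python
from collections import defaultdict

# (site, interval, test, passed) tuples as a monitoring system might emit
probes = [
    ("T2_XX_Example", 0, "CE-job-submit", True),
    ("T2_XX_Example", 0, "SRM-put", True),
    ("T2_XX_Example", 1, "CE-job-submit", False),
    ("T2_XX_Example", 1, "SRM-put", True),
]

# An interval is good only if all critical probes in it succeeded
interval_ok = defaultdict(lambda: True)
for site, interval, _test, passed in probes:
    interval_ok[(site, interval)] &= passed

by_site = defaultdict(list)
for (site, _interval), good in interval_ok.items():
    by_site[site].append(good)

for site, results in by_site.items():
    availability = sum(results) / len(results)
    status = "OK" if availability >= 0.8 else "degraded"
    print(f"{site}: availability {availability:.0%} -> {status}")
```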