Expression Templates for Truncated Power Series
NASA Astrophysics Data System (ADS)
Cary, John R.; Shasharina, Svetlana G.
1997-05-01
Truncated power series are used extensively in accelerator transport modeling for rapid tracking and analysis of nonlinearity. Such mathematical objects are naturally represented computationally as objects in C++, which is more intuitive and, through operator overloading, produces more transparent code. However, using C++ objects often incurs a computational speed loss due, e.g., to the creation of temporaries. We have developed a subset of truncated power series expression templates (http://monet.uwaterloo.ca/blitz/). Such expression templates use the powerful template-processing facility of C++ to combine complicated expressions into series operations that execute more rapidly. We compare computational speeds with existing truncated power series libraries.
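To make the objects concrete, here is a minimal sketch (in Python for brevity; the paper's work is in C++) of truncated power series arithmetic via operator overloading. The class name and truncation convention are illustrative, and the sketch shows only the object semantics, not the expression-template fusion the paper uses to eliminate temporaries.

```python
class TPS:
    """Truncated power series in one variable, kept to a fixed order."""
    def __init__(self, coeffs, order):
        self.order = order
        self.c = (list(coeffs) + [0.0] * (order + 1))[: order + 1]

    def __add__(self, other):
        return TPS([a + b for a, b in zip(self.c, other.c)], self.order)

    def __mul__(self, other):
        out = [0.0] * (self.order + 1)
        for i, a in enumerate(self.c):
            for j, b in enumerate(other.c):
                if i + j <= self.order:
                    out[i + j] += a * b   # terms beyond the truncation order are dropped
        return TPS(out, self.order)

# x represents the series 0 + 1*t, truncated at order 3
x = TPS([0.0, 1.0], order=3)
one = TPS([1.0], order=3)
print(((one + x) * (one + x)).c)   # (1 + t)^2 -> [1.0, 2.0, 1.0, 0.0]
```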
1982-06-01
[Distribution of this document is unlimited.] TABLE OF CONTENTS: Appendix A, Scope of Work; Appendix B, Merge and Cost Program Documentation; Appendix C, FATSCO... Program to Compute Time Series Frequency Relationships; Appendix D, HEC-DSS - Time Series Data File Management System; Appendix E, Plan 1 - Time Series Data Plots and Annual... University of Minnesota, utilized an early version of the Hydrologic Engineering Center's (HEC) HEC-5C Computer Program. HEC is a Corps of Engineers
Implementation of NASTRAN on the IBM/370 CMS operating system
NASA Technical Reports Server (NTRS)
Britten, S. S.; Schumacker, B.
1980-01-01
The NASA Structural Analysis (NASTRAN) computer program is operational on the IBM 360/370 series computers. While execution of NASTRAN has been described and implemented under the virtual storage operating systems of the IBM 370 models, the IBM 370/168 computer can also operate in a time-sharing mode under the virtual machine operating system using the Conversational Monitor System (CMS) subset. The changes required to make NASTRAN operational under the CMS operating system are described.
System analysis for the Huntsville Operation Support Center distributed computer system
NASA Technical Reports Server (NTRS)
Ingels, F. M.
1986-01-01
A simulation model of the NASA Huntsville Operational Support Center (HOSC) was developed. This simulation model emulates the HYPERchannel Local Area Network (LAN) that ties together the various computers of HOSC. The HOSC system is a large installation of mainframe computers such as the Perkin-Elmer 3200 series and the DEC VAX series. A series of six simulation exercises of the HOSC model is described using data sets provided by NASA. An analytical treatment of the ETHERNET LAN and of the video terminal (VT) distribution system is presented, together with an interface analysis of the smart-terminal network model that allows the data-flow requirements imposed by VTs on the ETHERNET LAN to be estimated.
Operation of the HP2250 with the HP9000 series 200 using PASCAL 3.0
NASA Technical Reports Server (NTRS)
Perry, John; Stroud, C. W.
1986-01-01
A computer program has been written to provide an interface between the HP Series 200 desktop computers, operating under HP Standard Pascal 3.0, and the HP2250 Data Acquisition and Control System. Pascal 3.0 for the HP9000 desktop computer provides a number of procedures for handling bus communication at various levels. It is necessary, however, to reach the lowest possible level in Pascal to handle the bus protocols required by the HP2250, which makes programming extremely complex since these protocols are not documented. The program described solves those problems and allows the user to program, simply and efficiently, any Measurement and Control Language (MCL/50) application with a few procedure calls. The complete set of procedures is available on a 5 1/4 inch diskette from COSMIC. Included in this group of procedures is an exerciser that allows the user to exercise the HP2250 interactively. The exerciser operates in a fashion similar to the Series 200 operating system programs, but is adapted to the requirements of the HP2250. The programs on the diskette and the user's manual assume that the user is acquainted with both the MCL/50 programming language and HP Standard Pascal 3.0 for the HP Series 200 desktop computers.
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2011 CFR
2011-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2012 CFR
2012-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2014 CFR
2014-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2010 CFR
2010-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar data produced for...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2013 CFR
2013-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... funds; (ii) Studies, analyses, test data, or similar data produced for this contract, when the study...
Programs for Testing Processor-in-Memory Computing Systems
NASA Technical Reports Server (NTRS)
Katz, Daniel S.
2006-01-01
The Multithreaded Microbenchmarks for Processor-In-Memory (PIM) Compilers, Simulators, and Hardware are computer programs arranged in a series for use in testing the performance of PIM computing systems, including compilers, simulators, and hardware. The programs at the beginning of the series test basic functionality; the programs at subsequent positions in the series test increasingly complex functionality. The programs are intended to be used while designing a PIM system, and can be used to verify that compilers, simulators, and hardware work correctly. The programs can also be used to enable designers of these system components to examine tradeoffs in implementation. Finally, these programs can be run on non-PIM hardware (either single-threaded or multithreaded) using the POSIX pthreads standard to verify that the benchmarks themselves operate correctly. [POSIX (Portable Operating System Interface for UNIX) is a set of standards that define how programs and operating systems interact with each other. pthreads is a library of pre-emptive thread routines that comply with one of the POSIX standards.]
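As a hedged illustration of what the simplest benchmark in such a series might look like (the actual suite is not reproduced here, and Python's threading module stands in for POSIX pthreads), the following verifies basic multithreaded functionality by checking partial sums against a single-threaded reference:

```python
import threading

N_THREADS, N_ITEMS = 4, 1_000_000
partial = [0] * N_THREADS

def worker(tid):
    lo = tid * (N_ITEMS // N_THREADS)
    hi = (tid + 1) * (N_ITEMS // N_THREADS)
    partial[tid] = sum(range(lo, hi))   # each thread owns one slot: no lock needed

threads = [threading.Thread(target=worker, args=(t,)) for t in range(N_THREADS)]
for t in threads: t.start()
for t in threads: t.join()

assert sum(partial) == N_ITEMS * (N_ITEMS - 1) // 2   # single-threaded reference
print("multithreaded result verified")
```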
NASA Technical Reports Server (NTRS)
1975-01-01
The NASA Structural Analysis (NASTRAN) computer program is operational on three series of third-generation computers. The problems and difficulties involved in adapting NASTRAN to a fourth-generation computer, namely the Control Data STAR-100, are discussed. The salient features which distinguish the Control Data STAR-100 from third-generation computers are its hardware vector-processing capability and virtual memory. A feasible method is presented for transferring NASTRAN to the Control Data STAR-100 system while retaining much of the machine-independent code. Basic matrix operations are identified as candidates for optimization through vector processing.
Recent Advances and Issues in Computers. Oryx Frontiers of Science Series.
ERIC Educational Resources Information Center
Gay, Martin K.
Discussing recent issues in computer science, this book contains 11 chapters covering: (1) developments that have the potential for changing the way computers operate, including microprocessors, mass storage systems, and computing environments; (2) the national computational grid for high-bandwidth, high-speed collaboration among scientists, and…
Non-Determinism: An Abstract Concept in Computer Science Studies
ERIC Educational Resources Information Center
Armoni, Michal; Gal-Ezer, Judith
2007-01-01
Non-determinism is one of the most important, yet abstract, recurring concepts of Computer Science. It plays an important role in Computer Science areas such as formal language theory, computability theory, distributed computing, and operating systems. We conducted a series of studies on the perception of non-determinism. In the current research,…
NASA Astrophysics Data System (ADS)
Nikolaev, A. S.
2015-03-01
We study the structure of the canonical Poincaré-Lindstedt perturbation series in the Deprit operator formalism and establish its connection to the Kato resolvent expansion. A discussion of invariant definitions for the averaging and integrating perturbation operators and their canonical identities reveals a regular pattern in the series for the Deprit generator. This regularity is explained using Kato series and the relation of the perturbation operators to the Laurent coefficients of the resolvent of the Liouville operator. This purely canonical approach systematizes the series and leads to an explicit expression for the Deprit generator in any order of perturbation theory, expressed through the partial pseudoinverse of the perturbed Liouville operator. The corresponding Kato series provides a reasonably effective computational algorithm. The canonical connection between the perturbed and unperturbed averaging operators allows the ambiguities in the generator and the transformed Hamiltonian to be described, while the Gustavson integrals turn out to be insensitive to the normalization style. Nonperturbative examples are used for illustration.
Experience with a UNIX based batch computing facility for H1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhards, R.; Kruener-Marquis, U.; Szkutnik, Z.
1994-12-31
A UNIX based batch computing facility for the H1 experiment at DESY is described. The ultimate goal is to replace the DESY IBM mainframe by a multiprocessor SGI Challenge series computer, using the UNIX operating system, for most of the computing tasks in H1.
NASA Technical Reports Server (NTRS)
Roberts, Floyd E., III
1994-01-01
Software provides for control and acquisition of data from an optical pyrometer. There are six individual programs in the PYROLASER package, providing a quick and easy way to set up, control, and program the standard Pyrolaser. Temperature and emissivity measurements are either collected as if the Pyrolaser were in manual operating mode or displayed on real-time strip charts and stored in standard spreadsheet format for posttest analysis. A shell is supplied so that test-specific macros can be added to the system easily. Written using LabVIEW software for use on Macintosh-series computers running System 6.0.3 or later, Sun SPARC-series computers running OpenWindows 3.0 or MIT's X Window System (X11R4 or X11R5), and IBM PC or compatible computers running Microsoft Windows 3.1 or later.
McKenna, Thomas M; Bawa, Gagandeep; Kumar, Kamal; Reifman, Jaques
2007-04-01
The physiology analysis system (PAS) was developed as a resource to support the efficient warehousing, management, and analysis of physiology data, particularly, continuous time-series data that may be extensive, of variable quality, and distributed across many files. The PAS incorporates time-series data collected by many types of data-acquisition devices, and it is designed to free users from data management burdens. This Web-based system allows both discrete (attribute) and time-series (ordered) data to be manipulated, visualized, and analyzed via a client's Web browser. All processes occur on a server, so that the client does not have to download data or any application programs, and the PAS is independent of the client's computer operating system. The PAS contains a library of functions, written in different computer languages that the client can add to and use to perform specific data operations. Functions from the library are sequentially inserted into a function chain-based logical structure to construct sophisticated data operators from simple function building blocks, affording ad hoc query and analysis of time-series data. These features support advanced mining of physiology data.
Statistical fingerprinting for malware detection and classification
Prowell, Stacy J.; Rathgeb, Christopher T.
2015-09-15
A system detects malware in a computing architecture with an unknown pedigree. The system includes a first computing device having a known pedigree and operating free of malware. The first computing device executes a series of instrumented functions that, when executed, provide a statistical baseline representative of the time a known software application takes to run on a computing device of known pedigree. A second computing device executes a second series of instrumented functions that, when executed, provide the actual time the known software application takes to run on the second computing device. The system detects malware when there is a difference in execution times between the first and second computing devices.
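A toy sketch of the timing-baseline idea (not the patented system; the workload, run counts, and 3-sigma decision rule are all illustrative):

```python
import statistics, time

def instrumented_workload(n=50_000):
    # stand-in for one instrumented function from the series
    s = 0
    for i in range(n):
        s += i * i
    return s

def timings(runs=30):
    out = []
    for _ in range(runs):
        t0 = time.perf_counter()
        instrumented_workload()
        out.append(time.perf_counter() - t0)
    return out

baseline = timings()                          # on the known-pedigree machine
mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)

observed = statistics.mean(timings(runs=5))   # on the machine under test
if abs(observed - mu) > 3 * sigma:            # simple 3-sigma anomaly rule
    print("timing anomaly: possible interference or malware")
else:
    print("within baseline")
```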
NASA Technical Reports Server (NTRS)
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100, and CDC 6000 series computers.
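For example, the Level-1 operation y ← a·x + y ("AXPY") is one of the basic kernels the library standardized. A sketch using the BLAS bindings shipped with SciPy (assumed available) rather than the original FORTRAN calling sequence:

```python
import numpy as np
from scipy.linalg.blas import daxpy   # double-precision AXPY from BLAS

x = np.array([1.0, 2.0, 3.0])
y = np.array([10.0, 10.0, 10.0])
y = daxpy(x, y, a=2.0)                # y = 2*x + y
print(y)                              # [12. 14. 16.]
```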
Generating series for GUE correlators
NASA Astrophysics Data System (ADS)
Dubrovin, Boris; Yang, Di
2017-11-01
We extend to the Toda lattice hierarchy the approach of Bertola et al. (Phys D Nonlinear Phenom 327:30-57, 2016; IMRN, 2016) to computing logarithmic derivatives of tau-functions in terms of the so-called matrix resolvents of the corresponding difference Lax operator. As a particular application we obtain explicit generating series for connected GUE correlators. On this basis an efficient recursive procedure for computing the correlators in all genera is developed.
A note on an attempt at more efficient Poisson series evaluation [for lunar libration]
NASA Technical Reports Server (NTRS)
Shelus, P. J.; Jefferys, W. H., III
1975-01-01
A substantial reduction has been achieved in the time necessary to compute lunar libration series. The method involves eliminating many of the trigonometric function calls by a suitable transformation and applying a short SNOBOL processor to the FORTRAN coding of the transformed series, which obviates many of the multiplication operations during the course of series evaluation. It is possible to accomplish similar results quite easily with other Poisson series.
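The general flavor of the optimization can be sketched as follows: when a Poisson series requires sin(ku) and cos(ku) for many multiples k, the angle-addition recurrence replaces one trigonometric-function call per term with two calls total. This is an illustration of the idea, not the authors' SNOBOL/FORTRAN implementation:

```python
import math

def poisson_series_sum(coeffs, u):
    """Evaluate sum_k coeffs[k-1] * sin(k*u) for k = 1..len(coeffs)
    using two trig calls in total instead of one per term."""
    s1, c1 = math.sin(u), math.cos(u)            # the only trig calls
    s, c = s1, c1
    total = coeffs[0] * s
    for a in coeffs[1:]:
        # sin((k+1)u) = sin(ku)cos(u) + cos(ku)sin(u), and similarly for cos
        s, c = s * c1 + c * s1, c * c1 - s * s1
        total += a * s
    return total

print(poisson_series_sum([1.0, 0.5, 0.25], 0.3))
print(sum(a * math.sin((k + 1) * 0.3)            # direct evaluation, for comparison
          for k, a in enumerate([1.0, 0.5, 0.25])))
```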
Computing Operating Characteristics Of Bearing/Shaft Systems
NASA Technical Reports Server (NTRS)
Moore, James D.
1996-01-01
The SHABERTH computer program predicts the operating characteristics of bearings in a multibearing load-support system. Both lubricated and nonlubricated bearings are modeled. The program calculates the loads, torques, temperatures, and fatigue lives of ball and/or roller bearings on a single shaft, and provides for analysis of the reaction of the system to termination of the supply of lubricant to the bearings and other lubricated mechanical elements. It is valuable in the design and analysis of shaft/bearing systems. Two versions of SHABERTH are available: a Cray version (LEW-14860, "Computing Thermal Performances Of Shafts and Bearings") and an IBM PC version (MFS-28818) written for IBM PC-series and compatible computers running MS-DOS.
Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach
NASA Technical Reports Server (NTRS)
Mak, Victor W. K.
1986-01-01
Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
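A toy sketch of the hierarchical-decomposition idea (not the paper's queueing-network procedure, which handles stochastic service times): a series-parallel task graph is reduced bottom-up, serial sections summing and parallel sections taking the maximum of their children's completion times, here with deterministic durations:

```python
def completion_time(node):
    kind = node[0]
    if kind == "task":
        return node[1]                            # ("task", duration)
    times = [completion_time(ch) for ch in node[1:]]
    return sum(times) if kind == "series" else max(times)

graph = ("series",
         ("task", 2.0),
         ("parallel",
          ("task", 3.0),
          ("series", ("task", 1.0), ("task", 1.5))),
         ("task", 0.5))
print(completion_time(graph))   # 2.0 + max(3.0, 2.5) + 0.5 = 5.5
```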
Field evaluation of a wireless handheld computer for railroad roadway workers.
DOT National Transportation Integrated Search
2009-01-31
This report is the third in a series describing the development and evaluation of a software application to facilitate communications for railroad roadway workers using a wireless handheld computer. The current prototype operated on a cell phone inte...
Tomkins, James L [Albuquerque, NM; Camp, William J [Albuquerque, NM
2007-07-17
A multiple processor computing apparatus includes a physical interconnect structure that is flexibly configurable to support selective segregation of classified and unclassified users. The physical interconnect structure includes routers in service or compute processor boards distributed in an array of cabinets connected in series on each board and to respective routers in neighboring row cabinet boards with the routers in series connection coupled to routers in series connection in respective neighboring column cabinet boards. The array can include disconnect cabinets or respective routers in all boards in each cabinet connected in a toroid. The computing apparatus can include an emulator which permits applications from the same job to be launched on processors that use different operating systems.
ALOHA System Technical Reports 16, 19, 24, 28, and 30, 1974.
ERIC Educational Resources Information Center
Hawaii Univ., Honolulu. ALOHA System.
A series of technical reports based on the Aloha System for educational computer programs provide a background on how various countries in the Pacific region developed computer capabilities and describe their current operations, as well as prospects for future expansion. Included are studies on the Japan-Hawaii TELEX and Satellite; computers at…
A two-dimensional graphing program for the Tektronix 4050-series graphics computers
Kipp, K.L.
1983-01-01
A refined, two-dimensional graph-plotting program was developed for use on Tektronix 4050-series graphics computers. Important features of this program include: any combination of logarithmic and linear axes, optional automatic scaling and numbering of the axes, multiple-curve plots, character or drawn symbol-point plotting, optional cartridge-tape data input and plot-format storage, optional spline fitting for smooth curves, and built-in data-editing options. The program is run while the Tektronix is not connected to any large auxiliary computer, although data from files on an auxiliary computer easily can be transferred to data-cartridge for later plotting. The user is led through the plot-construction process by a series of questions and requests for data input. Five example plots are presented to illustrate program capability and the sequence of program operation. (USGS)
48 CFR 52.227-20 - Rights in Data-SBIR Program.
Code of Federal Regulations, 2011 CFR
2011-10-01
... series of operations; and (ii) Recorded information comprising source code listings, design details...) Means (i) Computer programs that comprise a series of instructions, rules, routines, or statements... small business innovation research contract issued under the authority of 15 U.S.C. 638, which data are...
48 CFR 52.227-20 - Rights in Data-SBIR Program.
Code of Federal Regulations, 2010 CFR
2010-10-01
... series of operations; and (ii) Recorded information comprising source code listings, design details...) Means (i) Computer programs that comprise a series of instructions, rules, routines, or statements... small business innovation research contract issued under the authority of 15 U.S.C. 638, which data are...
48 CFR 52.227-20 - Rights in Data-SBIR Program.
Code of Federal Regulations, 2013 CFR
2013-10-01
... series of operations; and (ii) Recorded information comprising source code listings, design details...) Means (i) Computer programs that comprise a series of instructions, rules, routines, or statements... small business innovation research contract issued under the authority of 15 U.S.C. 638, which data are...
48 CFR 52.227-20 - Rights in Data-SBIR Program.
Code of Federal Regulations, 2012 CFR
2012-10-01
... series of operations; and (ii) Recorded information comprising source code listings, design details...) Means (i) Computer programs that comprise a series of instructions, rules, routines, or statements... small business innovation research contract issued under the authority of 15 U.S.C. 638, which data are...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-01
.... Securities Offering. Series 86 Research Analyst--Analysis..... From $160 to $175. Series 87 Research Analyst... Order Processing Assistant Representatives, Research Analysts and Operations Professionals, respectively... examination. PROCTOR is a computer system that is specifically designed for the administration and...
48 CFR 52.227-20 - Rights in Data-SBIR Program.
Code of Federal Regulations, 2014 CFR
2014-10-01
... series of operations; and (ii) Recorded information comprising source code listings, design details...) Means (i) Computer programs that comprise a series of instructions, rules, routines, or statements... small business innovation research contract issued under the authority of 15 U.S.C. 638, which data are...
Using trees to compute approximate solutions to ordinary differential equations exactly
NASA Technical Reports Server (NTRS)
Grossman, Robert
1991-01-01
Some recent work is reviewed which relates families of trees to symbolic algorithms for the exact computation of series which approximate solutions of ordinary differential equations. It turns out that the vector space whose basis is the set of finite, rooted trees carries a natural multiplication related to the composition of differential operators, making the space of trees an algebra. This algebraic structure can be exploited to yield a variety of algorithms for manipulating vector fields and the series and algebras they generate.
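The computation these tree algebras organize can be sketched directly: for x' = f(x), the Taylor coefficients of the solution come from iterated application of the vector field as a differential operator, x^(k+1) = (f d/dx)^k f. Here sympy stands in for the specialized symbolic algorithms, and the example vector field and initial condition are illustrative:

```python
import sympy as sp

x = sp.symbols("x")
f = x**2 + 1                       # example vector field for x' = f(x)

deriv, coeffs = f, [f]
for _ in range(3):
    deriv = sp.expand(sp.diff(deriv, x) * f)   # Lie derivative along f
    coeffs.append(deriv)

x0 = 0                             # x(0) = 0, so x(t) = tan(t)
taylor = [sp.Rational(1, sp.factorial(k + 1)) * c.subs(x, x0)
          for k, c in enumerate(coeffs)]
print(taylor)                      # series of tan(t): coefficients 1, 0, 1/3, 0
```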
Computer controlled fluorometer device and method of operating same
Kolber, Z.; Falkowski, P.
1990-07-17
A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means. 13 figs.
Computer controlled fluorometer device and method of operating same
Kolber, Zbigniew; Falkowski, Paul
1990-01-01
A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means.
The engineering design integration (EDIN) system. [digital computer program complex
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Reiners, S. J.
1974-01-01
A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.
NASA Astrophysics Data System (ADS)
Lu, Yi; Haverkort, Maurits W.
2017-12-01
We present a nonperturbative, divergence-free series expansion of Green's functions using effective operators. The method is especially suited for computing correlators of complex operators as a series of correlation functions of simpler forms. We apply the method to study low-energy excitations in resonant inelastic x-ray scattering (RIXS) in doped one- and two-dimensional single-band Hubbard models. The RIXS operator is expanded into polynomials of spin, density, and current operators weighted by fundamental x-ray spectral functions. These operators couple to different polarization channels resulting in simple selection rules. The incident photon energy dependent coefficients help to pinpoint main RIXS contributions from different degrees of freedom. We show in particular that, with parameters pertaining to cuprate superconductors, local spin excitation dominates the RIXS spectral weight over a wide doping range in the cross-polarization channel.
Subtlenoise: sonification of distributed computing operations
NASA Astrophysics Data System (ADS)
Love, P. A.
2015-12-01
The operation of distributed computing systems requires comprehensive monitoring to ensure reliability and robustness. There are two components found in most monitoring systems: one being visually rich time-series graphs and another being notification systems for alerting operators under certain pre-defined conditions. In this paper the sonification of monitoring messages is explored using an architecture that fits easily within existing infrastructures based on mature open-source technologies such as ZeroMQ, Logstash, and SuperCollider (a synth engine). Message attributes are mapped onto audio attributes based on broad classification of the message (continuous or discrete metrics) but keeping the audio stream subtle in nature. The benefits of audio rendering are described in the context of distributed computing operations and may provide a less intrusive way to understand the operational health of these systems.
NASA Technical Reports Server (NTRS)
Martini, W. R.
1981-01-01
A series of computer programs is presented, with full documentation, simulating the transient behavior of a modern four-cylinder Siemens-arrangement Stirling engine with burner and air preheater. Cold start, cranking, idling, acceleration through three gear changes, and steady-speed operation are simulated. Sample results and complete operating instructions are given. A full source-code listing of all programs is included.
Connecting Neural Coding to Number Cognition: A Computational Account
ERIC Educational Resources Information Center
Prather, Richard W.
2012-01-01
The current study presents a series of computational simulations that demonstrate how the neural coding of numerical magnitude may influence number cognition and development. This includes behavioral phenomena cataloged in cognitive literature such as the development of numerical estimation and operational momentum. Though neural research has…
Laboratory services series: a programmed maintenance system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuxbury, D.C.; Srite, B.E.
1980-01-01
The diverse facilities, operations and equipment at a major national research and development laboratory require a systematic, analytical approach to operating equipment maintenance. A computer-scheduled preventive maintenance program is described including program development, equipment identification, maintenance and inspection instructions, scheduling, personnel, and equipment history.
Print Station Operation. Microcomputing Working Paper Series.
ERIC Educational Resources Information Center
Wozny, Lucy Anne
During the academic year 1983-84, Drexel University instituted a new policy requiring all incoming students to have access to a microcomputer. The computer chosen to fulfill this requirement was the Macintosh from Apple Computer, Inc. Although this requirement put an additional financial burden on the Drexel student, the university administration…
Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.
2012-01-01
The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.
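A sketch of the kind of reduction TSPROC performs, using pandas (assumed available; TSPROC itself is driven by its own scripting language, and the synthetic daily flow series below is illustrative): daily flows are reduced to annual volumes and seasonal means, the sort of derived series passed to PEST as calibration targets. The "YE" resample alias assumes pandas 2.2 or later.

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2000-01-01", "2002-12-31", freq="D")
flow = pd.Series(5 + 3 * np.sin(2 * np.pi * idx.dayofyear / 365.25), index=idx)

annual_volume = flow.resample("YE").sum()                 # flow volume per year
seasonal_mean = flow.groupby(flow.index.quarter).mean()   # mean flow by quarter
print(annual_volume.round(1))
print(seasonal_mean.round(2))
```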
Code of Federal Regulations, 2010 CFR
2010-10-01
... rather than paragraph (d)(3) in contracts for basic or applied research with educational institutions... which are data comprising a series of instructions, rules, routines, or statements, regardless of the media in which recorded, that allow or cause a computer to perform a specific operation or series of...
Code of Federal Regulations, 2014 CFR
2014-10-01
... rather than paragraph (d)(3) in contracts for basic or applied research with educational institutions... which are data comprising a series of instructions, rules, routines, or statements, regardless of the media in which recorded, that allow or cause a computer to perform a specific operation or series of...
Code of Federal Regulations, 2013 CFR
2013-10-01
... rather than paragraph (d)(3) in contracts for basic or applied research with educational institutions... which are data comprising a series of instructions, rules, routines, or statements, regardless of the media in which recorded, that allow or cause a computer to perform a specific operation or series of...
Code of Federal Regulations, 2012 CFR
2012-10-01
... rather than paragraph (d)(3) in contracts for basic or applied research with educational institutions... which are data comprising a series of instructions, rules, routines, or statements, regardless of the media in which recorded, that allow or cause a computer to perform a specific operation or series of...
Code of Federal Regulations, 2011 CFR
2011-10-01
... rather than paragraph (d)(3) in contracts for basic or applied research with educational institutions... which are data comprising a series of instructions, rules, routines, or statements, regardless of the media in which recorded, that allow or cause a computer to perform a specific operation or series of...
Systems Management of Air Force Standard Communications-Computer systems: There is a Better Way
1988-04-01
upgrade or replacement of systems. AFR 700-6, Information Systems Operation Management; AFR 700-7, Information Processing Center Operations Management; and AFR 700-8, Telephone Systems Operation Management provide USAF guidance, policy, and procedures governing this phase. 2. 800-Series Regulations
A Quantitative Model for Assessing Visual Simulation Software Architecture
2011-09-01
ERIC Educational Resources Information Center
Piele, Philip K.
This document shows how computer technology can aid educators in meeting demands for improved class scheduling and more efficient use of transportation resources. The first section surveys literature on operational systems that provide individualized scheduling for students, varied class structures, and maximum use of space and staff skills.…
Math Activities Using LogoWriter--Numbers & Operations.
ERIC Educational Resources Information Center
Flewelling, Gary
This book is one in a series of teacher resource books developed to: (1) rescue students from the clutches of computers that drill and control; and (2) supply teachers with computer activities compatible with a mathematics program that emphasizes investigation, problem solving, creativity, and hypothesis making and testing. This is not a book…
Logo Burn-In. Microcomputing Working Paper Series.
ERIC Educational Resources Information Center
Drexel Univ., Philadelphia, PA. Microcomputing Program.
This paper describes a hot-stamping operation undertaken at Drexel University in an attempt to prevent computer theft on campus. The program was initiated in response to the University's anticipated receipt of up to 3,000 Macintosh microcomputers per year and the consequent publicity the university was receiving. All clusters of computers (e.g.,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tournier, J.; El-Genk, M.S.; Huang, L.
1999-01-01
The Institute of Space and Nuclear Power Studies at the University of New Mexico has developed a computer simulation of cylindrical-geometry alkali metal thermal-to-electric converter cells using a standard Fortran 77 computer code. The objective and use of this code were to compare experimental measurements with computer simulations, upgrade the model as appropriate, and investigate various methods to improve the design and performance of the devices for improved efficiency, durability, and longer operational lifetime. The Institute participated in vacuum testing of PX-series alkali metal thermal-to-electric converter cells and developed the alkali metal thermal-to-electric converter Performance Evaluation and Analysis Model. This computer model consists of a sodium pressure-loss model, a cell electrochemical and electric model, and a radiation/conduction heat transfer model. The code closely predicted the operation and performance of a wide variety of PX-series cells, which led to suggestions for improvements to both lifetime and performance. The code provides valuable insight into the operation of the cell, predicts parameters of components within the cell, and is a useful tool for predicting both the transient and steady-state performance of systems of cells.
Self-calibrating multiplexer circuit
Wahl, Chris P.
1997-01-01
A time domain multiplexer system with automatic determination of acceptable multiplexer output limits, error determination, or correction is comprised of a time domain multiplexer, a computer, a constant current source capable of at least three distinct current levels, and two series resistances employed for calibration and testing. A two point linear calibration curve defining acceptable multiplexer voltage limits may be defined by the computer by determining the voltage output of the multiplexer to very accurately known input signals developed from predetermined current levels across the series resistances. Drift in the multiplexer may be detected by the computer when the output voltage limits, expected during normal operation, are exceeded, or the relationship defined by the calibration curve is invalidated.
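A minimal sketch of the two-point calibration step (values illustrative, not from the patent): two precisely known inputs define a line mapping raw multiplexer readings to true values, and readings falling outside a tolerance band around that line indicate drift.

```python
def two_point_calibration(v_raw_lo, v_true_lo, v_raw_hi, v_true_hi):
    """Build the linear map raw -> true from two known calibration points."""
    gain = (v_true_hi - v_true_lo) / (v_raw_hi - v_raw_lo)
    offset = v_true_lo - gain * v_raw_lo
    return lambda v_raw: gain * v_raw + offset

# known inputs: two current levels across a precisely known series resistance
cal = two_point_calibration(0.102, 0.100, 1.015, 1.000)
print(round(cal(0.507), 4))   # corrected reading for a raw value of 0.507
```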
Koopman Operator Framework for Time Series Modeling and Analysis
NASA Astrophysics Data System (ADS)
Surana, Amit
2018-01-01
We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties without requiring the explicit knowledge of the generative model. We also introduce different notions of distances on the space of such model forms which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance in conjunction with classical machine learning techniques to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in power grid application.
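One standard, data-driven route to Koopman spectral properties, consistent with (though far simpler than) the paper's framework, is dynamic mode decomposition on snapshot pairs. A minimal numpy sketch with a synthetic linear system standing in for observed data:

```python
import numpy as np

# synthetic time series from a known linear system (stand-in for observed data)
A_true = np.array([[0.9, -0.2], [0.2, 0.9]])
X = np.empty((2, 100))
X[:, 0] = [1.0, 0.0]
for k in range(99):
    X[:, k + 1] = A_true @ X[:, k]

X0, X1 = X[:, :-1], X[:, 1:]        # snapshot pairs (x_k, x_{k+1})
A_dmd = X1 @ np.linalg.pinv(X0)     # least-squares finite-dimensional Koopman/DMD operator
eigvals = np.linalg.eigvals(A_dmd)  # Koopman spectral properties identified from data
print(np.round(eigvals, 3))         # recovers 0.9 +/- 0.2i
```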
A graphics subsystem retrofit design for the bladed-disk data acquisition system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Carney, R. R.
1983-01-01
A graphics subsystem retrofit design for the turbojet blade vibration data acquisition system is presented. The graphics subsystem will operate in two modes, permitting the system operator to view blade vibrations on an oscilloscope type of display. The first mode is a real-time mode that displays only gross blade characteristics, such as maximum deflections and standing waves. This mode is used to aid the operator in determining when to collect detailed blade vibration data. The second mode of operation is a post-processing mode that will animate the actual blade vibrations using the detailed data collected on an earlier data collection run. The operator can vary the rate of playback to view differing characteristics of blade vibrations. The heart of the graphics subsystem is a modified version of AMD's "super sixteen" computer, called the graphics preprocessor computer (GPC). This computer is based on AMD's 2900 series of bit-slice components.
Computer Drawing Method for Operating Characteristic Curve of PV Power Plant Array Unit
NASA Astrophysics Data System (ADS)
Tan, Jianbin
2018-02-01
In the engineering design of large-scale grid-connected photovoltaic power stations, and in many simulation and analysis systems, it is necessary to draw accurate operating characteristic curves of photovoltaic array units by computer; a segmented nonlinear interpolation algorithm is proposed for this purpose. The calculation method takes the component performance parameters as its main design basis, from which the computer derives five characteristic performance points of a PV module. Combined with the series and parallel connection of the PV array, the computer drawing of the performance curve of the PV array unit can then be realized. The specific data can also be passed to PV development software, improving the operation of PV array units in practical applications.
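The series/parallel scaling step can be sketched as follows (module data points and array sizes are illustrative, and np.interp supplies piecewise-linear rather than the paper's segmented nonlinear interpolation): an array unit's I-V curve follows from a module's curve by scaling voltage by the number of modules in series and current by the number of parallel strings.

```python
import numpy as np

v_mod = np.array([0.0, 10.0, 20.0, 28.0, 30.0])   # module voltage samples [V]
i_mod = np.array([8.0,  7.9,  7.5,  4.0,  0.0])   # module current samples [A]
n_series, n_parallel = 20, 5

v_array = np.linspace(0.0, 30.0 * n_series, 7)
# scale voltage down to one module, interpolate, scale current up by strings
i_array = n_parallel * np.interp(v_array / n_series, v_mod, i_mod)
for v, i in zip(v_array, i_array):
    print(f"{v:7.1f} V  {i:6.2f} A")
```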
Aviation Careers Series: Airline Non-Flying Careers
DOT National Transportation Integrated Search
1996-01-01
TRAVLINK demonstrated the use of Automatic Vehicle Location (AVL), Computer-Aided Dispatch (CAD), and Automatic Vehicle Identification (AVI) systems on Metropolitan Council Transit Operations (MCTO) buses in Minneapolis, Minnesota and western suburbs,...
Computed tomography image-guided surgery in complex acetabular fractures.
Brown, G A; Willis, M C; Firoozbakhsh, K; Barmada, A; Tessman, C L; Montgomery, A
2000-01-01
Eleven complex acetabular fractures in 10 patients were treated by open reduction with internal fixation incorporating computed tomography image guided software intraoperatively. Each of the implants placed under image guidance was found to be accurate and without penetration of the pelvis or joint space. The setup time for the system was minimal. Accuracy in the range of 1 mm was found when registration was precise (eight cases) and was in the range of 3.5 mm when registration was only approximate (three cases). Added benefits included reduced intraoperative fluoroscopic time, less need for more extensive dissection, and obviation of additional surgical approaches in some cases. Compared with a series of similar fractures treated before this image guided series, the reduction in operative time was significant. For patients with complex anterior and posterior combined fractures, the average operation times with and without application of three-dimensional imaging technique were, respectively, 5 hours 15 minutes and 6 hours 14 minutes, revealing 16% less operative time for those who had surgery using image guidance. In the single column fracture group, the operation time for those with three-dimensional imaging application, was 2 hours 58 minutes and for those with traditional surgery, 3 hours 42 minutes, indicating 20% less operative time for those with imaging modality. Intraoperative computed tomography guided imagery was found to be an accurate and suitable method for use in the operative treatment of complex acetabular fractures with substantial displacement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry
The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase. Tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort needed to complete a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors in metrics such as the highest and lowest voltage occurring on the feeder, the number of voltage regulator tap operations, and the total losses realized in the distribution circuit during a 1-year period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91% reduction with less than 5% error when predicting voltage regulator operations.
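A toy sketch of the variable-time-step idea (not NREL's solver; the irradiance profile, step sizes, and tolerance are illustrative): march through the input data with a coarse step, dropping to fine resolution only when the driving quantity changes faster than a tolerance, so quiet periods cost few power-flow solves.

```python
import math

def pv_output(t_min):
    """Stand-in daily irradiance profile (zero at night)."""
    return max(0.0, math.sin(2 * math.pi * (t_min % 1440) / 1440))

def qsts(t_end=1440, coarse=60, fine=5, tol=0.05):
    solves, t = 0, 0
    last = pv_output(0)
    while t < t_end:
        nxt = pv_output(t + coarse)
        step = coarse if abs(nxt - last) < tol else fine   # refine only when needed
        t += step
        last = pv_output(t)
        solves += 1                     # one power-flow solve per accepted step
    return solves

print("power-flow solves:", qsts())    # far fewer than a fixed fine step would need
```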
Computing Lives And Reliabilities Of Turboprop Transmissions
NASA Technical Reports Server (NTRS)
Coy, J. J.; Savage, M.; Radil, K. C.; Lewicki, D. G.
1991-01-01
Computer program PSHFT calculates the lifetimes of a variety of aircraft transmissions. It consists of a main program, a series of subroutines applying to specific configurations, generic subroutines for analysis of component properties, subroutines for analysis of the system, and a common block. The main program selects the routines used in the analysis and causes them to operate in the desired sequence. The configuration-specific subroutines read in the configuration data, perform force and life analyses for the components (with the help of the generic component-property-analysis subroutines), fill the property array, call the system-analysis routines, and finally print out the results of the analysis for the system and its components. Written in FORTRAN 77 (IV).
Monopole operators and Hilbert series of Coulomb branches of 3d N = 4 gauge theories
NASA Astrophysics Data System (ADS)
Cremonesi, Stefano; Hanany, Amihay; Zaffaroni, Alberto
2014-01-01
This paper addresses a long standing problem - to identify the chiral ring and moduli space (i.e. as an algebraic variety) on the Coulomb branch of an N = 4 superconformal field theory in 2+1 dimensions. Previous techniques involved a computation of the metric on the moduli space and/or mirror symmetry. These methods are limited to sufficiently small moduli spaces, with enough symmetry, or to Higgs branches of sufficiently small gauge theories. We introduce a simple formula for the Hilbert series of the Coulomb branch, which applies to any good or ugly three-dimensional N = 4 gauge theory. The formula counts monopole operators which are dressed by classical operators, the Casimir invariants of the residual gauge group that is left unbroken by the magnetic flux. We apply our formula to several classes of gauge theories. Along the way we make various tests of mirror symmetry, successfully comparing the Hilbert series of the Coulomb branch with the Hilbert series of the Higgs branch of the mirror theory.
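Schematically, the monopole formula described in the abstract takes the form (conventions, e.g. the power of t, vary between references):

$$ H_G(t) \;=\; \sum_{m} t^{2\Delta(m)}\, P_G(t;m), \qquad \Delta(m) \;=\; -\sum_{\alpha \in \Delta_+} |\alpha(m)| \;+\; \frac{1}{2}\sum_{i} \sum_{\rho \in R_i} |\rho(m)|, $$

where the sum runs over magnetic charges m modulo the Weyl group, Δ(m) is the dimension of the bare monopole operator (α ranges over the positive roots and ρ over the weights of the matter representations R_i), and P_G(t; m) is the dressing factor counting Casimir invariants of the residual gauge group left unbroken by the flux m.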
Grigoryeva, Lyudmila; Henriques, Julie; Larger, Laurent; Ortega, Juan-Pablo
2014-07-01
Reservoir computing is a recently introduced machine learning paradigm that has already shown excellent performances in the processing of empirical data. We study a particular kind of reservoir computers called time-delay reservoirs that are constructed out of the sampling of the solution of a time-delay differential equation and show their good performance in the forecasting of the conditional covariances associated to multivariate discrete-time nonlinear stochastic processes of VEC-GARCH type as well as in the prediction of factual daily market realized volatilities computed with intraday quotes, using as training input daily log-return series of moderate size. We tackle some problems associated to the lack of task-universality for individually operating reservoirs and propose a solution based on the use of parallel arrays of time-delay reservoirs. Copyright © 2014 Elsevier Ltd. All rights reserved.
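A minimal echo-state-style sketch of the reservoir idea (a discrete random-network analogue, not the paper's time-delay reservoirs; the sizes, scalings, and ridge parameter are illustrative): a fixed random recurrent layer expands the input series into rich states, and only a linear readout is trained.

```python
import numpy as np

rng = np.random.default_rng(1)
u = np.sin(0.1 * np.arange(400)) + 0.05 * rng.standard_normal(400)

n_res = 100
W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))        # spectral radius < 1 for stability

states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for k, uk in enumerate(u):
    x = np.tanh(W @ x + W_in * uk)               # untrained random reservoir update
    states[k] = x

# train only the linear readout (ridge regression) to predict u[k+1] from state[k]
X, y = states[100:-1], u[101:]                   # discard the initial transient
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print("one-step forecast error:", np.abs(X @ w_out - y).mean())
```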
NASA Astrophysics Data System (ADS)
Hegedűs, Árpád
2018-03-01
In this paper, using the light-cone lattice regularization, we compute the finite volume expectation values of the composite operator \\overline{Ψ}Ψ between pure fermion states in the Massive Thirring Model. In the light-cone regularized picture, this expectation value is related to 2-point functions of lattice spin operators being located at neighboring sites of the lattice. The operator \\overline{Ψ}Ψ is proportional to the trace of the stress-energy tensor. This is why the continuum finite volume expectation values can be computed also from the set of non-linear integral equations (NLIE) governing the finite volume spectrum of the theory. Our results for the expectation values coming from the computation of lattice correlators agree with those of the NLIE computations. Previous conjectures for the LeClair-Mussardo-type series representation of the expectation values are also checked.
NASA Technical Reports Server (NTRS)
Izygon, Michel E.
1992-01-01
The development process of the knowledge base for the generation of Test Libraries for Mission Operations Computer (MOC) Command Support focused on a series of information-gathering interviews. These knowledge capture sessions are supporting the development of a prototype for evaluating the capabilities of INTUIT on such an application. The prototype includes functions related to POCC (Payload Operations Control Center) processing. It prompts the end users for input through a series of panels and then generates the Meds associated with the initialization and update of hazardous command tables for a POCC Processing TLIB.
ERIC Educational Resources Information Center
Stevens, Mary Elizabeth
The series, of which this is the initial report, is intended to give a selective overview of research and development efforts and requirements in the computer and information sciences. The operations of information acquisition, sensing, and input to information processing systems are considered in generalized terms. Specific topics include but are…
ERIC Educational Resources Information Center
Possen, Uri M.; And Others
As an introduction, this paper presents a statement of the objectives of the university computing center (UCC) from the viewpoint of the university, the government, the typical user, and the UCC itself. The operating and financial structure of a UCC are described. Three main types of budgeting schemes are discussed: time allocation, pseudo-dollar,…
2013-03-15
The research described in this report was conducted as part of a series of previously...the office of a project sponsor at computer 11 and then sent through email from computer 11 to computer 12 over network 14. Sometimes, this file is...operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover
Behaviour of a series of reservoirs separated by drowned gates
NASA Astrophysics Data System (ADS)
Kolechkina, Alla; van Nooijen, Ronald
2017-04-01
Modern control systems tend to be based on computers and therefore to operate by sending commands to structures at given intervals (discrete time control system). Moreover, for almost all water management control systems there are practical lower limits on the time interval between structure adjustments and even between measurements. The water resource systems that are being controlled are physical systems whose state changes continuously. If we combine a continuously changing system and a discrete time controller we get a hybrid system. We use material from recent control theory literature to examine the behaviour of a series of reservoirs separated by drowned gates where the gates are under computer control.
2013-03-21
2.3 Time Series Response Data; 2.4 Comparison of Response...to evaluating the efficiency of the parameter estimates. In the past, the most popular form of response surface design used the D-optimality...as well. A model can refer to almost anything in math, statistics, or computer science. It can be any "physical, mathematical, or logical
Duality quantum algorithm efficiently simulates open quantum systems
Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu
2016-01-01
Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, where the evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating Hamiltonian evolution of an open quantum system. In contrast to unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. First, the query complexity of the algorithm is O(d³) in contrast to O(d⁴) in the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Second, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm. PMID:27464855
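The truncated-Taylor-series ingredient can be checked classically with numpy (a mathematical illustration only; the toy Hamiltonian and truncation order are illustrative, and nothing here implements the quantum circuit): exp(-iHt) is replaced by a finite linear combination of operator terms, the kind of weighted sum a duality quantum computer applies directly.

```python
import numpy as np

H = np.array([[1.0, 0.5], [0.5, -1.0]])      # toy Hermitian Hamiltonian
t, K = 0.4, 8                                 # evolution time, truncation order

# exact evolution operator via spectral decomposition
evals, vecs = np.linalg.eigh(H)
U_exact = vecs @ np.diag(np.exp(-1j * t * evals)) @ vecs.conj().T

# truncated Taylor series: sum_{k=0}^{K} (-iHt)^k / k!
U_taylor = np.zeros_like(U_exact)
term = np.eye(2, dtype=complex)
for k in range(K + 1):
    U_taylor += term
    term = term @ (-1j * t * H) / (k + 1)     # next term of the series

print("truncation error:", np.abs(U_taylor - U_exact).max())
```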
A GPU accelerated and error-controlled solver for the unbounded Poisson equation in three dimensions
NASA Astrophysics Data System (ADS)
Exl, Lukas
2017-12-01
An efficient solver for the three dimensional free-space Poisson equation is presented. The underlying numerical method is based on finite Fourier series approximation. While the error of all involved approximations can be fully controlled, the overall computation error is driven by the convergence of the finite Fourier series of the density. For smooth and fast-decaying densities the proposed method will be spectrally accurate. The method scales with O(N log N) operations, where N is the total number of discretization points in the Cartesian grid. The majority of the computational costs come from fast Fourier transforms (FFT), which makes it ideal for GPU computation. Several numerical computations on CPU and GPU validate the method and show efficiency and convergence behavior. Tests are performed using the Vienna Scientific Cluster 3 (VSC3). A free MATLAB implementation for CPU and GPU is provided to the interested community.
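The core spectral step can be sketched in a few lines of numpy (assumed available). For brevity this solves the periodic-box problem -Δu = f by dividing Fourier coefficients by |k|²; the paper's contribution is the error-controlled extension of this Fourier-series approach to free-space (unbounded) boundary conditions, which needs additional machinery not shown here.

```python
import numpy as np

n, L = 64, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
f = np.sin(X) * np.sin(2 * Y) * np.sin(3 * Z)     # rhs with a known exact solution

k = np.fft.fftfreq(n, d=1.0 / n)                  # integer wavenumbers
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
k2 = KX**2 + KY**2 + KZ**2
k2[0, 0, 0] = 1.0                                 # avoid 0/0 (f has zero mean)

u = np.real(np.fft.ifftn(np.fft.fftn(f) / k2))
u_exact = f / (1 + 4 + 9)                         # -Δu = f  =>  u = f/14 here
print("max error:", np.abs(u - u_exact).max())
```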
Books: Not just computers and companions
NASA Astrophysics Data System (ADS)
2009-08-01
Mary Brück's historical survey is the second RAS Series publication, and one that gives an insight into the different roles taken by women astronomers: original thinkers, computers, educators, and entrepreneurs. In this extract from the chapter "The Labyrinths of Heaven", we find out how Maria Short (c1788-1869) came to own and operate an observatory open to the public in the mid-19th century.
Gambini, R; Pullin, J
2000-12-18
We consider general relativity with a cosmological constant as a perturbative expansion around a completely solvable diffeomorphism-invariant field theory. This theory is the λ → ∞ limit of general relativity. This allows an explicit perturbative computational setup in which the quantum states of the theory and the classical observables can be explicitly computed. An unexpected relationship arises at a quantum level between the discrete spectrum of the volume operator and the allowed values of the cosmological constant.
Development and Evaluation of Vocational Competency Measures. Final Report.
ERIC Educational Resources Information Center
Chalupsky, Albert B.; And Others
A series of occupational competency tests representing all seven vocational education curriculum areas was developed, field tested, and validated. Seventeen occupations were selected for competency test development: agricultural chemicals applications technician, farm equipment mechanic, computer operator, word processing specialist, apparel…
Diagnostic ability of computed tomography using DentaScan software in endodontics: case reports.
Siotia, Jaya; Gupta, Sunil K; Acharya, Shashi R; Saraswathi, Vidya
2011-01-01
Radiographic examination is essential in diagnosis and treatment planning in endodontics. Conventional radiographs depict structures in two dimensions only. The ability to assess the area of interest in three dimensions is advantageous. Computed tomography is an imaging technique which produces three-dimensional images of an object by taking a series of two-dimensional sectional X-ray images. DentaScan is a computed tomography software program that allows the mandible and maxilla to be imaged in three planes: axial, panoramic, and cross-sectional. As computed tomography is used in endodontics, DentaScan can play a wider role in endodontic diagnosis. It provides valuable information in the assessment of the morphology of the root canal, diagnosis of root fractures, internal and external resorptions, pre-operative assessment of anatomic structures etc. The aim of this article is to explore the clinical usefulness of computed tomography and DentaScan in endodontic diagnosis, through a series of four cases of different endodontic problems.
Angiographic assessment of initial balloon angioplasty results.
Gardiner, Geoffrey A; Sullivan, Kevin L; Halpern, Ethan J; Parker, Laurence; Beck, Margaret; Bonn, Joseph; Levin, David C
2004-10-01
To determine the influence of three factors involved in the angiographic assessment of balloon angioplasty (interobserver variability, operator bias, and the definition used to determine success) on the primary (technical) results of angioplasty in the peripheral arteries. Percent stenosis in 107 lesions in lower-extremity arteries was graded by three independent, experienced vascular radiologists ("observers") before and after balloon angioplasty, and their estimates were compared with the initial interpretations reported by the physician performing the procedure ("operator") and with an automated quantitative computer analysis. Observer variability was measured with use of intraclass correlation coefficients and SD. Differences among the operator, observers, and the computer were analyzed with use of the Wilcoxon signed-rank test and analysis of variance. For each evaluator, the results in this series of lesions were interpreted with three different definitions of success. Estimation of residual stenosis varied by an average range of 22.76% with an average SD of 8.99. The intraclass correlation coefficients averaged 0.59 for residual stenosis after angioplasty for the three observers but decreased to 0.36 when the operator was included as the fourth evaluator. There was good to very good agreement among the three independent observers and the computer, but poor correlation with the operator (P = .001). The primary success rates for this series of lesions varied from a low of 47% to a high of 99%, depending solely on which definition of success was used. Significant differences among the operator, the three observers, and the computer were not present when the definition of success was based on less than 50% residual stenosis. Observer variability and bias in the subjective evaluation of peripheral angioplasty can have a significant influence on the reported initial success rates. This effect can be largely eliminated with the use of residual stenosis of less than 50% to define success. Otherwise, meaningful evaluation of angioplasty results will require independent panels of evaluators or computerized measurements.
NASA Technical Reports Server (NTRS)
1990-01-01
While a new technology called 'virtual reality' is still at the 'ground floor' level, one of its basic components, 3D computer graphics, is already in wide commercial use and expanding. Other components that permit a human operator to 'virtually' explore an artificial environment and to interact with it are being demonstrated routinely at Ames and elsewhere. Virtual reality might be defined as an environment capable of being virtually entered - telepresence, it is called - or interacted with by a human. The Virtual Interface Environment Workstation (VIEW) is a head-mounted stereoscopic display system in which the display may be an artificial computer-generated environment or a real environment relayed from remote video cameras. The operator can 'step into' this environment and interact with it. The DataGlove has a series of fiber optic cables and sensors that detect any movement of the wearer's fingers and transmit the information to a host computer; a computer-generated image of the hand will move exactly as the operator moves his gloved hand. With appropriate software, the operator can use the glove to interact with the computer scene by grasping an object. The DataSuit is a sensor-equipped full-body garment that greatly increases the sphere of performance for virtual reality simulations.
Cosmological coherent state expectation values in loop quantum gravity I. Isotropic kinematics
NASA Astrophysics Data System (ADS)
Dapor, Andrea; Liegener, Klaus
2018-07-01
This is the first paper of a series dedicated to loop quantum gravity (LQG) coherent states and cosmology. The concept is based on the effective dynamics program of Loop Quantum Cosmology, where the classical dynamics generated by the expectation value of the Hamiltonian on semiclassical states is found to be in agreement with the quantum evolution of such states. We ask the question of whether this expectation value agrees with the one obtained in the full theory. The answer is in the negative (Dapor and Liegener 2017, arXiv:1706.09833). This series of papers is dedicated to detailing the computations that lead to that surprising result. In the current paper, we construct the family of coherent states in LQG which represent flat (k = 0) Robertson–Walker spacetimes, and present the tools needed to compute expectation values of polynomial operators in holonomy and flux on such states. These tools will be applied to the LQG Hamiltonian operator (in Thiemann regularization) in the second paper of the series. The third paper will present an extension to cosmologies and a comparison with alternative regularizations of the Hamiltonian.
Mathematical Modeling of Diverse Phenomena
NASA Technical Reports Server (NTRS)
Howard, J. C.
1979-01-01
Tensor calculus is applied to the formulation of mathematical models of diverse phenomena. Aeronautics, fluid dynamics, and cosmology are among the areas of application. The feasibility of combining tensor methods and computer capability to formulate problems is demonstrated. The techniques described are an attempt to simplify the formulation of mathematical models by reducing the modeling process to a series of routine operations, which can be performed either manually or by computer.
Implementation of cryptographic hash function SHA256 in C++
NASA Astrophysics Data System (ADS)
Shrivastava, Akash
2012-02-01
This abstract describes an implementation of the SHA-256 secure hash algorithm in C++. SHA-2 is a strong hashing family used in almost all kinds of security applications. The algorithm consists of two phases: preprocessing and hash computation. Preprocessing involves padding a message, parsing the padded message into m-bit blocks, and setting initialization values to be used in the hash computation. The computation generates a message schedule from the padded message and uses that schedule, along with functions, constants, and word operations, to iteratively generate a series of hash values. The final hash value generated by the computation is the message digest. SHA-2 includes a significant number of changes from its predecessor, SHA-1, and consists of a set of four hash functions with digests of 224, 256, 384, or 512 bits. SHA-256 outputs a 256-bit message digest, with an internal state of 256 bits and a block size of 512 bits. The maximum message length is 2^64 - 1 bits, and the digest is computed over a series of 64 rounds consisting of several operations such as AND, OR, XOR, SHR, and ROT. The code provides a clear understanding of the hash algorithm and generates hash values to retrieve the message digest.
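As a hedged illustration of the preprocessing phase described above (in Python rather than the abstract's C++), the sketch below reproduces the FIPS 180-4 padding rule and checks the digest of a known test vector against a library implementation; the helper name is ours.

    import hashlib

    def sha256_pad(message: bytes) -> bytes:
        # FIPS 180-4 preprocessing: append 0x80, then zeros, then the original
        # length as a 64-bit big-endian integer, reaching a multiple of 512 bits.
        bit_len = 8 * len(message)
        padded = message + b"\x80"
        padded += b"\x00" * ((56 - len(padded)) % 64)
        return padded + bit_len.to_bytes(8, "big")

    msg = b"abc"
    assert len(sha256_pad(msg)) % 64 == 0     # 512-bit (64-byte) blocks
    print(hashlib.sha256(msg).hexdigest())
    # ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad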
FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting
Alomar, Miquel L.; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L.
2016-01-01
Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting. PMID:26880876
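The record above describes the reservoir computing approach only at a high level; the following is a minimal conventional (floating-point, software) echo state network sketch in Python with NumPy for one-step-ahead forecasting. The sizes, toy signal, and plain least-squares readout are our illustrative assumptions; the paper's stochastic-computing hardware design is not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)
    n_res = 100                                   # reservoir size (illustrative)
    W_in = rng.uniform(-0.5, 0.5, (n_res, 1))     # input weights (fixed, random)
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))    # recurrent weights (fixed, random)
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # scale spectral radius below 1

    def run_reservoir(u):
        # Drive the reservoir with the input series; only the readout is trained.
        x, states = np.zeros(n_res), []
        for u_t in u:
            x = np.tanh(W_in[:, 0] * u_t + W @ x)
            states.append(x.copy())
        return np.array(states)

    t = np.linspace(0.0, 60.0, 3000)
    s = np.sin(t) * np.sin(0.31 * t)              # toy signal to forecast
    X = run_reservoir(s[:-1])
    W_out = np.linalg.lstsq(X[200:], s[1:][200:], rcond=None)[0]  # linear readout
    pred = X @ W_out                              # one-step-ahead predictions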
A mathematical model of an active control landing gear for load control during impact and roll-out
NASA Technical Reports Server (NTRS)
Mcgehee, J. R.; Carden, H. D.
1976-01-01
A mathematical model of an active control landing gear (ACOLAG) was developed and programmed for operation on a digital computer. The mathematical model includes theoretical subsonic aerodynamics; first-mode wing bending and torsional characteristics; oleo-pneumatic shock strut with fit and binding friction; closed-loop, series-hydraulic control; empirical tire force-deflection characteristics; antiskid braking; and sinusoidal or random runway roughness. The mathematical model was used to compute the loads and motions for a simulated vertical drop test and a simulated landing impact of a conventional (passive) main landing gear designed for a 2268-kg (5000-lbm) class airplane. Computations were also made for a simply modified version of the passive gear including a series-hydraulic active control system. Comparison of computed results for the passive gear with experimental data shows that the active control landing gear analysis is valid for predicting the loads and motions of an airplane during a symmetrical landing. Computed results for the series-hydraulic active control in conjunction with the simply modified passive gear show that 20- to 30-percent reductions in wing force, relative to those occurring with the modified passive gear, can be obtained during the impact phase of the landing. These reductions in wing force could result in substantial increases in fatigue life of the structure.
DNA-programmed dynamic assembly of quantum dots for molecular computation.
He, Xuewen; Li, Zhi; Chen, Muzi; Ma, Nan
2014-12-22
Despite the widespread use of quantum dots (QDs) for biosensing and bioimaging, QD-based bio-interfaceable and reconfigurable molecular computing systems have not yet been realized. DNA-programmed dynamic assembly of multi-color QDs is presented for the construction of a new class of fluorescence resonance energy transfer (FRET)-based QD computing systems. A complete set of seven elementary logic gates (OR, AND, NOR, NAND, INH, XOR, XNOR) are realized using a series of binary and ternary QD complexes operated by strand displacement reactions. The integration of different logic gates into a half-adder circuit for molecular computation is also demonstrated. This strategy is quite versatile and straightforward for logical operations and would pave the way for QD-biocomputing-based intelligent molecular diagnostics. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Simple Multiplexing Hand-Held Control Unit
NASA Technical Reports Server (NTRS)
Hannaford, Blake
1989-01-01
Multiplexer consists of series of resistors, each shunted by single-pole, single-throw switch. User operates switches by pressing buttons or squeezing triggers. Prototype includes three switches operated successfully in over 200 hours of system operations. Number of switches accommodated determined by signal-to-noise ratio of current source, noise induced in control unit and cable, and number of bits in output of analog-to-digital converter. Because many computer-contolled robots have extra analog-to-digital channels, such multiplexer added at little extra cost.
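To make the operating principle concrete, here is a small illustrative calculation (not from the tech brief; component values are assumed) of the voltage a current source develops across binary-weighted series resistors as switches short them out:

    R = [1000.0, 2000.0, 4000.0]   # binary-weighted series resistors, ohms
    I = 0.001                      # current source, amps

    def adc_voltage(closed):
        # A closed switch shorts its resistor, removing it from the series sum.
        return I * sum(r for r, c in zip(R, closed) if not c)

    for state in range(2 ** len(R)):
        closed = [bool(state >> k & 1) for k in range(len(R))]
        print(closed, adc_voltage(closed), "V")   # 8 distinct, decodable levels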
Discrete Data Qualification System and Method Comprising Noise Series Fault Detection
NASA Technical Reports Server (NTRS)
Fulton, Christopher; Wong, Edmond; Melcher, Kevin; Bickford, Randall
2013-01-01
A Sensor Data Qualification (SDQ) function has been developed that allows the onboard flight computers on NASA's launch vehicles to determine the validity of sensor data to ensure that critical safety and operational decisions are not based on faulty sensor data. This SDQ function includes a novel noise series fault detection algorithm for qualification of the output data from LO2 and LH2 low-level liquid sensors. These sensors are positioned in a launch vehicle's propellant tanks in order to detect propellant depletion during a rocket engine's boost operating phase. This detection capability can prevent the catastrophic situation where the engine operates without propellant. The output from each LO2 and LH2 low-level liquid sensor is a discrete valued signal that is expected to be in either of two states, depending on whether the sensor is immersed (wet) or exposed (dry). Conventional methods for sensor data qualification, such as threshold limit checking, are not effective for this type of signal due to its discrete binary-state nature. To address this data qualification challenge, a noise computation and evaluation method, also known as a noise fault detector, was developed to detect unreasonable statistical characteristics in the discrete data stream. The method operates on a time series of discrete data observations over a moving window of data points and performs a continuous examination of the resulting observation stream to identify the presence of anomalous characteristics. If the method determines the existence of anomalous results, the data from the sensor is disqualified for use by other monitoring or control functions.
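A minimal sketch of the moving-window idea, assuming a simple transition-count statistic as the measure of unreasonable noise; the window length and threshold here are illustrative, not the qualified flight values:

    import numpy as np

    def noise_fault_flags(samples, window=50, max_transitions=5):
        # Flag windows in which a wet/dry (0/1) signal toggles more often than
        # physics allows; threshold-limit checks cannot catch this failure mode.
        samples = np.asarray(samples, dtype=int)
        flags = np.zeros(len(samples), dtype=bool)
        for i in range(window, len(samples) + 1):
            transitions = np.count_nonzero(np.diff(samples[i - window:i]))
            if transitions > max_transitions:
                flags[i - 1] = True    # disqualify the stream at this point
        return flags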
SPring-8 beamline control system.
Ohata, T; Konishi, H; Kimura, H; Furukawa, Y; Tamasaku, K; Nakatani, T; Tanabe, T; Matsumoto, N; Ishii, M; Ishikawa, T
1998-05-01
The SPring-8 beamline control system is now taking part in the control of the insertion device (ID), front end, beam transportation channel and all interlock systems of the beamline: it will supply a highly standardized environment of apparatus control for collaborative researchers. In particular, ID operation is very important in a third-generation synchrotron light source facility. It is also very important to consider the security system because the ID is part of the storage ring and is therefore governed by the synchrotron ring control system. The progress of computer networking systems and the technology of security control require the development of a highly flexible control system. An interlock system that is independent of the control system has increased the reliability. For the beamline control system the so-called standard model concept has been adopted. VME-bus (VME) is used as the front-end control system and a UNIX workstation as the operator console. CPU boards of the VME-bus are RISC processor-based board computers operated by a LynxOS-based HP-RT real-time operating system. The workstation and the VME are linked to each other by a network, and form the distributed system. The HP 9000/700 series with HP-UX and the HP 9000/743rt series with HP-RT are used. All the controllable apparatus may be operated from any workstation.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-10
... design feature associated with the architecture and connectivity capabilities of the airplanes' computer ... vulnerabilities to the airplanes' systems. The proposed network architecture includes the following connectivity: ... operator business and administrative support systems, and 3. passenger entertainment systems, and access by ...
NASA Astrophysics Data System (ADS)
Dudas, Illes; Berta, Miklos; Cser, Istvan
1998-12-01
Modern equipment for producing rotational parts in small series includes lathe centers and CNC grinding machines with a high concentration of manufacturing operations. These machine tools can produce parts that meet requirements of increased accuracy and surface quality. Lathe centers, which combine the manufacturing procedures of lathes using stationary tools with those of drilling-milling machine tools using rotating tools, can also produce the non-rotational surfaces of rotational parts. The high concentration of manufacturing operations makes it necessary to plan and program the measuring, monitoring, and quality control steps into the technological process during the manufacturing operation. Taking into account the technological possibilities of lathe centers, the scope of computer-aided process-planning duties therefore increases significantly. An obvious requirement is that the descriptions of the prefabricated parts and the ready-made parts be given only once. Starting from these considerations, we have been developing a process-planning system for bodies of revolution based on the GTIPROG/EC system, which is used for programming lathe centers. Our paper presents the results of this development and the problems encountered.
Time series modeling of human operator dynamics in manual control tasks
NASA Technical Reports Server (NTRS)
Biezad, D. J.; Schmidt, D. K.
1984-01-01
A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency responses of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that has not been previously modeled to demonstrate the strengths of the method.
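As an illustration of the kind of model such an identification yields (not the authors' exact multi-channel estimator), here is a minimal first-order ARX fit in Python with NumPy, from which an operator frequency response can be read off; the model order and sample time are assumptions:

    import numpy as np

    def fit_arx(u, y):
        # Fit y[t] = a1*y[t-1] + b1*u[t-1] by ordinary least squares.
        A = np.column_stack([y[:-1], u[:-1]])
        a1, b1 = np.linalg.lstsq(A, y[1:], rcond=None)[0]
        return a1, b1

    def frequency_response(a1, b1, w, dt=0.01):
        # H(z) = b1 / (z - a1) evaluated on the unit circle, z = exp(j*w*dt).
        z = np.exp(1j * np.asarray(w) * dt)
        return b1 / (z - a1)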
Preliminary Evaluation of MapReduce for High-Performance Climate Data Analysis
NASA Technical Reports Server (NTRS)
Duffy, Daniel Q.; Schnase, John L.; Thompson, John H.; Freeman, Shawn M.; Clune, Thomas L.
2012-01-01
MapReduce is an approach to high-performance analytics that may be useful to data intensive problems in climate research. It offers an analysis paradigm that uses clusters of computers and combines distributed storage of large data sets with parallel computation. We are particularly interested in the potential of MapReduce to speed up basic operations common to a wide range of analyses. In order to evaluate this potential, we are prototyping a series of canonical MapReduce operations over a test suite of observational and climate simulation datasets. Our initial focus has been on averaging operations over arbitrary spatial and temporal extents within Modern Era Retrospective-Analysis for Research and Applications (MERRA) data. Preliminary results suggest this approach can improve efficiencies within data intensive analytic workflows.
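For concreteness, a toy sketch of the canonical averaging operation in map/reduce form (plain Python, no Hadoop; the region keys and values are invented):

    from collections import defaultdict

    def map_phase(records):                  # record: (region, value)
        for region, value in records:
            yield region, (value, 1)         # emit partial (sum, count) pairs

    def reduce_phase(pairs):
        acc = defaultdict(lambda: [0.0, 0])
        for region, (s, c) in pairs:
            acc[region][0] += s
            acc[region][1] += c
        return {r: s / c for r, (s, c) in acc.items()}

    data = [("tropics", 27.1), ("tropics", 26.8), ("poles", -30.2)]
    print(reduce_phase(map_phase(data)))     # per-region means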
Validating an operational physical method to compute surface radiation from geostationary satellites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Manajit; Dhere, Neelkanth G.; Wohlgemuth, John H.
We developed models to compute global horizontal irradiance (GHI) and direct normal irradiance (DNI) over the last three decades. These models can be classified as empirical or physical based on the approach. Empirical models relate ground-based observations to satellite measurements and use these relations to compute surface radiation. Physical models consider the physics behind the radiation received at the satellite and create retrievals to estimate surface radiation. Furthermore, while empirical methods have traditionally been used for computing surface radiation for the solar energy industry, the advent of faster computing has made operational physical models viable. The Global Solar Insolation Project (GSIP) is a physical model that computes DNI and GHI using the visible and infrared channel measurements from a weather satellite. GSIP uses a two-stage scheme that first retrieves cloud properties and then uses those properties in a radiative transfer model to calculate GHI and DNI. Developed for polar-orbiting satellites, GSIP has been adapted to NOAA's Geostationary Operational Environmental Satellite series and can run operationally at high spatial resolutions. Our method holds the possibility of creating high-quality datasets of GHI and DNI for use by the solar energy industry. We present an outline of the methodology and results from running the model, as well as a validation study using ground-based instruments.
Power combining in an array of microwave power rectifiers
NASA Technical Reports Server (NTRS)
Gutmann, R. J.; Borrego, J. M.
1979-01-01
This work analyzes the resultant efficiency degradation when identical rectifiers operate at different RF power levels as caused by the power beam taper. Both a closed-form analytical circuit model and a detailed computer-simulation model are used to obtain the output dc load line of the rectifier. The efficiency degradation is nearly identical with series and parallel combining, and the closed-form analytical model provides results which are similar to the detailed computer-simulation model.
The analysis of control trajectories using symbolic and database computing
NASA Technical Reports Server (NTRS)
Grossman, Robert
1995-01-01
This final report comprises the formal semi-annual status reports for this grant for the periods June 30-December 31, 1993, January 1-June 30, 1994, and June 1-December 31, 1994. The research supported by this grant is broadly concerned with the symbolic computation, mixed numeric-symbolic computation, and database computation of trajectories of dynamical systems, especially control systems. A review of work during the report period covers: trajectories and approximating series, the Cayley algebra of trees, actions of differential operators, geometrically stable integration algorithms, hybrid systems, trajectory stores, PTool, and other activities. A list of publications written during the report period is attached.
Results of the mission profile life test. [for J-series mercury ion engines
NASA Technical Reports Server (NTRS)
Bechtel, R. T.; Trump, G. E.; James, E. L.
1982-01-01
Seven J-series 30-cm-diameter thrusters have been tested in segments of up to 5,070 hr, for a total of 14,541 hr, in the Mission Profile Life Test facility. Test results have indicated the basic thruster design to be consistent with the lifetime goal of 15,000 hr at a 2-A beam. The only areas of concern identified which appear to require additional verification testing involve contamination of mercury propellant isolators, which may be due to facility constituents, and the ability of specially covered surfaces to contain sputtered material and prevent flake formation. The ability of the SCR series-resonant-inverter power processor to operate the J-series thruster, and autonomous computer control of the thruster/processor system, were demonstrated.
Improvements to the fastex flutter analysis computer code
NASA Technical Reports Server (NTRS)
Taylor, Ronald F.
1987-01-01
Modifications to the FASTEX flutter analysis computer code (UDFASTEX) are described. The objectives were to increase the problem-size capacity of FASTEX, to reduce run times by modification of the modal interpolation procedure, and to add new user features. All modifications to the program are operable on the VAX 11/700 series computers under the VMS operating system. Interfaces were provided to aid in the inclusion of alternate aerodynamic and flutter eigenvalue calculations. Plots can be made of the flutter velocity, damping, and frequency data. A preliminary capability was also developed to plot contours of unsteady pressure amplitude and phase. The relevant equations of motion, modal interpolation procedures, and control system considerations are described and software developments are summarized. Additional information documenting input instructions, procedures, and details of the plate spline algorithm is found in the appendices.
Spectral resolution of SU(3)-invariant solutions of the Yang-Baxter equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alishauskas, S.I.; Kulish, P.P.
1986-11-20
The spectral resolution of invariant R-matrices is computed on the basis of a solution of the defining equation. Multiple representations in the Clebsch-Gordan series are considered by means of the classifying operator A: a linear combination of known operators of third and fourth degree in the group generators. The matrix elements of A in a nonorthonormal basis are found. Explicit expressions are presented for the spectral resolutions for a number of representations.
Performance evaluation of a six-axis generalized force-reflecting teleoperator
NASA Technical Reports Server (NTRS)
Hannaford, B.; Wood, L.; Guggisberg, B.; Mcaffee, D.; Zak, H.
1989-01-01
Work in real-time distributed computation and control has culminated in a prototype force-reflecting telemanipulation system having a dissimilar master (cable-driven, force-reflecting hand controller) and slave (PUMA 560 robot with custom controller), an extremely high sampling rate (1000 Hz), and a low loop computation delay (5 msec). In a series of experiments with this system and five trained test operators covering over 100 hours of teleoperation, performance was measured in a series of generic and application-driven tasks with and without force feedback, and with control shared between teleoperation and local sensor-referenced control. Measurements defining task performance included 100-Hz recording of six-axis force/torque information from the slave manipulator wrist, task completion time, and visual observation of predefined task errors. The tasks consisted of high-precision peg-in-hole insertion, mating of electrical connectors, Velcro attach/de-attach, and a twist-lock multi-pin connector. Each task was repeated three times under several operating conditions: normal bilateral telemanipulation, forward position control without force feedback, and shared control. In shared control, orientation was locally servo-controlled to comply with applied torques, while translation was under operator control. All performance measures improved as capability was added along a spectrum of capabilities ranging from pure position control through force-reflecting teleoperation and shared control. Performance was optimal for the bare-handed operator.
1984-12-01
3Com Corporation ....... A-18 Ethernet Controller Support . . . . . . A-19 Host Systems Support . . . . . . . . . A-20 Personal Computers Support...A-23 VAX EtherSeries Software 0 * A-23 Network Research Corporation . o o o . o A-24 File Transfer Service . . . . o A-25 Virtual Terminal Service 0...Control office is planning to acquire a Digital Equipment Corporation VAX 11/780 mainframe computer with the Unix Berkeley 4.2BSD operating system. They
Operator bases, S-matrices, and their partition functions
NASA Astrophysics Data System (ADS)
Henning, Brian; Lu, Xiaochuan; Melia, Tom; Murayama, Hitoshi
2017-10-01
Relativistic quantum systems that admit scattering experiments are quantitatively described by effective field theories, where S-matrix kinematics and symmetry considerations are encoded in the operator spectrum of the EFT. In this paper we use the S-matrix to derive the structure of the EFT operator basis, providing complementary descriptions in (i) position space utilizing the conformal algebra and cohomology and (ii) momentum space via an algebraic formulation in terms of a ring of momenta with kinematics implemented as an ideal. These frameworks systematically handle redundancies associated with equations of motion (on-shell) and integration by parts (momentum conservation). We introduce a partition function, termed the Hilbert series, to enumerate the operator basis — correspondingly, the S-matrix — and derive a matrix integral expression to compute the Hilbert series. The expression is general, easily applied in any spacetime dimension, with arbitrary field content and (linearly realized) symmetries. In addition to counting, we discuss construction of the basis. Simple algorithms follow from the algebraic formulation in momentum space. We explicitly compute the basis for operators involving up to n = 5 scalar fields. This construction universally applies to fields with spin, since the operator basis for scalars encodes the momentum dependence of n-point amplitudes. We discuss in detail the operator basis for non-linearly realized symmetries. In the presence of massless particles, there is freedom to impose additional structure on the S-matrix in the form of soft limits. The most naïve implementation for massless scalars leads to the operator basis for pions, which we confirm using the standard CCWZ formulation for non-linear realizations. Although primarily discussed in the language of EFT, some of our results — conceptual and quantitative — may be of broader use in studying conformal field theories as well as the AdS/CFT correspondence.
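For a finite symmetry group, the matrix integral mentioned above reduces to the classical Molien series; as a self-contained illustration of how such a partition function counts invariants (this is the textbook construction, not the paper's EFT computation), the following Python/SymPy sketch evaluates H(t) = (1/|G|) * sum_g 1/det(1 - t g) for Z_2 acting on two scalars:

    import sympy as sp

    t = sp.symbols("t")
    # Z_2 acting on two scalars (x, y) by a simultaneous sign flip.
    group = [sp.eye(2), -sp.eye(2)]
    H = sp.Rational(1, len(group)) * sum(
        1 / (sp.eye(2) - t * g).det() for g in group
    )
    # 1 + 3*t**2 + 5*t**4 + ...: counts of even-degree invariant monomials.
    print(sp.series(sp.simplify(H), t, 0, 6))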
Computer-assisted navigation in orthopedic surgery.
Mavrogenis, Andreas F; Savvidou, Olga D; Mimidis, George; Papanastasiou, John; Koulalis, Dimitrios; Demertzis, Nikolaos; Papagelopoulos, Panayiotis J
2013-08-01
Computer-assisted navigation has a role in some orthopedic procedures. It allows the surgeons to obtain real-time feedback and offers the potential to decrease intra-operative errors and optimize the surgical result. Computer-assisted navigation systems can be active or passive. Active navigation systems can either perform surgical tasks or prohibit the surgeon from moving past a predefined zone. Passive navigation systems provide intraoperative information, which is displayed on a monitor, but the surgeon is free to make any decisions he or she deems necessary. This article reviews the available types of computer-assisted navigation, summarizes the clinical applications and reviews the results of related series using navigation, and informs surgeons of the disadvantages and pitfalls of computer-assisted navigation in orthopedic surgery. Copyright 2013, SLACK Incorporated.
Reactor transient control in support of PFR/TREAT TUCOP experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burrows, D.R.; Larsen, G.R.; Harrison, L.J.
1984-01-01
Unique energy deposition and experiment control requirements posed by the PFR/TREAT series of transient undercooling/overpower (TUCOP) experiments resulted in equally unique TREAT reactor operations. New reactor control computer algorithms were written and used with the TREAT reactor control computer system to perform such functions as early power burst generation (based on test train flow conditions), burst generation produced by a step insertion of reactivity following a controlled power ramp, and shutdown (SCRAM) initiators based on both test train conditions and energy deposition. Specialized hardware was constructed to simulate test train inputs to the control computer system so that computer algorithms could be tested in real time without irradiating the experiment.
Linear solver performance in elastoplastic problem solution on GPU cluster
NASA Astrophysics Data System (ADS)
Khalevitsky, Yu. V.; Konovalov, A. V.; Burmasheva, N. V.; Partin, A. S.
2017-12-01
Applying the finite element method to severe plastic deformation problems involves solving linear equation systems. While the solution procedure is relatively hard to parallelize and computationally intensive by itself, a long series of large-scale systems needs to be solved for each problem. When dealing with fine computational meshes, such as in the simulations of three-dimensional metal matrix composite microvolume deformation, tens and hundreds of hours may be needed to complete the whole solution procedure, even using modern supercomputers. In general, one of the preconditioned Krylov subspace methods is used in a linear solver for such problems. The convergence of a method depends strongly on the operator spectrum of the problem's stiffness matrix. In order to choose the appropriate method, a series of computational experiments is used; different methods may be preferable on different computational systems for the same problem. In this paper we present experimental data obtained by solving linear equation systems from an elastoplastic problem on a GPU cluster. The data can be used to substantiate the choice of the appropriate method for a linear solver to use in severe plastic deformation simulations.
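A minimal CPU-side sketch of such a method-selection experiment, assuming SciPy 1.12 or later (where the tolerance keyword is rtol) and an invented diagonally dominant test matrix in place of an actual elastoplastic stiffness matrix:

    import time
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import cg, bicgstab

    n = 100_000                 # diagonally dominant tridiagonal test operator
    A = diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    for name, solver in [("CG", cg), ("BiCGSTAB", bicgstab)]:
        t0 = time.perf_counter()
        x, info = solver(A, b, rtol=1e-8, maxiter=5000)
        print(name, "converged" if info == 0 else "failed",
              round(time.perf_counter() - t0, 3), "s")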
WOLF; automatic typing program
Evenden, G.I.
1982-01-01
A FORTRAN IV program for the Hewlett-Packard 1000 series computer provides automatic typing operations and, when employed with the manufacturer's text editor, can provide a system that greatly facilitates preparation of reports, letters, and other text. The input text and embedded control data can perform nearly all of the functions of a typist. A few of the features available are centering, titles, footnotes, indentation, page numbering (including Roman numerals), automatic paragraphing, and two forms of tab operations. This documentation contains both user and technical descriptions of the program.
2005-11-01
... interest has a large variance, so that excessive run lengths are required. This naturally invokes the interest in searches for effective variance ... years since World War II the nature, organization, and mode of operation of command organizations within the Army has remained virtually ... Laboratory began a series of studies and projects focused on investigating the nature of military command and control (C2) operations. The questions ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Leiph
Although using standard Taylor series coefficients for finite-difference operators is optimal in the sense that, in the limit of infinitesimal space and time discretization, the solution approaches the correct analytic solution of the acousto-dynamic system of differential equations, other finite-difference operators may provide optimal computational run time given certain error bounds or source-bandwidth constraints. This report describes the results of an investigation of alternative optimal finite-difference coefficients based on several optimization/accuracy scenarios and provides recommendations for minimizing run time while keeping the error within given bounds.
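For reference, the standard Taylor-series baseline can be generated mechanically; the sketch below (Python/NumPy, our own helper, not the report's code) solves the small Vandermonde system that defines the classical weights, which optimized coefficients of the kind the report studies would then perturb:

    import numpy as np
    from math import factorial

    def fd_weights(offsets, deriv):
        # Match Taylor expansions: sum_j w_j * s_j**i / i! = delta(i, deriv),
        # i.e. solve a small Vandermonde system for the stencil weights.
        n = len(offsets)
        V = np.vander(np.asarray(offsets, float), n, increasing=True).T
        V /= np.array([factorial(i) for i in range(n)])[:, None]
        e = np.zeros(n)
        e[deriv] = 1.0
        return np.linalg.solve(V, e)

    print(fd_weights([-1, 0, 1], 2))         # [1, -2, 1]: classic 2nd derivative
    print(fd_weights([-2, -1, 0, 1, 2], 2))  # 4th-order central stencil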
Cloud Computing for Mission Design and Operations
NASA Technical Reports Server (NTRS)
Arrieta, Juan; Attiyah, Amy; Beswick, Robert; Gerasimantos, Dimitrios
2012-01-01
The space mission design and operations community already recognizes the value of cloud computing and virtualization. However, natural and valid concerns, like security, privacy, up-time, and vendor lock-in, have prevented a more widespread and expedited adoption into official workflows. In the interest of alleviating these concerns, we propose a series of guidelines for internally deploying a resource-oriented hub of data and algorithms. These guidelines provide a roadmap for implementing an architecture inspired by the cloud computing model: associative, elastic, semantical, interconnected, and adaptive. The architecture can be summarized as exposing data and algorithms as resource-oriented Web services, coordinated via messaging, and running on virtual machines; it is simple, and based on widely adopted standards, protocols, and tools. The architecture may help reduce common sources of complexity intrinsic to data-driven, collaborative interactions and, most importantly, it may provide the means for teams and agencies to evaluate the cloud computing model in their specific context, with minimal infrastructure changes, and before committing to a specific cloud services provider.
Programs To Optimize Spacecraft And Aircraft Trajectories
NASA Technical Reports Server (NTRS)
Brauer, G. L.; Petersen, F. M.; Cornick, D.E.; Stevenson, R.; Olson, D. W.
1994-01-01
POST/6D POST is set of two computer programs providing ability to target and optimize trajectories of powered or unpowered spacecraft or aircraft operating at or near rotating planet. POST treats point-mass, three-degree-of-freedom case. 6D POST treats more-general rigid-body, six-degree-of-freedom (with point masses) case. Used to solve variety of performance, guidance, and flight-control problems for atmospheric and orbital vehicles. Applications include computation of performance or capability of vehicle in ascent, or orbit, and during entry into atmosphere, simulation and analysis of guidance and flight-control systems, dispersion-type analyses and analyses of loads, general-purpose six-degree-of-freedom simulation of controlled and uncontrolled vehicles, and validation of performance in six degrees of freedom. Written in FORTRAN 77 and C language. Two machine versions available: one for SUN-series computers running SunOS(TM) (LAR-14871) and one for Silicon Graphics IRIS computers running IRIX(TM) operating system (LAR-14869).
InPRO: Automated Indoor Construction Progress Monitoring Using Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Hamledari, Hesam
In this research, an envisioned automated intelligent robotic solution for automated indoor data collection and inspection that employs a series of unmanned aerial vehicles (UAVs), entitled "InPRO", is presented. InPRO consists of four stages, namely: 1) automated path planning; 2) autonomous UAV-based indoor inspection; 3) automated computer vision-based assessment of progress; and 4) automated updating of 4D building information models (BIM). The work presented in this thesis addresses the third stage of InPRO. A series of computer vision-based methods that automate the assessment of construction progress using images captured at indoor sites are introduced. The proposed methods employ computer vision and machine learning techniques to detect the components of under-construction indoor partitions. In particular, framing (studs), insulation, electrical outlets, and different states of drywall sheets (installing, plastering, and painting) are automatically detected using digital images. High accuracy rates, real-time performance, and operation without a priori information are indicators of the methods' promising performance.
A probabilistic method for constructing wave time-series at inshore locations using model scenarios
Long, Joseph W.; Plant, Nathaniel G.; Dalyander, P. Soupy; Thompson, David M.
2014-01-01
Continuous time-series of wave characteristics (height, period, and direction) are constructed using a base set of model scenarios and simple probabilistic methods. This approach utilizes an archive of computationally intensive, highly spatially resolved numerical wave model output to develop time-series of historical or future wave conditions without performing additional, continuous numerical simulations. The archive of model output contains wave simulations from a set of model scenarios derived from an offshore wave climatology. Time-series of wave height, period, direction, and associated uncertainties are constructed at locations included in the numerical model domain. The confidence limits are derived using statistical variability of oceanographic parameters contained in the wave model scenarios. The method was applied to a region in the northern Gulf of Mexico and assessed using wave observations at 12 m and 30 m water depths. Prediction skill for significant wave height is 0.58 and 0.67 at the 12 m and 30 m locations, respectively, with similar performance for wave period and direction. The skill of this simplified, probabilistic time-series construction method is comparable to existing large-scale, high-fidelity operational wave models but provides higher spatial resolution output at low computational expense. The constructed time-series can be developed to support a variety of applications including climate studies and other situations where a comprehensive survey of wave impacts on the coastal area is of interest.
An accurate boundary element method for the exterior elastic scattering problem in two dimensions
NASA Astrophysics Data System (ADS)
Bao, Gang; Xu, Liwei; Yin, Tao
2017-11-01
This paper is concerned with a Galerkin boundary element method solving the two dimensional exterior elastic wave scattering problem. The original problem is first reduced to the so-called Burton-Miller [1] boundary integral formulation, and essential mathematical features of its variational form are discussed. In numerical implementations, a newly-derived and analytically accurate regularization formula [2] is employed for the numerical evaluation of hyper-singular boundary integral operator. A new computational approach is employed based on the series expansions of Hankel functions for the computation of weakly-singular boundary integral operators during the reduction of corresponding Galerkin equations into a discrete linear system. The effectiveness of proposed numerical methods is demonstrated using several numerical examples.
NASA Technical Reports Server (NTRS)
Fijany, Amir (Inventor); Bejczy, Antal K. (Inventor)
1993-01-01
This is a real-time robotic controller and simulator which is a MIMD-SIMD parallel architecture for interfacing with an external host computer and providing a high degree of parallelism in computations for robotic control and simulation. It includes a host processor for receiving instructions from the external host computer and for transmitting answers to the external host computer. There are a plurality of SIMD microprocessors, each SIMD processor being a SIMD parallel processor capable of exploiting fine grain parallelism and further being able to operate asynchronously to form a MIMD architecture. Each SIMD processor comprises a SIMD architecture capable of performing two matrix-vector operations in parallel while fully exploiting parallelism in each operation. There is a system bus connecting the host processor to the plurality of SIMD microprocessors and a common clock providing a continuous sequence of clock pulses. There is also a ring structure interconnecting the plurality of SIMD microprocessors and connected to the clock for providing the clock pulses to the SIMD microprocessors and for providing a path for the flow of data and instructions between the SIMD microprocessors. The host processor includes logic for controlling the RRCS by interpreting instructions sent by the external host computer, decomposing the instructions into a series of computations to be performed by the SIMD microprocessors, using the system bus to distribute associated data among the SIMD microprocessors, and initiating activity of the SIMD microprocessors to perform the computations on the data by procedure call.
Computer model of Raritan River Basin water-supply system in central New Jersey
Dunne, Paul; Tasker, Gary D.
1996-01-01
This report describes a computer model of the Raritan River Basin water-supply system in central New Jersey. The computer model provides a technical basis for evaluating the effects of alternative patterns of operation of the Raritan River Basin water-supply system during extended periods of below-average precipitation. The computer model is a continuity-accounting model consisting of a series of interconnected nodes. At each node, the inflow volume, outflow volume, and change in storage are determined and recorded for each month. The model runs with a given set of operating rules and water-use requirements including releases, pumpages, and diversions. The model can be used to assess the hypothetical performance of the Raritan River Basin water-supply system in past years under alternative sets of operating rules. It also can be used to forecast the likelihood of specified outcomes, such as the depletion of reservoir contents below a specified threshold or of streamflows below statutory minimum passing flows, for a period of up to 12 months. The model was constructed on the basis of current reservoir capacities and the natural, unregulated monthly runoff values recorded at U.S. Geological Survey streamflow-gaging stations in the basin.
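To illustrate the continuity-accounting structure, here is a single-node monthly balance in Python; the rules, numbers, and names are invented for illustration, and the actual model's operating rules, pumpages, and diversions are considerably more involved:

    def simulate_node(inflows, storage0, capacity, demand, min_passing_flow):
        # One-node monthly continuity accounting: inflow, release, spill, storage.
        storage, log = storage0, []
        for q_in in inflows:
            target = demand + min_passing_flow
            release = min(target, storage + q_in)     # shortage if supply runs out
            storage = storage + q_in - release
            spill = max(0.0, storage - capacity)      # excess passes downstream
            storage -= spill
            log.append({"storage": storage, "release": release, "spill": spill})
        return log

    print(simulate_node([10.0, 2.0, 1.0], storage0=5.0, capacity=12.0,
                        demand=3.0, min_passing_flow=1.0))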
ERIC Educational Resources Information Center
Hulse, Ira; And Others
One of six documents describing the Management Information System for Vocational Education (MISVE), this document is intended for MISVE managers and electronic data processing (EDP) operations staff who would be responsible for the implementation and maintenance of the MISVE on the computer. (MISVE was designed to provide users with an advanced…
ERIC Educational Resources Information Center
Muiznieks, Viktors
This report provides a technical description and operating guidelines for the IMSAI 8080 microcomputer in the Department of Secondary Education at the University of Illinois. An overview of the microcomputer highlights the register array, address logic, arithmetic and logical unit, instruction register and control section, and the data bus buffer.…
NASA Technical Reports Server (NTRS)
1998-01-01
Stirling Technology Company developed the components for its BeCOOL line of Cryocoolers with the help of a series of NASA SBIRs (Small Business Innovative Research), through Goddard Space Flight Center and Marshall Space Flight Center. Features include a hermetically sealed design, compact size, and silent operation. The company has already placed several units with commercial customers for computer applications and laboratory use.
Thermodynamics of natural selection III: Landauer's principle in computation and chemistry.
Smith, Eric
2008-05-21
This is the third in a series of three papers devoted to energy flow and entropy changes in chemical and biological processes, and their relations to the thermodynamics of computation. The previous two papers have developed reversible chemical transformations as idealizations for studying physiology and natural selection, and derived bounds from the second law of thermodynamics, between information gain in an ensemble and the chemical work required to produce it. This paper concerns the explicit mapping of chemistry to computation, and particularly the Landauer decomposition of irreversible computations, in which reversible logical operations generating no heat are separated from heat-generating erasure steps which are logically irreversible but thermodynamically reversible. The Landauer arrangement of computation is shown to produce the same entropy-flow diagram as that of the chemical Carnot cycles used in the second paper of the series to idealize physiological cycles. The specific application of computation to data compression and error-correcting encoding also makes possible a Landauer analysis of the somewhat different problem of optimal molecular recognition, which has been considered as an information theory problem. It is shown here that bounds on maximum sequence discrimination from the enthalpy of complex formation, although derived from the same logical model as the Shannon theorem for channel capacity, arise from exactly the opposite model for erasure.
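As a numerical anchor for the Landauer decomposition discussed above, the standard bound (textbook physics, not a result of this paper) states that erasing one bit at temperature T dissipates at least k_B T ln 2:

    from math import log

    K_B = 1.380649e-23   # Boltzmann constant, J/K (exact, 2019 SI)

    def landauer_heat(bits_erased, temperature_kelvin):
        # Minimum dissipation for logically irreversible erasure.
        return bits_erased * K_B * temperature_kelvin * log(2)

    print(landauer_heat(1, 300.0))   # ~2.87e-21 J per bit at room temperature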
General-Purpose Serial Interface For Remote Control
NASA Technical Reports Server (NTRS)
Busquets, Anthony M.; Gupton, Lawrence E.
1990-01-01
Computer controls remote television camera. General-purpose controller developed to serve as interface between host computer and pan/tilt/zoom/focus functions on series of automated video cameras. Interface port based on 8251 programmable communications-interface circuit configured for tristated outputs, and connects controller system to any host computer with RS-232 input/output (I/O) port. Accepts byte-coded data from host, compares them with prestored codes in read-only memory (ROM), and closes or opens appropriate switches. Six output ports control opening and closing of as many as 48 switches. Operator controls remote television camera by speaking commands, in system including general-purpose controller.
Cutting tool form compensation system and method
Barkman, W.E.; Babelay, E.F. Jr.; Klages, E.J.
1993-10-19
A compensation system for a computer-controlled machining apparatus having a controller and including a cutting tool and a workpiece holder which are movable relative to one another along a preprogrammed path during a machining operation utilizes a camera and a vision computer for gathering information at a preselected stage of a machining operation relating to the actual shape and size of the cutting edge of the cutting tool and for altering the preprogrammed path in accordance with detected variations between the actual size and shape of the cutting edge and an assumed size and shape of the cutting edge. The camera obtains an image of the cutting tool against a background so that the cutting tool and background possess contrasting light intensities, and the vision computer utilizes the contrasting light intensities of the image to locate points therein which correspond to points along the actual cutting edge. Following a series of computations involving the determining of a tool center from the points identified along the tool edge, the results of the computations are fed to the controller where the preprogrammed path is altered as aforedescribed. 9 figures.
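The patent abstract does not specify how the tool center is computed from the detected edge points; one conventional possibility, sketched here in Python/NumPy with synthetic data and invented names, is an algebraic least-squares circle fit:

    import numpy as np

    def fit_tool_circle(xs, ys):
        # Algebraic (Kasa) least-squares circle fit: x^2 + y^2 = 2ax + 2by + c,
        # with radius r = sqrt(c + a^2 + b^2).
        A = np.column_stack([2 * xs, 2 * ys, np.ones(len(xs))])
        a, b, c = np.linalg.lstsq(A, xs**2 + ys**2, rcond=None)[0]
        return (a, b), np.sqrt(c + a**2 + b**2)

    theta = np.linspace(0.0, np.pi, 50)            # synthetic edge points
    xs, ys = 3 + 5 * np.cos(theta), 4 + 5 * np.sin(theta)
    print(fit_tool_circle(xs, ys))                 # ~((3.0, 4.0), 5.0)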
NASA Astrophysics Data System (ADS)
Olivares, M. A.; Guzman, C.; Rossel, V.; De La Fuente, A.
2013-12-01
Hydropower accounts for about 44% of installed capacity in Chile's Central Interconnected System, which serves most of the Chilean population. Hydropower reservoir projects can affect ecosystems by changing the hydrologic regime and water quality. Given their volume regulation capacity, low operation costs, and fast response to demand fluctuations, reservoir hydropower plants commonly operate on a load-following or hydropeaking scheme. This short-term operational pattern produces alterations in the hydrologic regime downstream of the reservoir. In the case of thermally stratified reservoirs, peaking operations can affect the thermal structure of the reservoir, as well as the thermal regime downstream. In this study, we assessed the subdaily hydrologic and thermal alteration downstream of Rapel reservoir in Central Chile for alternative operational scenarios, including a base case and several scenarios involving minimum instream flows (Qmin) and maximum hourly ramping rates (ΔQmax). Scenarios were simulated for the stratification seasons of summer 2009-2012 in a grid-wide short-term economic dispatch model which prescribes hourly power production by every power plant on a weekly horizon. Power time series are then translated into time series of turbined flows at each hydropower plant. Indicators of subdaily hydrologic alteration (SDHA) were computed for every scenario. Additionally, turbined flows were used as input data for a three-dimensional hydrodynamic model (CWR-ELCOM) of the reservoir, which simulated the vertical temperature profile in the reservoir and the outflow temperature. For the time series of outflow temperatures we computed several indicators of subdaily thermal alteration (SDTA). Operational constraints reduce the values of both the SDHA and SDTA indicators with respect to the base case. When the constraints are applied separately, the indicators of SDHA decrease as each type of constraint (Qmin or ΔQmax) becomes more stringent; however, ramping-rate constraints proved more effective than minimum instream flows, and combined constraints produced even better results. Results for the indicators of SDTA follow a trend similar to that of the SDHA: more restrictive operations result in lower values for the indicators, although the impact of the different constraint scenarios is smaller, as results look alike for all scenarios. Moreover, due to the mixing conditions associated with the operational schemes, mean temperatures increased with respect to the unconstrained case.
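As an illustration of one plausible SDHA-style metric (the study's actual indicator set is not given in the abstract; names and numbers here are invented), a maximum hourly ramping rate can be computed directly from an hourly release series:

    import numpy as np

    def max_ramping_rate(hourly_flow):
        # Largest hour-to-hour change in release, one common subdaily indicator.
        return float(np.max(np.abs(np.diff(hourly_flow))))

    q = np.array([80.0, 80.0, 200.0, 210.0, 205.0, 90.0, 85.0])  # peaking pattern
    print(max_ramping_rate(q))   # 120.0, which a DeltaQmax cap would constrain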
NASA Astrophysics Data System (ADS)
Lanari, Riccardo; Bonano, Manuela; Buonanno, Sabatino; Casu, Francesco; De Luca, Claudio; Fusco, Adele; Manunta, Michele; Manzo, Mariarosaria; Pepe, Antonio; Zinno, Ivana
2017-04-01
The SENTINEL-1 (S1) mission is designed to provide operational capability for continuous mapping of the Earth thanks to its two polar-orbiting satellites (SENTINEL-1A and B) performing C-band synthetic aperture radar (SAR) imaging. It is, indeed, characterized by enhanced revisit frequency, coverage and reliability for operational services and applications requiring long SAR data time series. Moreover, SENTINEL-1 is specifically oriented to interferometry applications, with stringent requirements on attitude and orbit accuracy, and it is intrinsically characterized by small spatial and temporal baselines. Consequently, SENTINEL-1 data are particularly suitable for exploitation through advanced interferometric techniques such as the well-known DInSAR algorithm referred to as Small BAseline Subset (SBAS), which allows the generation of deformation time series and displacement velocity maps. In this work we present an advanced interferometric processing chain, based on the Parallel SBAS (P-SBAS) approach, for the massive processing of S1 Interferometric Wide Swath (IWS) data, aimed at generating deformation time series in an efficient, automatic and systematic way. Such a DInSAR chain is designed to exploit distributed computing infrastructures, and more specifically Cloud Computing environments, to properly deal with the storage and the processing of huge S1 datasets. In particular, since S1 IWS data are acquired with the innovative Terrain Observation with Progressive Scans (TOPS) mode, we can benefit from the structure of S1 data, which are composed of bursts that can be considered as separate acquisitions. The processing is therefore intrinsically parallelizable with respect to such independent input data, and we exploit this coarse-granularity parallelization strategy in the majority of the steps of the SBAS processing chain. Moreover, we also implemented more sophisticated parallelization approaches, exploiting both multi-node and multi-core programming techniques. Currently, Cloud Computing environments make available large collections of computing resources and storage that can be effectively exploited through the presented S1 P-SBAS processing chain to carry out interferometric analyses at a very large scale, in reduced time. This also allows us to deal with the problems connected to the use of the S1 P-SBAS chain in operational contexts, related to hazard monitoring and risk prevention and mitigation, where handling large amounts of data represents a challenging task. As a significant experimental result we performed a large spatial scale SBAS analysis relevant to Central and Southern Italy by exploiting the Amazon Web Services Cloud Computing platform. In particular, we processed in parallel 300 S1 acquisitions covering the Italian peninsula from Lazio to Sicily through the presented S1 P-SBAS processing chain, generating 710 interferograms, thus finally obtaining the displacement time series of the whole processed area. This work has been partially supported by the CNR-DPC agreement, the H2020 EPOS-IP project (GA 676564) and the ESA GEP project.
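The burst-level parallelism described above maps naturally onto a process pool. A minimal sketch under stated assumptions follows; process_burst is a hypothetical placeholder for the per-burst interferometric steps, not the actual P-SBAS code.

```python
# Coarse-granularity parallelization over independent TOPS bursts
# (a sketch; process_burst stands in for the real per-burst workflow).
from concurrent.futures import ProcessPoolExecutor

def process_burst(burst_id):
    # coregister the SLC stack, form interferograms, unwrap... for one burst
    return {"burst": burst_id, "status": "done"}

def run_chain(burst_ids, workers=16):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_burst, burst_ids))

if __name__ == "__main__":
    results = run_chain(range(300))  # e.g. one task per burst
```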
Variable Generation Power Forecasting as a Big Data Problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haupt, Sue Ellen; Kosovic, Branko
2016-10-10
To blend growing amounts of power from renewable resources into utility operations requires accurate forecasts. For both day-ahead planning and real-time operations, the power from wind and solar resources must be predicted based on real-time observations and a series of models that span the temporal and spatial scales of the problem, using physical and dynamical knowledge as well as computational intelligence. Accurate prediction is a Big Data problem that requires disparate data, multiple models that are each applicable for a specific time frame, and application of computational intelligence techniques to successfully blend all of the model and observational information in real time and deliver it to the decision makers at utilities and grid operators. This paper describes an example system that has been used for utility applications and how it has been configured to meet utility needs while addressing the Big Data issues.
NASA Astrophysics Data System (ADS)
Courdurier, M.; Monard, F.; Osses, A.; Romero, F.
2015-09-01
In medical single-photon emission computed tomography (SPECT) imaging, we seek to simultaneously obtain the internal radioactive sources and the attenuation map using not only ballistic measurements but also first-order scattering measurements, assuming a very specific scattering regime. The problem is modeled using the radiative transfer equation by means of an explicit non-linear operator that gives the ballistic and scattering measurements as a function of the radioactive source and attenuation distributions. First, by differentiating this non-linear operator we obtain a linearized inverse problem. Then, under regularity hypotheses for the source distribution and attenuation map, and considering small attenuations, we rigorously prove that the linear operator is invertible and compute its inverse explicitly. This proves local uniqueness for the non-linear inverse problem. Finally, using the previous inversion result for the linear operator, we propose a new type of iterative algorithm for simultaneous source and attenuation recovery for SPECT, based on a Neumann series and a Newton-Raphson algorithm.
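The Neumann-series ingredient of the proposed iteration is easy to illustrate in a generic setting: when a linear operator K has norm below one (the analogue of the small-attenuation assumption), (I - K)^{-1} y equals the sum of the terms K^n y. A minimal numerical sketch, not the SPECT-specific operator:

```python
import numpy as np

def neumann_solve(K, y, terms=200, tol=1e-12):
    """Approximate x = (I - K)^{-1} y by the Neumann series sum_n K^n y;
    converges when the norm of K is below 1."""
    x = y.copy()
    term = y.copy()
    for _ in range(terms):
        term = K @ term          # next series term K^n y
        x += term
        if np.linalg.norm(term) < tol:
            break
    return x

# Toy check against a direct solve
rng = np.random.default_rng(0)
K = 0.05 * rng.standard_normal((50, 50))   # small norm -> convergent series
y = rng.standard_normal(50)
assert np.allclose(neumann_solve(K, y), np.linalg.solve(np.eye(50) - K, y))
```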
NASA Astrophysics Data System (ADS)
Alam Khan, Najeeb; Razzaq, Oyoon Abdul
2016-03-01
In the present work a wavelet approximation method is employed to solve fuzzy boundary value differential equations (FBVDEs). Essentially, a truncated Legendre wavelet series together with the Legendre wavelet operational matrix of derivative is utilized to convert an FBVDE into a simple computational problem by reducing it to a system of fuzzy algebraic linear equations. The capability of the scheme is investigated on second-order FBVDEs considered under generalized H-differentiability. Solutions are represented graphically, showing the competency and accuracy of this method.
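The crisp skeleton of the approach (differentiation acting as a linear operator on Legendre coefficients, so the BVP collapses to an algebraic linear system) can be sketched with NumPy. This is a collocation variant on an ordinary, non-fuzzy test problem, omitting the paper's fuzzy arithmetic and H-differentiability machinery.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Solve y'' = y, y(-1) = e^{-1}, y(1) = e with a truncated Legendre series.
N = 12                                              # truncation order
x = np.cos(np.pi * np.arange(1, N - 1) / (N - 1))   # interior collocation pts
A = np.zeros((N, N))
rhs = np.zeros(N)
for k in range(N):
    e = np.zeros(N); e[k] = 1.0                     # k-th basis coefficient
    d2 = L.legder(e, 2)                             # y'' in coefficient space
    A[:N - 2, k] = L.legval(x, d2) - L.legval(x, e) # residual (y'' - y)(x_i)
    A[N - 2, k] = L.legval(-1.0, e)                 # boundary row at x = -1
    A[N - 1, k] = L.legval(1.0, e)                  # boundary row at x = +1
rhs[N - 2], rhs[N - 1] = np.exp(-1.0), np.exp(1.0)
c = np.linalg.solve(A, rhs)                         # Legendre coefficients
print(np.max(np.abs(L.legval(x, c) - np.exp(x))))   # tiny for N = 12
```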
A computer program for estimation from incomplete multinomial data
NASA Technical Reports Server (NTRS)
Credeur, K. R.
1978-01-01
Coding is given for maximum likelihood and Bayesian estimation of the vector p of multinomial cell probabilities from incomplete data. Also included is coding to calculate and approximate elements of the posterior mean and covariance matrices. The program is written in FORTRAN 4 language for the Control Data CYBER 170 series digital computer system with network operating system (NOS) 1.1. The program requires approximately 44000 octal locations of core storage. A typical case requires from 72 seconds to 92 seconds on CYBER 175 depending on the value of the prior parameter.
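The original coding is FORTRAN for the CYBER; as a present-day illustration of the maximum likelihood half of the problem, an EM iteration for incomplete multinomial counts can be sketched in a few lines (the grouping structure below is a hypothetical example, and the program's Bayesian posterior computations are not reproduced).

```python
import numpy as np

def em_multinomial(cell_counts, group_counts, groups, iters=200):
    """EM estimate of multinomial cell probabilities when some observations
    are known only up to a group (union) of cells."""
    K = len(cell_counts)
    p = np.full(K, 1.0 / K)
    for _ in range(iters):
        expected = cell_counts.astype(float)
        for n_j, idx in zip(group_counts, groups):
            expected[idx] += n_j * p[idx] / p[idx].sum()  # E-step: split groups
        p = expected / expected.sum()                     # M-step: normalize
    return p

# 3 cells: 30 obs in cell 0, 10 in cell 2, and 60 known only to be in {0, 1}
print(em_multinomial(np.array([30, 0, 10]), [60], [np.array([0, 1])]))
# converges to [0.9, 0.0, 0.1]
```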
Computer-enhanced robotic telesurgery minimizes esophageal perforation during Heller myotomy.
Melvin, W Scott; Dundon, John M; Talamini, Mark; Horgan, Santiago
2005-10-01
Laparoscopic Heller myotomy has emerged as the treatment of choice for achalasia. However, intraoperative esophageal perforation remains a significant complication. Computer-enhanced operative techniques have the potential to improve outcomes for certain operative procedures. Robotic, computer-enhanced laparoscopic telemanipulators using 3-dimensional magnified imaging and motion scaling are uniquely designed to facilitate certain operations requiring fine-tissue manipulation. We hypothesized that computer-enhanced robotic Heller myotomy would reduce intraoperative complications compared with laparoscopic techniques. All patients undergoing an operation for achalasia at 3 institutions with a robotic surgery system (DaVinci; Intuitive Surgical Corporation, Sunnyvale, Calif) were followed up prospectively. Demographics, perioperative course, complications, and hospital stay were recorded. Follow-up evaluation was obtained via a standardized symptom survey, office visits, and medical records. Data were compared with preoperative symptoms using a Mann-Whitney U test, and operating times were compared using ANOVA. Between August 2000 and August 2004, 104 patients underwent a robotic Heller myotomy with partial fundoplication. There were 53 women and 51 men. All patients were symptomatic. The operative time was 140.55 minutes overall, but improved from 162.63 minutes to 113.50 minutes from 2000-2002 to 2003-2004 (P = .0001). There were no esophageal perforations. There were 8 minor complications, and 1 patient required conversion to an open operation. Sixty-six (62.3%) patients were discharged on the first postoperative day, and the average hospital stay was 1.5 days. A symptom survey was completed by 79 of 104 patients (76%) at follow-up evaluation. Symptoms improved in all patients, with an average follow-up symptom score of 0.48 compared with 5.0 before the operation (P = .0001). Forty-three of the 79 patients from whom follow-up data were collected had a minimum follow-up period of 1 year. The follow-up period averaged 16 months. No patients required reoperation. Computer-enhanced robotic laparoscopic techniques provide a clear advantage over standard laparoscopy for the operative treatment of achalasia. We have shown in this large series that Heller myotomy can be completed using this technology without esophageal perforation. The application of computer-enhanced operative techniques appears to provide superior outcomes in selected procedures.
Conformal Dimensions via Large Charge Expansion
NASA Astrophysics Data System (ADS)
Banerjee, Debasish; Chandrasekharan, Shailesh; Orlando, Domenico
2018-02-01
We construct an efficient Monte Carlo algorithm that overcomes the severe signal-to-noise ratio problems and helps us to accurately compute the conformal dimensions of large-Q fields at the Wilson-Fisher fixed point in the O(2) universality class. Using it, we verify a recent proposal that conformal dimensions of strongly coupled conformal field theories with a global U(1) charge can be obtained via a series expansion in the inverse charge 1/Q. We find that the conformal dimensions of the lowest operator with a fixed charge Q are almost entirely determined by the first few terms in the series.
PASMet: a web-based platform for prediction, modelling and analyses of metabolic systems
Sriyudthsak, Kansuporn; Mejia, Ramon Francisco; Arita, Masanori; Hirai, Masami Yokota
2016-01-01
PASMet (Prediction, Analysis and Simulation of Metabolic networks) is a web-based platform for proposing and verifying mathematical models to understand the dynamics of metabolism. The advantages of PASMet include user-friendliness and accessibility, which enable biologists and biochemists to easily perform mathematical modelling. PASMet offers a series of user-functions to handle the time-series data of metabolite concentrations. The functions are organised into four steps: (i) Prediction of a probable metabolic pathway and its regulation; (ii) Construction of mathematical models; (iii) Simulation of metabolic behaviours; and (iv) Analysis of metabolic system characteristics. Each function contains various statistical and mathematical methods that can be used independently. Users who may not have enough knowledge of computing or programming can easily and quickly analyse their local data without software downloads, updates or installations. Users only need to upload their files in comma-separated values (CSV) format or enter their model equations directly into the website. Once the time-series data or mathematical equations are uploaded, PASMet automatically performs the computation on the server side. Then, users can interactively view their results and directly download them to their local computers. PASMet is freely available with no login requirement at http://pasmet.riken.jp/ from major web browsers on Windows, Mac and Linux operating systems. PMID:27174940
The Parker-Sochacki Method--A Powerful New Method for Solving Systems of Differential Equations
NASA Astrophysics Data System (ADS)
Rudmin, Joseph W.
2001-04-01
The Parker-Sochacki Method--A Powerful New Method for Solving Systems of Differential Equations Joseph W. Rudmin (Physics Dept, James Madison University) A new method for solving systems of differential equations will be presented, which has been developed by J. Edgar Parker and James Sochacki of the James Madison University Mathematics Department. The method produces Maclaurin series solutions to systems of differential equations, with the coefficients in either algebraic or numerical form. The method yields high-degree solutions: 20th degree is easily obtainable. It is conceptually simple, fast, and extremely general. It has been applied to over a hundred systems of differential equations, some of which were previously unsolved, and has yet to fail on any system for which the Maclaurin series converges. The method is non-recursive: each coefficient in the series is calculated just once, in closed form, and its accuracy is limited only by the digital accuracy of the computer. Although the original differential equations may include any mathematical functions, the computational method uses ONLY the operations of addition, subtraction, and multiplication. Furthermore, it is perfectly suited to parallel-processing computer languages. Those who learn this method will never use Runge-Kutta or predictor-corrector methods again. Examples will be presented, including the classical many-body problem.
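A minimal sketch of the idea on the Riccati equation y' = y², y(0) = 1 (whose Maclaurin series is the geometric series of 1/(1 - t)): each coefficient is produced once via a Cauchy product, using only additions and multiplications plus one division by n + 1 from the term-by-term integration.

```python
def maclaurin_riccati(degree, y0=1.0):
    """Parker-Sochacki-style recursion for y' = y^2, y(0) = y0:
    (n + 1) a_{n+1} = sum_{i=0}^{n} a_i a_{n-i} (Cauchy product)."""
    a = [y0]
    for n in range(degree):
        cauchy = sum(a[i] * a[n - i] for i in range(n + 1))  # [y^2]_n
        a.append(cauchy / (n + 1))
    return a

print(maclaurin_riccati(10))   # exact solution 1/(1 - t): all coefficients 1.0
```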
NASA Technical Reports Server (NTRS)
Dayton, J. A., Jr.; Kosmahl, H. G.; Ramins, P.; Stankiewicz, N.
1979-01-01
Experimental and analytical results are compared for two high performance, octave bandwidth TWT's that use depressed collectors (MDC's) to improve the efficiency. The computations were carried out with advanced, multidimensional computer programs that are described here in detail. These programs model the electron beam as a series of either disks or rings of charge and follow their multidimensional trajectories from the RF input of the ideal TWT, through the slow wave structure, through the magnetic refocusing system, to their points of impact in the depressed collector. Traveling wave tube performance, collector efficiency, and collector current distribution were computed and the results compared with measurements for a number of TWT-MDC systems. Power conservation and correct accounting of TWT and collector losses were observed. For the TWT's operating at saturation, very good agreement was obtained between the computed and measured collector efficiencies. For a TWT operating 3 and 6 dB below saturation, excellent agreement between computed and measured collector efficiencies was obtained in some cases but only fair agreement in others. However, deviations can largely be explained by small differences in the computed and actual spent beam energy distributions. The analytical tools used here appear to be sufficiently refined to design efficient collectors for this class of TWT. However, for maximum efficiency, some experimental optimization (e.g., collector voltages and aperture sizes) will most likely be required.
ERIC Educational Resources Information Center
Gumus, Sedat; Atalmis, Erkan Hasan
2011-01-01
The Organisation for Economic Co-operation and Development (OECD) has conducted a series of educational assessments in many OECD and non-OECD countries to support their sustainable economic growth since 2000. These assessments are named the Programme for International Student Assessment (PISA); they focus on the capabilities of 15-year-olds in three main…
ERIC Educational Resources Information Center
Cowan, Christina E.
This module is part of a series designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. This module deals specifically with concepts that are basic to fluid flow and…
ERIC Educational Resources Information Center
Simpson, James R.
This module is part of a series on Physical Processes in Terrestrial and Aquatic Ecosystems. The materials were designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process.…
ERIC Educational Resources Information Center
Cowan, Christina E.
This module is part of a series designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. This module explores some of the characteristics of aquatic organisms which can be…
Flutter Analysis for Turbomachinery Using Volterra Series
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Yao, Weigang
2014-01-01
The objective of this paper is to describe an accurate and efficient reduced order modeling method for aeroelastic (AE) analysis and for determining the flutter boundary. Without losing accuracy, we develop a reduced order model based on the Volterra series to achieve significant savings in computational cost. The aerodynamic force is provided by a high-fidelity solution from the Reynolds-averaged Navier-Stokes (RANS) equations; the structural mode shapes are determined from the finite element analysis. The fluid-structure coupling is then modeled by the state-space formulation with the structural displacement as input and the aerodynamic force as output, which in turn acts as an external force to the aeroelastic displacement equation for providing the structural deformation. NASA's rotor 67 blade is used to study its aeroelastic characteristics under the designated operating condition. First, the CFD results are validated against measured data available for the steady state condition. Then, the accuracy of the developed reduced order model is compared with the full-order solutions. Finally the aeroelastic solutions of the blade are computed and a flutter boundary is identified, suggesting that the rotor, with the material property chosen for the study, is structurally stable at the operating condition, free of encountering flutter.
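Once the kernels have been identified from the high-fidelity RANS responses, evaluating the reduced-order model is just discrete convolution. A sketch of a second-order truncated Volterra evaluation, with toy kernels standing in for identified ones:

```python
import numpy as np

def volterra_predict(u, h1, h2):
    """y[n] = sum_k h1[k] u[n-k] + sum_{k1,k2} h2[k1,k2] u[n-k1] u[n-k2]."""
    M = len(h1)
    y = np.zeros(len(u))
    for n in range(len(u)):
        past = np.array([u[n - k] if n - k >= 0 else 0.0 for k in range(M)])
        y[n] = h1 @ past + past @ h2 @ past
    return y

M = 8
h1 = 0.9 ** np.arange(M)            # decaying linear memory (hypothetical)
h2 = 0.01 * np.outer(h1, h1)        # weak quadratic kernel (hypothetical)
u = np.sin(0.3 * np.arange(100))    # structural displacement input
y = volterra_predict(u, h1, h2)     # predicted generalized aerodynamic force
```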
NASA Technical Reports Server (NTRS)
Hrbud, Ivana; VanDyke, Melissa; Houts, Mike; Goodfellow, Keith; Schafer, Charles (Technical Monitor)
2001-01-01
The Safe Affordable Fission Engine (SAFE) test series addresses Phase 1 Space Fission Systems issues, in particular non-nuclear testing and system integration issues, leading to the testing and non-nuclear demonstration of a 400-kW fully integrated flight unit. The first part of the SAFE 30 test series demonstrated operation of the simulated nuclear core and heat pipe system. Experimental data acquired in a number of different test scenarios will validate existing computational models, demonstrate system flexibility (fast start-ups, multiple start-ups/shut-downs), and simulate predictable failure modes and operating environments. The objective of the second part is to demonstrate an integrated propulsion system consisting of a core, conversion system and a thruster, where the system converts thermal heat into jet power. This end-to-end system demonstration sets a precedent for ground testing of nuclear electric propulsion systems. The paper describes the SAFE 30 end-to-end system demonstration and its subsystems.
SHARPs - A Near-Real-Time Space Weather Data Product from HMI
NASA Astrophysics Data System (ADS)
Bobra, M.; Turmon, M.; Baldner, C.; Sun, X.; Hoeksema, J. T.
2012-12-01
A data product from the Helioseismic and Magnetic Imager (HMI) on the Solar Dynamics Observatory (SDO), called Space-weather HMI Active Region Patches (SHARPs), is now available through the SDO Joint Science Operations Center (JSOC) and the Virtual Solar Observatory. SHARPs are magnetically active regions identified on the solar disk and tracked automatically in time. SHARP data are processed within a few hours of the observation time. The SHARP data series contains active region-sized disambiguated vector magnetic field data in both Lambert Cylindrical Equal-Area and CCD coordinates on a 12 minute cadence. The series also provides simultaneous HMI maps of the line-of-sight magnetic field, continuum intensity, and velocity on the same ~0.5 arc-second pixel grid. In addition, the SHARP data series provides space weather quantities computed on the inverted, disambiguated, and remapped data. The values for each tracked region are computed and updated in near real time. We present space weather results for several X-class flares; furthermore, we compare said space weather quantities with helioseismic quantities calculated using ring-diagram analysis.
Continuous variables logic via coupled automata using a DNAzyme cascade with feedback.
Lilienthal, S; Klein, M; Orbach, R; Willner, I; Remacle, F; Levine, R D
2017-03-01
The concentration of molecules can be changed by chemical reactions and thereby offers a continuous readout. Yet computer architecture is cast in textbooks in terms of binary-valued, Boolean variables. To enable reactive chemical systems to compute, we show how, using the Cox interpretation of probability theory, one can transcribe the equations of chemical kinetics as a sequence of coupled logic gates operating on continuous variables. It is discussed how the distinct chemical identity of a molecule allows us to create a common language for chemical kinetics and Boolean logic. Specifically, the logic AND operation is shown to be equivalent to a bimolecular process. The logic XOR operation represents chemical processes that take place concurrently. The values of the rate constants enter the logic scheme as inputs. By designing a reaction scheme with feedback we endow the logic gates with a built-in memory, because their output then depends on the input and also on the present state of the system. Technically, such a logic machine is an automaton. We report an experimental realization of three such coupled automata using a DNAzyme multilayer signaling cascade. A simple model verifies analytically that our experimental scheme provides an integrator generating a power series that is third order in time. The model identifies two parameters that govern the kinetics and shows how the initial concentrations of the substrates are the coefficients in the power series.
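The stated equivalence between the logic AND operation and a bimolecular process can be made concrete with a toy rate equation: the product of A + B → C accumulates appreciably only when both inputs are present. A minimal sketch (the rate constant and integration scheme are illustrative choices, not the paper's DNAzyme kinetics):

```python
def and_gate(a0, b0, k=1.0, t_end=10.0, dt=1e-3):
    """Bimolecular AND: A + B -k-> C, integrated by forward Euler.
    [C] at t_end is high only when both input concentrations are high."""
    a, b, c = a0, b0, 0.0
    for _ in range(int(t_end / dt)):
        r = k * a * b * dt
        a, b, c = a - r, b - r, c + r
    return c

for a0 in (0.0, 1.0):
    for b0 in (0.0, 1.0):
        print(a0, b0, round(and_gate(a0, b0), 3))   # large output only for 1,1
```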
COMPARISON OF PARALLEL AND SERIES HYBRID POWERTRAINS FOR TRANSIT BUS APPLICATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Zhiming; Daw, C Stuart; Smith, David E
2016-01-01
The fuel economy and emissions of both conventional and hybrid buses equipped with emissions aftertreatment were evaluated via computational simulation for six representative city bus drive cycles. Both series and parallel configurations for the hybrid case were studied. The simulation results indicate that series hybrid buses have the greatest overall advantage in fuel economy. The series and parallel hybrid buses were predicted to produce similar CO and HC tailpipe emissions but were also predicted to have reduced NOx tailpipe emissions compared to the conventional bus in higher speed cycles. For the New York bus cycle (NYBC), which has the lowest average speed among the cycles evaluated, the series bus tailpipe emissions were somewhat higher than they were for the conventional bus, while the parallel hybrid bus had significantly lower tailpipe emissions. All three bus powertrains were found to require periodic active DPF regeneration to maintain PM control. Plug-in operation of series hybrid buses appears to offer significant fuel economy benefits and is easily employed due to the relatively large battery capacity that is typical of the series hybrid configuration.
GPU-accelerated algorithms for many-particle continuous-time quantum walks
NASA Astrophysics Data System (ADS)
Piccinini, Enrico; Benedetti, Claudia; Siloi, Ilaria; Paris, Matteo G. A.; Bordone, Paolo
2017-06-01
Many-particle continuous-time quantum walks (CTQWs) represent a resource for several tasks in quantum technology, including quantum search algorithms and universal quantum computation. In order to design and implement CTQWs in a realistic scenario, one needs effective simulation tools for Hamiltonians that take into account static noise and fluctuations in the lattice, i.e. Hamiltonians containing stochastic terms. To this aim, we suggest a parallel algorithm based on the Taylor series expansion of the evolution operator, and compare its performance with those of algorithms based on the exact diagonalization of the Hamiltonian or a 4th-order Runge-Kutta integration. We prove that both the Taylor-series expansion and Runge-Kutta algorithms are reliable and have a low computational cost, the Taylor-series expansion showing the additional advantage of a memory allocation not depending on the precision of calculation. Both algorithms are also highly parallelizable within the SIMT paradigm, and are thus suitable for GPGPU computing. In turn, we have benchmarked 4 NVIDIA GPUs and 3 quad-core Intel CPUs for a 2-particle system over lattices of increasing dimension, showing that the speedup provided by GPU computing, with respect to the OPENMP parallelization, lies in the range between 8x and (more than) 20x, depending on the frequency of post-processing. GPU-accelerated codes thus allow one to overcome concerns about the execution time, and make simulations with many interacting particles on large lattices possible, limited only by the memory available on the device.
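The memory property noted above (allocation independent of the requested precision) follows from accumulating the Taylor series of the evolution operator term by term, keeping only the current term and the running sum. A serial NumPy sketch of the same strategy the paper parallelizes on GPUs:

```python
import numpy as np

def taylor_evolve(H, psi, t, order=30):
    """Apply exp(-i H t) to psi by summing the Taylor series term by term;
    only the current term and the accumulator are stored."""
    term = psi.astype(complex)
    out = term.copy()
    for k in range(1, order + 1):
        term = (-1j * t / k) * (H @ term)   # next term from the previous one
        out += term
    return out

# Toy check on a random Hermitian H against exact diagonalization
rng = np.random.default_rng(1)
A = rng.standard_normal((64, 64))
H = (A + A.T) / 2
psi = np.zeros(64, complex); psi[0] = 1.0
w, V = np.linalg.eigh(H)
exact = V @ (np.exp(-1j * w * 0.1) * (V.conj().T @ psi))
assert np.allclose(taylor_evolve(H, psi, 0.1), exact)
```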
Operator bases, S-matrices, and their partition functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henning, Brian; Lu, Xiaochuan; Melia, Tom
2017-10-27
Relativistic quantum systems that admit scattering experiments are quantitatively described by effective field theories, where S-matrix kinematics and symmetry considerations are encoded in the operator spectrum of the EFT. Here in this paper we use the S-matrix to derive the structure of the EFT operator basis, providing complementary descriptions in (i) position space utilizing the conformal algebra and cohomology and (ii) momentum space via an algebraic formulation in terms of a ring of momenta with kinematics implemented as an ideal. These frameworks systematically handle redundancies associated with equations of motion (on-shell) and integration by parts (momentum conservation). We introduce a partition function, termed the Hilbert series, to enumerate the operator basis — correspondingly, the S-matrix — and derive a matrix integral expression to compute the Hilbert series. The expression is general, easily applied in any spacetime dimension, with arbitrary field content and (linearly realized) symmetries. In addition to counting, we discuss construction of the basis. Simple algorithms follow from the algebraic formulation in momentum space. We explicitly compute the basis for operators involving up to n = 5 scalar fields. This construction universally applies to fields with spin, since the operator basis for scalars encodes the momentum dependence of n-point amplitudes. We discuss in detail the operator basis for non-linearly realized symmetries. In the presence of massless particles, there is freedom to impose additional structure on the S-matrix in the form of soft limits. The most naïve implementation for massless scalars leads to the operator basis for pions, which we confirm using the standard CCWZ formulation for non-linear realizations. Finally, although primarily discussed in the language of EFT, some of our results — conceptual and quantitative — may be of broader use in studying conformal field theories as well as the AdS/CFT correspondence.
NASA Technical Reports Server (NTRS)
Rogers, J. L., Jr.
1973-01-01
The NASTRAN computer program is capable of executing on three different types of computers: (1) the CDC 6000 series, (2) the IBM 360-370 series, and (3) the Univac 1100 series. A typical activity requiring transfer of data between dissimilar computers is the analysis of a large structure such as the space shuttle by substructuring. Models of portions of the vehicle which have been analyzed by subcontractors using their computers must be integrated into a model of the complete structure by the prime contractor on his computer. Presently the transfer of NASTRAN matrices or tables between two different types of computers is accomplished by punched cards or a magnetic tape containing card images. These methods of data transfer do not satisfy the requirements for intercomputer data transfer associated with a substructuring activity. To provide a more satisfactory transfer of data, two new programs, RDUSER and WRTUSER, were created.
Scholes, Corey; Sahni, Varun; Lustig, Sebastien; Parker, David A; Coolican, Myles R J
2014-03-01
The introduction of patient-specific instruments (PSI) for guiding bone cuts could increase the incidence of malalignment in primary total knee arthroplasty. The purpose of this study was to assess the agreement between one type of patient-specific instrumentation (Zimmer PSI) and the pre-operative plan with respect to bone cuts and component alignment during TKR using imageless computer navigation. A consecutive series of 30 femoral and tibial guides was assessed in-theatre by the same surgeon using computer navigation. Following surgical exposure, the PSI cutting guides were placed on the joint surface and alignment was assessed using the navigation tracker. The difference between in-theatre data and the pre-operative plan was recorded and analysed. The error between in-theatre measurements and the pre-operative plan for the femoral and tibial components exceeded 3° for 3% and 17% of the sample, respectively, while the error for total coronal alignment exceeded 3° for 27% of the sample. The present results indicate that alignment with Zimmer PSI cutting blocks, assessed by imageless navigation, does not match the pre-operative plan in a proportion of cases. To prevent unnecessary increases in the incidence of malalignment in primary TKR, it is recommended that these devices not be used without objective verification of alignment, either in real time or with post-operative imaging. Further work is required to identify the source of discrepancies and to validate these devices prior to routine use. Level of evidence: II.
Cognitive engineering models in space systems
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.
1993-01-01
NASA space systems, including mission operations on the ground and in space, are complex, dynamic, predominantly automated systems in which the human operator is a supervisory controller. Models of cognitive functions in complex systems are needed to describe human performance and to form the theoretical basis of operator workstation design, including displays, controls, and decision aids. Currently, there are several candidate modeling methodologies. They include the Rasmussen abstraction/aggregation hierarchy and decision ladder, the goal-means network, the problem behavior graph, and the operator function model. The research conducted under the sponsorship of this grant focuses on the extension of the theoretical structure of the operator function model and its application to NASA Johnson mission operations and space station applications. The initial portion of this research consists of two parts. The first is a series of technical exchanges between NASA Johnson and Georgia Tech researchers, whose purpose is to identify candidate applications for the current operator function model; prospects include mission operations and the Data Management System Testbed. The second portion addresses extensions of the operator function model to tailor it to the specific needs of Johnson applications. At this point, we have accomplished two things. First, during a series of conversations with JSC researchers, we defined the technical goal of the research supported by this grant to be the structural definition of the operator function model and its computer implementation, OFMspert. Both the OFM and OFMspert have matured to the point that they require infrastructure to facilitate use by researchers not involved in the evolution of the tools. The second accomplishment this year was the identification of the Payload Deployment and Retrieval System (PDRS) as a candidate system for the case study. In conjunction with government and contractor personnel in the Human-Computer Interaction Lab, the PDRS was identified as the most accessible system for the demonstration. Pursuant to this, a PDRS simulation was obtained from the HCIL and an initial knowledge engineering effort was conducted to understand the operator's tasks in the PDRS application. The preliminary results of the knowledge engineering effort and an initial formulation of an operator function model (OFM) are contained in the appendices.
NASA Astrophysics Data System (ADS)
Park, Chan-Hee; Lee, Cholwoo
2016-04-01
The Raspberry Pi series consists of low-cost, smaller-than-credit-card-sized computers to which various operating systems, such as Linux and recently even Windows 10, have been ported. Thanks to mass production and rapid technology development, the price of the various sensors that can be attached to a Raspberry Pi has been dropping at an increasing speed. Therefore, the device can be an economical choice as a small portable computer to monitor temporal hydrogeological data in the field. In this study, we present a Raspberry Pi system that measures the flow rate and temperature of groundwater at sites, stores them in a MySQL database, and produces interactive figures and tables, such as Google Charts online or Bokeh offline, for further monitoring and analysis. Since all the data are monitored over the Internet, any computer or mobile device can serve as a convenient monitoring tool. The measured data are further integrated with OpenGeoSys, a hydrogeological model that has also been ported to the Raspberry Pi series. This enables on-site hydrogeological modeling fed by temporal sensor data to meet various needs.
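A minimal sketch of the acquisition loop described above, with placeholder sensor drivers and hypothetical database credentials; the interactive Google Charts/Bokeh front end and the OpenGeoSys coupling are separate layers on top of the stored table.

```python
# Log flow and temperature to MySQL once per minute (sketch; the sensor
# read functions are placeholders for the actual drivers on the Pi).
import time
import mysql.connector  # pip install mysql-connector-python

def read_flow():         # placeholder, e.g. pulse counting on a GPIO pin
    return 0.42

def read_temperature():  # placeholder, e.g. a DS18B20 1-wire probe
    return 13.7

conn = mysql.connector.connect(host="localhost", user="hydro",
                               password="secret", database="monitoring")
cur = conn.cursor()
cur.execute("""CREATE TABLE IF NOT EXISTS readings
               (ts TIMESTAMP, flow_lps DOUBLE, temp_c DOUBLE)""")
while True:
    cur.execute("INSERT INTO readings VALUES (NOW(), %s, %s)",
                (read_flow(), read_temperature()))
    conn.commit()
    time.sleep(60)
```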
A computer program for analyzing unresolved Mossbauer hyperfine spectra
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Singh, J. J.
1978-01-01
The program for analyzing unresolved Mossbauer hyperfine spectra was written in FORTRAN 4 language for the Control Data CYBER 170 series digital computer system with network operating system 1.1. With the present dimensions, the program requires approximately 36,000 octal locations of core storage. A typical case involving two innermost coordination shells in which the amplitudes and the peak positions of all three components were estimated in 25 iterations requires 30 seconds on CYBER 173. The program was applied to determine the effects of various near neighbor impurity shells on hyperfine fields in dilute FeAl alloys.
Triangle Computer Science Distinguished Lecture Series
2018-01-30
...the great objects of scientific inquiry - the cell, the brain, the market - as well as in the models developed by scientists over the centuries for studying them. ...in principle, secure system operation can be achieved. Massive-Scale Streaming Analytics: David Bader, Georgia Institute of Technology (telecast from ...)
ERIC Educational Resources Information Center
Stevenson, R. D.
This module is part of a series designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. This module describes heat transfer processes involved in the exchange of heat…
Another elementary proof of the Jordan form of a matrix
NASA Astrophysics Data System (ADS)
Budhi, Wono Setya
2012-05-01
In this paper we establish the Jordan Form for a matrix using the elementary concepts of vector differentiation and partial fractions. The idea comes from the resolvent of the operator. For the matrix, the Laurent series is finite and easy to compute through rational representation. We also give a proof of some famous theorems in matrix analysis as consequences from the result.
ERIC Educational Resources Information Center
Stevenson, R. D.
These materials were designed to be used by life science students for instruction in the application of physical theory to ecosystem operation. Most modules contain computer programs which are built around a particular application of a physical process. Several modules in the thermodynamic series considered the application of the First Law to…
A flexible tool for diagnosing water, energy, and entropy budgets in climate models
NASA Astrophysics Data System (ADS)
Lembo, Valerio; Lucarini, Valerio
2017-04-01
We have developed new flexible software for studying the global energy budget, the hydrological cycle, and the material entropy production of global climate models. The program receives as input radiative, latent and sensible energy fluxes, with the requirement that the variable names are in agreement with the Climate and Forecast (CF) conventions for the production of NetCDF datasets. Annual mean maps, meridional sections and time series are computed by means of the Climate Data Operators (CDO) collection of command-line operators developed at the Max Planck Institute for Meteorology (MPI-M). If a land-sea mask is provided, the program also computes the required quantities separately over the continents and oceans. Depending on the user's choice, the program also calls MATLAB to compute meridional heat transports and the locations and intensities of their peaks in the two hemispheres. We are currently planning to adapt the program for inclusion in the Earth System Model eValuation Tool (ESMValTool) community diagnostics.
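The CDO calls underlying such diagnostics can be scripted directly; a sketch of representative invocations (operator names are standard CDO, file names are hypothetical):

```python
import subprocess

def cdo(*args):
    """Thin wrapper around the cdo command-line tool."""
    subprocess.run(["cdo", *args], check=True)

cdo("timmean", "energy_fluxes.nc", "annual_mean_map.nc")      # mean maps
cdo("zonmean", "energy_fluxes.nc", "meridional_section.nc")   # sections
cdo("fldmean", "energy_fluxes.nc", "global_timeseries.nc")    # time series
# with a land-sea mask, budgets over the oceans only:
cdo("fldmean", "-ifthen", "ocean_mask.nc", "energy_fluxes.nc",
    "ocean_timeseries.nc")
```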
Photometric analysis in the Kepler Science Operations Center pipeline
NASA Astrophysics Data System (ADS)
Twicken, Joseph D.; Clarke, Bruce D.; Bryson, Stephen T.; Tenenbaum, Peter; Wu, Hayley; Jenkins, Jon M.; Girouard, Forrest; Klaus, Todd C.
2010-07-01
We describe the Photometric Analysis (PA) software component and its context in the Kepler Science Operations Center (SOC) Science Processing Pipeline. The primary tasks of this module are to compute the photometric flux and photocenters (centroids) for over 160,000 long cadence (~thirty minute) and 512 short cadence (~one minute) stellar targets from the calibrated pixels in their respective apertures. We discuss science algorithms for long and short cadence PA: cosmic ray cleaning; background estimation and removal; aperture photometry; and flux-weighted centroiding. We discuss the end-to-end propagation of uncertainties for the science algorithms. Finally, we present examples of photometric apertures, raw flux light curves, and centroid time series from Kepler flight data. PA light curves, centroid time series, and barycentric timestamp corrections are exported to the Multi-mission Archive at Space Telescope [Science Institute] (MAST) and are made available to the general public in accordance with the NASA/Kepler data release policy.
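Flux-weighted centroiding, one of the PA algorithms above, is the first moment of the flux over the aperture; a toy sketch:

```python
import numpy as np

def flux_weighted_centroid(flux, rows, cols):
    """Photocenter as the flux-weighted mean pixel position."""
    total = flux.sum()
    return (rows * flux).sum() / total, (cols * flux).sum() / total

# Toy 5x5 aperture with a star offset from the grid center
r, c = np.mgrid[0:5, 0:5]
flux = np.exp(-((r - 2.3) ** 2 + (c - 1.8) ** 2))
print(flux_weighted_centroid(flux, r, c))   # approximately (2.3, 1.8)
```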
Recursive-operator method in vibration problems for rod systems
NASA Astrophysics Data System (ADS)
Rozhkova, E. V.
2009-12-01
Using linear differential equations with constant coefficients describing one-dimensional dynamical processes as an example, we show that the solutions of these equations and systems are related to the solution of the corresponding numerical recursion relations and one does not have to compute the roots of the corresponding characteristic equations. The arbitrary functions occurring in the general solution of the homogeneous equations are determined by the initial and boundary conditions or are chosen from various classes of analytic functions. The solutions of the inhomogeneous equations are constructed in the form of integro-differential series acting on the right-hand side of the equation, and the coefficients of the series are determined from the same recursion relations. The convergence of formal solutions as series of a more general recursive-operator construction was proved in [1]. In the special case where the solutions of the equation can be represented in separated variables, the power series can be effectively summed, i.e., expressed in terms of elementary functions, and coincide with the known solutions. In this case, to determine the natural vibration frequencies, one obtains algebraic rather than transcendental equations, which permits exactly determining the imaginary and complex roots of these equations without using the graphic method [2, pp. 448-449]. The correctness of the obtained formulas (differentiation formulas, explicit expressions for the series coefficients, etc.) can be verified directly by appropriate substitutions; therefore, we do not prove them here.
NASA Astrophysics Data System (ADS)
Ablinger, J.; Behring, A.; Blümlein, J.; De Freitas, A.; von Manteuffel, A.; Schneider, C.
2016-05-01
Three loop ladder and V-topology diagrams contributing to the massive operator matrix element AQg are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable N and the dimensional parameter ε. Given these representations, the desired Laurent series expansions in ε can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis also performing the expansion in the dimensional parameter in case it is expressible in terms of indefinite nested product-sum expressions. This structural result is based on new results of our difference ring theory. In the cases discussed we deal with iterative sum- and integral-solutions over general alphabets. The final results are expressed in terms of special sums, forming quasi-shuffle algebras, such as nested harmonic sums, generalized harmonic sums, and nested binomially weighted (cyclotomic) sums. Analytic continuations to complex values of N are possible through the recursion relations obeyed by these quantities and their analytic asymptotic expansions. The latter lead to a host of new constants beyond the multiple zeta values, the infinite generalized harmonic and cyclotomic sums in the case of V-topologies.
NASA Astrophysics Data System (ADS)
Philip, Sajeev; Martin, Randall V.; Keller, Christoph A.
2016-05-01
Chemistry-transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemistry-transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to operator duration. Subsequently, we compare the species simulated with operator durations from 10 to 60 min as typically used by global chemistry-transport models, and identify the operator durations that optimize both computational expense and simulation accuracy. We find that longer continuous transport operator duration increases concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production with longer transport operator duration. Longer chemical operator duration decreases sulfate and ammonium but increases nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by up to a factor of 5 from fine (5 min) to coarse (60 min) operator duration. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, secondary inorganic aerosols, ozone and carbon monoxide with a finer temporal or spatial resolution taken as "truth". Relative simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) operator duration. Chemical operator duration twice that of the transport operator duration offers more simulation accuracy per unit computation. However, the relative simulation error from coarser spatial resolution generally exceeds that from longer operator duration; e.g., degrading from 2° × 2.5° to 4° × 5° increases error by an order of magnitude. We recommend prioritizing fine spatial resolution before considering different operator durations in offline chemistry-transport models. We encourage chemistry-transport model users to specify in publications the durations of operators due to their effects on simulation accuracy.
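The trade-off being quantified comes from operator splitting: transport and chemistry are applied alternately, each held fixed for the chosen operator duration. A toy first-order (sequential) splitting sketch with stand-in operators, not GEOS-Chem's:

```python
import numpy as np

def simulate(c0, transport, chemistry, dt_op, t_end):
    """Sequential operator splitting: longer dt_op means fewer, cheaper
    steps but larger splitting error."""
    c = c0.copy()
    for _ in range(int(round(t_end / dt_op))):
        c = transport(c, dt_op)   # transport operator
        c = chemistry(c, dt_op)   # chemistry operator
    return c

# Stand-ins: periodic advection of one cell per time unit; linear decay.
transport = lambda c, dt: np.roll(c, int(round(dt)))
chemistry = lambda c, dt: c * np.exp(-0.1 * dt)
c0 = np.zeros(60); c0[0] = 1.0
print(simulate(c0, transport, chemistry, dt_op=5, t_end=60).max())
```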
OceanXtremes: Scalable Anomaly Detection in Oceanographic Time-Series
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Armstrong, E. M.; Chin, T. M.; Gill, K. M.; Greguska, F. R., III; Huang, T.; Jacob, J. C.; Quach, N.
2016-12-01
The oceanographic community must meet the challenge to rapidly identify features and anomalies in complex and voluminous observations to further science and improve decision support. Given this data-intensive reality, we are developing an anomaly detection system, called OceanXtremes, powered by an intelligent, elastic Cloud-based analytic service backend that enables execution of domain-specific, multi-scale anomaly and feature detection algorithms across the entire archive of 15 to 30-year ocean science datasets.Our parallel analytics engine is extending the NEXUS system and exploits multiple open-source technologies: Apache Cassandra as a distributed spatial "tile" cache, Apache Spark for in-memory parallel computation, and Apache Solr for spatial search and storing pre-computed tile statistics and other metadata. OceanXtremes provides these key capabilities: Parallel generation (Spark on a compute cluster) of 15 to 30-year Ocean Climatologies (e.g. sea surface temperature or SST) in hours or overnight, using simple pixel averages or customizable Gaussian-weighted "smoothing" over latitude, longitude, and time; Parallel pre-computation, tiling, and caching of anomaly fields (daily variables minus a chosen climatology) with pre-computed tile statistics; Parallel detection (over the time-series of tiles) of anomalies or phenomena by regional area-averages exceeding a specified threshold (e.g. high SST in El Nino or SST "blob" regions), or more complex, custom data mining algorithms; Shared discovery and exploration of ocean phenomena and anomalies (facet search using Solr), along with unexpected correlations between key measured variables; Scalable execution for all capabilities on a hybrid Cloud, using our on-premise OpenStack Cloud cluster or at Amazon. The key idea is that the parallel data-mining operations will be run "near" the ocean data archives (a local "network" hop) so that we can efficiently access the thousands of files making up a three decade time-series. The presentation will cover the architecture of OceanXtremes, parallelization of the climatology computation and anomaly detection algorithms using Spark, example results for SST and other time-series, and parallel performance metrics.
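The climatology-and-anomaly core of such a pipeline can be sketched with Spark DataFrames (column and file names below are hypothetical; the deployed system operates on NEXUS tiles with Cassandra and Solr, as described):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sst-anomalies").getOrCreate()
sst = spark.read.parquet("sst_obs.parquet")   # cols: lat, lon, date, sst

# Multi-decade day-of-year climatology per grid cell
clim = (sst.withColumn("doy", F.dayofyear("date"))
           .groupBy("lat", "lon", "doy")
           .agg(F.avg("sst").alias("sst_clim")))

# Anomaly = observation minus climatology; flag large departures
anom = (sst.withColumn("doy", F.dayofyear("date"))
           .join(clim, ["lat", "lon", "doy"])
           .withColumn("anom", F.col("sst") - F.col("sst_clim")))
events = anom.filter(F.abs(F.col("anom")) > 2.0)   # e.g. >2 K departures
events.show()
```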
A high efficiency motor/generator for magnetically suspended flywheel energy storage system
NASA Technical Reports Server (NTRS)
Niemeyer, W. L.; Studer, P.; Kirk, J. A.; Anand, D. K.; Zmood, R. B.
1989-01-01
The authors discuss the theory and design of a brushless direct current motor for use in a flywheel energy storage system. The motor design is optimized for a nominal 4.5-in outside diameter operating within a speed range of 33,000-66,000 revolutions per minute with a 140-V maximum supply voltage. The equations which govern the motor's operation are used to compute a series of acceptable design parameter combinations for ideal operation. Engineering tradeoffs are then performed to minimize the irrecoverable energy loss while remaining within the design constraint boundaries. A final integrated structural design whose features allow it to be incorporated with the 500-Wh magnetically suspended flywheel is presented.
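One governing constraint alone, that the back-EMF at top speed stay below the 140-V supply, already prunes the design space; a back-of-envelope sweep (the candidate voltage-constant values are assumptions, not the paper's):

```python
import numpy as np

V_MAX = 140.0                            # supply voltage [V]
w_max = 66_000 * 2 * np.pi / 60          # top speed [rad/s]
for Ke in np.linspace(0.005, 0.04, 8):   # candidate voltage constants
    bemf = Ke * w_max                    # back-EMF at 66,000 rpm
    status = "ok" if bemf < V_MAX else "exceeds supply"
    print(f"Ke={Ke:.3f} V*s/rad  back-EMF={bemf:6.1f} V  {status}")
```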
Reconciliation of the cloud computing model with US federal electronic health record regulations.
Schweitzer, Eugene J
2012-01-01
Cloud computing refers to subscription-based, fee-for-service utilization of computer hardware and software over the Internet. The model is gaining acceptance for business information technology (IT) applications because it allows capacity and functionality to increase on the fly without major investment in infrastructure, personnel or licensing fees. Large IT investments can be converted to a series of smaller operating expenses. Cloud architectures could potentially be superior to traditional electronic health record (EHR) designs in terms of economy, efficiency and utility. A central issue for EHR developers in the US is that these systems are constrained by federal regulatory legislation and oversight. These laws focus on security and privacy, which are well-recognized challenges for cloud computing systems in general. EHRs built with the cloud computing model can achieve acceptable privacy and security through business associate contracts with cloud providers that specify compliance requirements, performance metrics and liability sharing.
McHugh, Stuart
1976-01-01
The material in this report can be grouped into two categories: 1) programs that compute tilts produced by a vertically oriented expanding rectangular dislocation loop in an elastic or viscoelastic material and 2) programs that compute the shear stresses, strains, and shear displacements in a three-phase half-space (i.e. a half-space containing a vertical slab). Each section describes the relevant theory, and provides a detailed guide to the operation of the programs. A series of examples is provided at the end of each section.
Methods of geometrical integration in accelerator physics
NASA Astrophysics Data System (ADS)
Andrianov, S. N.
2016-12-01
We consider a method of geometric integration for the long-term evolution of a particle beam in cyclic accelerators, based on a matrix representation of the particle-evolution operator. The method allows the beam evolution, including nonlinear effects, to be calculated in terms of two-dimensional matrices. Geometric integration introduces into the computational algorithms the corrections needed to preserve the qualitative properties of maps represented as truncated series generated by the evolution operator. The formalism extends to both polarized and intense beams. Examples of practical applications are described.
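To make the matrix-map picture concrete, the sketch below iterates a truncated one-turn map (a linear matrix plus an optional second-order tensor) over many turns. The function name, the tensor convention, and the sample tune are invented for illustration; this is a minimal sketch of truncated-map tracking, not the paper's algorithm.

import numpy as np

def track(turns, x0, R, T=None):
    # Iterate a truncated one-turn map: x -> R x (+ second-order term).
    x = np.asarray(x0, dtype=float)
    history = [x.copy()]
    for _ in range(turns):
        x_next = R @ x
        if T is not None:
            # Hypothetical second-order (nonlinear) part of the truncated map.
            x_next += np.einsum('ijk,j,k->i', T, x, x)
        x = x_next
        history.append(x.copy())
    return np.array(history)

# Example: a purely linear rotation (stable cell) tracked for 1000 turns.
mu = 2 * np.pi * 0.31          # assumed fractional tune
R = np.array([[np.cos(mu), np.sin(mu)],
              [-np.sin(mu), np.cos(mu)]])
orbit = track(1000, [1e-3, 0.0], R)

A geometric (symplectic) integrator would additionally constrain the truncated map so that invariants such as phase-space area are preserved over long runs; the sketch only shows the iteration itself.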
Djurdjevic, Tanja; Rehwald, Rafael; Knoflach, Michael; Matosevic, Benjamin; Kiechl, Stefan; Gizewski, Elke Ruth; Glodny, Bernhard; Grams, Astrid Ellen
2017-03-01
After intraarterial recanalisation (IAR), haemorrhage and blood-brain barrier (BBB) disruption can be distinguished using dual-energy computed tomography (DECT). The aim of the present study was to investigate whether future infarction development can be predicted from DECT. DECT scans of 20 patients showing 45 BBB disrupted areas after IAR were assessed and compared with follow-up examinations. Receiver operating characteristic (ROC) analyses using densities from the iodine map (IM) and virtual non-contrast (VNC) series were performed. Future infarction areas are denser than future non-infarction areas on IM series (23.44 ± 24.86 vs. 5.77 ± 2.77; p < 0.0001) and more hypodense on VNC series (29.71 ± 3.33 vs. 35.33 ± 3.50; p < 0.0001). ROC analyses for the IM series showed an area under the curve (AUC) of 0.99 (cut-off: <9.97 HU; p < 0.05; sensitivity 91.18 %; specificity 100.00 %; accuracy 0.93) for the prediction of future infarctions. The AUC for the prediction of haemorrhagic infarctions was 0.78 (cut-off >17.13 HU; p < 0.05; sensitivity 90.00 %; specificity 62.86 %; accuracy 0.69). The VNC series allowed prediction of infarction volume. Future infarction development after IAR can be reliably predicted with the IM series. The prediction of haemorrhages and of infarction size is less reliable. • The IM series (DECT) can predict future infarction development after IAR. • Later haemorrhages can be predicted using the IM and the BW series. • The volume of definable hypodense areas in VNC correlates with infarction volume.
Mathematical models for space shuttle ground systems
NASA Technical Reports Server (NTRS)
Tory, E. G.
1985-01-01
Math models are a series of algorithms composed of algebraic equations and Boolean logic. At Kennedy Space Center, math models for the Space Shuttle systems run on Honeywell 66/80 digital computers, Modcomp II/45 minicomputers, and special-purpose hardware simulators (microcomputers). The Shuttle Ground Operations Simulator operating system provides the language formats, subroutines, queueing schemes, execution modes, and support software to write, maintain, and execute the models. The ground systems presented consist primarily of the liquid oxygen and liquid hydrogen cryogenic propellant systems, as well as the liquid oxygen External Tank Gaseous Oxygen Vent Hood/Arm and the Vehicle Assembly Building (VAB) High Bay Cells. The purpose of math modeling is to simulate the ground hardware systems and to provide an environment for testing in a benign mode. This capability allows engineers to check out application software for loading and launching the vehicle, and to verify the Checkout, Control, & Monitor Subsystem within the Launch Processing System. It is also used to train operators and to predict system response and status in various configurations (normal, emergency, and contingency operations), including untried configurations or those too dangerous to try under real conditions, i.e., failure modes.
Quantification of correlations in quantum many-particle systems.
Byczuk, Krzysztof; Kuneš, Jan; Hofstetter, Walter; Vollhardt, Dieter
2012-02-24
We introduce a well-defined and unbiased measure of the strength of correlations in quantum many-particle systems which is based on the relative von Neumann entropy computed from the density operator of correlated and uncorrelated states. The usefulness of this general concept is demonstrated by quantifying correlations of interacting electrons in the Hubbard model and in a series of transition-metal oxides using dynamical mean-field theory.
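A minimal numerical sketch of the quantity described, assuming the correlated and uncorrelated states are available as density matrices: the relative von Neumann entropy S(rho || rho0) = Tr[rho (ln rho - ln rho0)] can be computed from eigendecompositions. The function name and the two-qubit example are illustrative, not taken from the paper.

import numpy as np

def relative_entropy(rho, rho0, eps=1e-12):
    # S(rho || rho0) = Tr[rho ln rho] - Tr[rho ln rho0], in nats.
    w = np.linalg.eigvalsh(rho)
    s_rho = np.sum(w[w > eps] * np.log(w[w > eps]))   # convention: 0 ln 0 = 0
    w0, v0 = np.linalg.eigh(rho0)
    ln_rho0 = v0 @ np.diag(np.log(np.clip(w0, eps, None))) @ v0.conj().T
    return float(s_rho - np.real(np.trace(rho @ ln_rho0)))

# Example: a pure entangled two-qubit state vs. the uncorrelated product
# of its maximally mixed single-site reduced states.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
rho = np.outer(psi, psi)
rho0 = np.eye(4) / 4.0
print(relative_entropy(rho, rho0))   # 2 ln 2 for this example

The measure diverges if rho0 has zero weight where rho has support, which is why the eigenvalues of rho0 are clipped at a small floor here.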
Three-dimensional computer-aided human factors engineering analysis of a grafting robot.
Chiu, Y C; Chen, S; Wu, G J; Lin, Y H
2012-07-01
The objective of this research was to conduct a human factors engineering analysis of a grafting robot design using computer-aided 3D simulation technology. A prototype tubing-type grafting robot for fruits and vegetables was the subject of a series of case studies. To facilitate the incorporation of human models into the operating environment of the grafting robot, I-DEAS graphics software was applied to establish individual models of the grafting robot in line with Jack ergonomic analysis. Six human models (95th percentile, 50th percentile, and 5th percentile by height for both males and females) were employed to simulate the operating conditions and working postures in a real operating environment. The lower back and upper limb stresses of the operators were analyzed using the lower back analysis (LBA) and rapid upper limb assessment (RULA) functions in Jack. The experimental results showed that if a leg space is introduced under the robot, the operator can sit closer to the robot, which reduces the operator's level of lower back and upper limb stress. The proper environmental layout for Taiwanese operators, for minimum levels of lower back and upper limb stress, is to set the grafting operation 23.2 cm away from the operator at a height of 85 cm and with 45 cm between the rootstock and scion units.
Efficient parallelization of analytic bond-order potentials for large-scale atomistic simulations
NASA Astrophysics Data System (ADS)
Teijeiro, C.; Hammerschmidt, T.; Drautz, R.; Sutmann, G.
2016-07-01
Analytic bond-order potentials (BOPs) provide a way to compute atomistic properties with controllable accuracy. For large-scale computations of heterogeneous compounds at the atomistic level, both the computational efficiency and memory demand of BOP implementations have to be optimized. Since the evaluation of BOPs is a local operation within a finite environment, the parallelization concepts known from short-range interacting particle simulations can be applied to improve the performance of these simulations. In this work, several efficient parallelization methods for BOPs that use three-dimensional domain decomposition schemes are described. The schemes are implemented into the bond-order potential code BOPfox, and their performance is measured in a series of benchmarks. Systems of up to several millions of atoms are simulated on a high performance computing system, and parallel scaling is demonstrated for up to thousands of processors.
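The core of a three-dimensional domain decomposition is the mapping from atom positions to owning processes. The sketch below shows that step only; the function name, grid shape, and linear-rank convention are assumptions for illustration and are not BOPfox's actual scheme.

import numpy as np

def decompose(positions, box, grid):
    # Assign each atom to a domain in a 3D Cartesian decomposition.
    # positions: (N, 3) array with coordinates in [0, box); grid: e.g. (4, 4, 2).
    grid = np.asarray(grid)
    cell = np.asarray(box) / grid
    ijk = np.floor(positions / cell).astype(int) % grid
    # Linear domain (rank) index, x fastest -- a common convention.
    return ijk[:, 0] + grid[0] * ijk[:, 1] + grid[0] * grid[1] * ijk[:, 2]

# Example: one million random atoms over a 4 x 4 x 2 processor grid.
pos = np.random.rand(1_000_000, 3) * 100.0
ranks = decompose(pos, box=(100.0, 100.0, 100.0), grid=(4, 4, 2))

In an actual short-range code each domain would additionally exchange ghost atoms within the interaction cutoff with its neighbor domains; that communication layer is omitted from the sketch.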
TIMESERIESSTREAMING.VI: LabVIEW program for reliable data streaming of large analog time series
NASA Astrophysics Data System (ADS)
Czerwinski, Fabian; Oddershede, Lene B.
2011-02-01
With modern data acquisition devices that are fast and very precise, scientists often face the task of dealing with huge amounts of data. These need to be rapidly processed and stored onto a hard disk. We present a LabVIEW program which reliably streams analog time series at MHz sampling rates. Its run time has virtually no limitation. We explicitly show how to use the program to extract time series from two experiments: for a photodiode detection system that tracks the position of an optically trapped particle, and for a measurement of ionic current through a glass capillary. The program is easy to use and versatile, as the input can be any type of analog signal. Also, the data streaming software is simple, highly reliable, and can be easily customized to include, e.g., real-time power spectral analysis and Allan variance noise quantification.
Program summary:
Program title: TimeSeriesStreaming.VI
Catalogue identifier: AEHT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 250
No. of bytes in distributed program, including test data, etc.: 63 259
Distribution format: tar.gz
Programming language: LabVIEW (http://www.ni.com/labview/)
Computer: Any machine running LabVIEW 8.6 or higher
Operating system: Windows XP and Windows 7
RAM: 60-360 Mbyte
Classification: 3
Nature of problem: For numerous scientific and engineering applications, it is highly desirable to have an efficient, reliable, and flexible program to perform data streaming of time series sampled with high frequencies and possibly for long time intervals. This type of data acquisition often produces very large amounts of data not easily streamed onto a computer hard disk using standard methods.
Solution method: This LabVIEW program is developed to directly stream any kind of time series onto a hard disk. Due to optimized timing and usage of computational resources, such as multicores and protocols for memory usage, this program provides extremely reliable data acquisition. In particular, the program is optimized to deal with large amounts of data, e.g., taken with high sampling frequencies and over long time intervals. The program can be easily customized for time series analyses.
Restrictions: Only tested in Windows-operating LabVIEW environments; must use TDMS format; acquisition cards must be LabVIEW compatible; driver DAQmx installed.
Running time: As desired: microseconds to hours
Comparison of Parallel and Series Hybrid Power Trains for Transit Bus Applications
Gao, Zhiming; Daw, C. Stuart; Smith, David E.; ...
2016-08-01
The fuel economy and emissions of conventional and hybrid buses equipped with emissions aftertreatment were evaluated via computational simulation for six representative city bus drive cycles. Both series and parallel configurations for the hybrid case were studied. The simulation results indicated that series hybrid buses have the greatest overall advantage in fuel economy. The series and parallel hybrid buses were predicted to produce similar carbon monoxide and hydrocarbon tailpipe emissions but were also predicted to have reduced tailpipe emissions of nitrogen oxides compared with the conventional bus in higher speed cycles. For the New York bus cycle, which has the lowest average speed among the cycles evaluated, the series bus tailpipe emissions were somewhat higher than they were for the conventional bus; the parallel hybrid bus had significantly lower tailpipe emissions. All three bus power trains were found to require periodic active diesel particulate filter regeneration to maintain control of particulate matter. Finally, plug-in operation of series hybrid buses appears to offer significant fuel economy benefits and is easily employed because of the relatively large battery capacity that is typical of the series hybrid configuration.
NASA Astrophysics Data System (ADS)
Phillips, D. A.; Herring, T.; Melbourne, T. I.; Murray, M. H.; Szeliga, W. M.; Floyd, M.; Puskas, C. M.; King, R. W.; Boler, F. M.; Meertens, C. M.; Mattioli, G. S.
2017-12-01
The Geodesy Advancing Geosciences and EarthScope (GAGE) Facility, operated by UNAVCO, provides a diverse suite of geodetic data, derived products and cyberinfrastructure services to support community Earth science research and education. GPS data and products including decadal station position time series and velocities are provided for 2000+ continuous GPS stations from the Plate Boundary Observatory (PBO) and other networks distributed throughout the high Arctic, North America, and Caribbean regions. The position time series contain a multitude of signals in addition to the secular motions, including coseismic and postseismic displacements, interseismic strain accumulation, and transient signals associated with hydrologic and other processes. We present our latest velocity field solutions, new time series offset estimate products, and new time series examples associated with various phenomena. Position time series, and the signals they contain, are inherently dependent upon analysis parameters such as network scaling and reference frame realization. The estimation of scale changes for example, a common practice, has large impacts on vertical motion estimates. GAGE/PBO velocities and time series are currently provided in IGS (IGb08) and North America (NAM08, IGb08 rotated to a fixed North America Plate) reference frames. We are reprocessing all data (1996 to present) as part of the transition from IGb08 to IGS14 that began in 2017. New NAM14 and IGS14 data products are discussed. GAGE/PBO GPS data products are currently generated using onsite computing clusters. As part of an NSF funded EarthCube Building Blocks project called "Deploying MultiFacility Cyberinfrastructure in Commercial and Private Cloud-based Systems (GeoSciCloud)", we are investigating performance, cost, and efficiency differences between local computing resources and cloud based resources. Test environments include a commercial cloud provider (Amazon/AWS), NSF cloud-like infrastructures within XSEDE (TACC, the Texas Advanced Computing Center), and in-house cyberinfrastructures. Preliminary findings from this effort are presented. Web services developed by UNAVCO to facilitate the discovery, customization and dissemination of GPS data and products are also presented.
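The signals described above (secular velocity, seasonal terms, step offsets) are commonly estimated from daily position time series by linear least squares. The sketch below fits such a model to a single coordinate component; the design matrix, the synthetic numbers, and the offset epoch are invented for illustration and do not represent GAGE/PBO processing.

import numpy as np

def fit_position_series(t, y, offset_epochs):
    # Least-squares fit: intercept + velocity + annual sine/cosine
    # + one step (Heaviside) term per known offset epoch. t is in years.
    cols = [np.ones_like(t), t,
            np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)]
    cols += [(t >= ep).astype(float) for ep in offset_epochs]
    A = np.column_stack(cols)
    params, *_ = np.linalg.lstsq(A, y, rcond=None)
    return params  # [intercept, velocity, sin amp, cos amp, offsets...]

# Example: synthetic daily data with a 5 mm/yr trend, a 3 mm annual
# term, and a 12 mm coseismic step at t = 6.2 yr.
t = np.arange(0.0, 10.0, 1 / 365.25)
y = 5.0 * t + 3.0 * np.sin(2 * np.pi * t) + 12.0 * (t >= 6.2)
y += np.random.normal(0.0, 1.5, t.size)
print(fit_position_series(t, y, offset_epochs=[6.2]))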
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moryakov, A. V., E-mail: sailor@orc.ru
2016-12-15
An algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations is presented. The algorithm for systems of first-order differential equations is implemented in the EDELWEISS code, with the possibility of parallel computations on supercomputers employing the MPI (Message Passing Interface) standard for data exchange between parallel processes. The solution is represented by a series of orthogonal polynomials on the interval [0, 1]. The algorithm is characterized by its simplicity and by the ability to solve nonlinear problems with a correction of the operator in accordance with the solution obtained in the previous iteration.
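To illustrate the representation (not the EDELWEISS algorithm itself), the sketch below projects the known solution of a small linear system y' = A y on [0, 1] onto shifted Legendre polynomials by Gauss quadrature and evaluates the truncated series. The test matrix and truncation order are arbitrary.

import numpy as np
from numpy.polynomial import legendre as L
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-25.0, -2.0]])     # assumed test system
y0 = np.array([1.0, 0.0])
nodes, weights = L.leggauss(64)               # Gauss nodes on [-1, 1]
t = (nodes + 1.0) / 2.0                       # map to [0, 1]
Y = np.array([expm(A * ti) @ y0 for ti in t]) # reference solution values

ncoef = 12
coef = np.empty((ncoef, 2))
for k in range(ncoef):
    Pk = L.legval(nodes, [0.0] * k + [1.0])
    # c_k = (2k+1)/2 * integral of y(x) P_k(x) over [-1, 1].
    coef[k] = (weights[:, None] * Pk[:, None] * Y).sum(axis=0) * (2 * k + 1) / 2.0

# Evaluate the truncated series at t = 0.5 (node x = 0) and compare.
print(L.legval(0.0, coef), expm(0.5 * A) @ y0)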
Automatic Processing of Reactive Polymers
NASA Technical Reports Server (NTRS)
Roylance, D.
1985-01-01
A series of process modeling computer codes were examined. The codes use finite element techniques to determine the time-dependent process parameters operative during nonisothermal reactive flows such as can occur in reaction injection molding or composites fabrication. The use of these analytical codes to perform experimental control functions is examined; since the models can determine the state of all variables everywhere in the system, they can be used in a manner similar to currently available experimental probes. A small but well instrumented reaction vessel in which fiber-reinforced plaques are cured using computer control and data acquisition was used. The finite element codes were also extended to treat this particular process.
A new method of real-time detection of changes in periodic data stream
NASA Astrophysics Data System (ADS)
Lyu, Chen; Lu, Guoliang; Cheng, Bin; Zheng, Xiangwei
2017-07-01
The change point detection in periodic time series is much desired in many practical settings. We present a novel algorithm for this task, which includes two phases: (1) anomaly measure: on the basis of a typical regression model, we propose a new computational method to measure anomalies in a time series which does not require any reference data from other measurements; (2) change detection: we introduce a new martingale test for detection which can be operated in an unsupervised and nonparametric way. We have conducted extensive experiments to systematically test our algorithm. The results make us believe that our algorithm can be directly applied in many real-world change-point-detection applications.
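The authors' regression-based anomaly measure and exact test are not reproduced here; the sketch below implements a generic randomized power martingale over conformal p-values of anomaly scores, one common way such a martingale change test can be operated. All names and thresholds are assumptions.

import numpy as np

def power_martingale(scores, epsilon=0.92, threshold=20.0, seed=None):
    # Online martingale test on a stream of anomaly scores
    # (larger score = more anomalous). Alarm when M_n exceeds threshold.
    rng = np.random.default_rng(seed)
    history, log_m = [], 0.0
    for n, s in enumerate(scores):
        history.append(s)
        h = np.asarray(history)
        # Randomized conformal p-value: rank of s among scores seen so far.
        p = ((h > s).sum() + rng.uniform() * (h == s).sum()) / len(h)
        p = max(p, 1e-12)
        log_m += np.log(epsilon) + (epsilon - 1.0) * np.log(p)
        if log_m > np.log(threshold):
            return n  # index at which the change is declared
    return None

# Example: noise whose mean shifts at sample 300.
x = np.r_[np.random.normal(0, 1, 300), np.random.normal(3, 1, 300)]
print(power_martingale(np.abs(x)))

Under exchangeability the p-values are uniform and M_n is a martingale, so a large M_n is evidence that the distribution has changed.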
Expanded serial communication capability for the transport systems research vehicle laptop computers
NASA Technical Reports Server (NTRS)
Easley, Wesley C.
1991-01-01
A recent upgrade of the Transport Systems Research Vehicle (TSRV) operated by the Advanced Transport Operating Systems Program Office at the NASA Langley Research Center included installation of a number of Grid 1500 series laptop computers. Each unit is a 80386-based IBM PC clone. RS-232 data busses are needed for TSRV flight research programs, and it has been advantageous to extend the application of the Grids in this area. Use was made of the expansion features of the Grid internal bus to add a user programmable serial communication channel. Software to allow use of the Grid bus expansion has been written and placed in a Turbo C library for incorporation into applications programs in a transparent manner via function calls. Port setup; interrupt-driven, two-way data transfer; and software flow control are built into the library functions.
Challenges of developing an electro-optical system for measuring man's operational envelope
NASA Technical Reports Server (NTRS)
Woolford, B.
1985-01-01
In designing work stations and restraint systems, and in planning tasks to be performed in space, a knowledge of the capabilities of the operator is essential. Answers to such questions as whether a specific control or work surface can be reached from a given restraint and how much force can be applied are of particular interest. A computer-aided design system has been developed for designing and evaluating work stations, etc., and the Anthropometric Measurement Laboratory (AML) has been charged with obtaining the data to be used in design and modeling. Traditional methods of measuring reach and force are very labor intensive and require bulky equipment. The AML has developed a series of electro-optical devices for collecting reach data easily, in computer readable form, with portable systems. The systems developed, their use, and data collected with them are described.
A solution to the surface intersection problem. [Boolean functions in geometric modeling
NASA Technical Reports Server (NTRS)
Timer, H. G.
1977-01-01
An application-independent geometric model within a data base framework should support the use of Boolean operators which allow the user to construct a complex model by appropriately combining a series of simple models. The use of these operators leads to the concept of implicitly and explicitly defined surfaces. With an explicitly defined model, the surface area may be computed by simply summing the surface areas of the bounding surfaces. For an implicitly defined model, the surface area computation must deal with active and inactive regions. Because the surface intersection problem involves four unknowns and its solution is a space curve, the parametric coordinates of each surface must be determined as a function of the arc length. Various subproblems involved in the general intersection problem are discussed, and the mathematical basis for their solution is presented along with a program written in FORTRAN IV for implementation on the IBM 370 TSO system.
Software Engineering Laboratory Ada performance study: Results and implications
NASA Technical Reports Server (NTRS)
Booth, Eric W.; Stark, Michael E.
1992-01-01
The SEL is an organization sponsored by NASA/GSFC to investigate the effectiveness of software engineering technologies applied to the development of applications software. The SEL was created in 1977 and has three organizational members: NASA/GSFC, Systems Development Branch; The University of Maryland, Computer Sciences Department; and Computer Sciences Corporation, Systems Development Operation. The goals of the SEL are as follows: (1) to understand the software development process in the GSFC environments; (2) to measure the effect of various methodologies, tools, and models on this process; and (3) to identify and then to apply successful development practices. The activities, findings, and recommendations of the SEL are recorded in the Software Engineering Laboratory Series, a continuing series of reports that include the Ada Performance Study Report. This paper describes the background of Ada in the Flight Dynamics Division (FDD), the objectives and scope of the Ada Performance Study, the measurement approach used, the performance tests performed, the major test results, and the implications for future FDD Ada development efforts.
Scheduling Operations for Massive Heterogeneous Clusters
NASA Technical Reports Server (NTRS)
Humphrey, John; Spagnoli, Kyle
2013-01-01
High-performance computing (HPC) programming has become increasingly difficult with the advent of hybrid supercomputers consisting of multicore CPUs and accelerator boards such as the GPU. Manual tuning of software to achieve high performance on this type of machine has been performed by programmers. This is needlessly difficult and prone to being invalidated by new hardware, new software, or changes in the underlying code. A system was developed for task-based representation of programs, which when coupled with a scheduler and runtime system, allows for many benefits, including higher performance and utilization of computational resources, easier programming and porting, and adaptations of code during runtime. The system consists of a method of representing computer algorithms as a series of data-dependent tasks. The series forms a graph, which can be scheduled for execution on many nodes of a supercomputer efficiently by a computer algorithm. The schedule is executed by a dispatch component, which is tailored to understand all of the hardware types that may be available within the system. The scheduler is informed by a cluster mapping tool, which generates a topology of available resources and their strengths and communication costs. Software is decoupled from its hardware, which aids in porting to future architectures. A computer algorithm schedules all operations, which for systems of high complexity (i.e., most NASA codes), cannot be performed optimally by a human. The system aids in reducing repetitive code, such as communication code, and aids in the reduction of redundant code across projects. It adds new features to code automatically, such as recovering from a lost node or the ability to modify the code while running. In this project, the innovators at the time of this reporting intend to develop two distinct technologies that build upon each other and both of which serve as building blocks for more efficient HPC usage. First is the scheduling and dynamic execution framework, and the second is scalable linear algebra libraries that are built directly on the former.
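A minimal sketch of the task-graph idea described above: computations are expressed as tasks with data dependencies, and a scheduler dispatches ready tasks to heterogeneous workers by earliest finish time. Worker names and cost figures are invented for illustration; this is not the innovators' framework, and communication costs are ignored.

def schedule(tasks, deps, cost, workers=('cpu0', 'cpu1', 'gpu0')):
    # Greedy earliest-finish-time list scheduler over a task DAG.
    # deps[t] = set of prerequisites; cost[(t, w)] = runtime of t on worker w.
    free_at = {w: 0.0 for w in workers}     # when each worker becomes free
    remaining = {t: set(deps.get(t, ())) for t in tasks}
    finish, plan = {}, []
    ready = [t for t in tasks if not remaining[t]]
    while ready:
        t = ready.pop(0)
        data_ready = max((finish[d] for d in deps.get(t, ())), default=0.0)
        w = min(free_at, key=lambda w: max(free_at[w], data_ready) + cost[(t, w)])
        start = max(free_at[w], data_ready)
        free_at[w] = finish[t] = start + cost[(t, w)]
        plan.append((t, w, start))
        for u in tasks:
            if u not in finish and u not in ready:
                remaining[u].discard(t)
                if not remaining[u]:
                    ready.append(u)
    return plan

tasks = ['load', 'fft', 'filter', 'store']
deps = {'fft': {'load'}, 'filter': {'fft'}, 'store': {'filter'}}
cost = {(t, w): c for t, w, c in [
    ('load', 'cpu0', 1.0), ('load', 'cpu1', 1.0), ('load', 'gpu0', 2.0),
    ('fft', 'cpu0', 5.0), ('fft', 'cpu1', 5.0), ('fft', 'gpu0', 0.8),
    ('filter', 'cpu0', 2.0), ('filter', 'cpu1', 2.0), ('filter', 'gpu0', 0.5),
    ('store', 'cpu0', 1.0), ('store', 'cpu1', 1.0), ('store', 'gpu0', 2.5)]}
print(schedule(tasks, deps, cost))   # the FFT and filter land on the GPU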
NASA Astrophysics Data System (ADS)
Brockmann, J. M.; Schuh, W.-D.
2011-07-01
The estimation of the global Earth's gravity field, parametrized as a finite spherical harmonic series, is computationally demanding. The computational effort depends on the one hand on the maximal resolution of the spherical harmonic expansion (i.e. the number of parameters to be estimated) and on the other hand on the number of observations (which run to several millions for, e.g., observations from the GOCE satellite mission). To circumvent these restrictions, massively parallel software based on high-performance computing (HPC) libraries such as ScaLAPACK, PBLAS and BLACS was designed in the context of GOCE HPF WP6000 and the GOCO consortium. A prerequisite for the use of these libraries is that all matrices are block-cyclically distributed on a processor grid comprising a large number of (distributed memory) computers. Using this set of standard HPC libraries has the benefit that, once the matrices are distributed across the computer cluster, a huge set of efficient and highly scalable linear algebra operations can be used.
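For reference, a small sketch of the two-dimensional block-cyclic mapping these libraries assume, computing which process in a Pr x Pc grid owns a given global matrix entry (0-based indices, zero source offsets, matching the convention of ScaLAPACK's INDXG2P). Block sizes and grid shape below are illustrative.

def owner(i, j, mb, nb, prows, pcols):
    # Global entry (i, j) -> owning process (pr, pc) in a 2D
    # block-cyclic distribution with mb x nb blocks.
    return (i // mb) % prows, (j // nb) % pcols

# Example: 64 x 64 blocks on a 4 x 8 process grid.
print(owner(100000, 70000, 64, 64, 4, 8))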
Investigation related to hydrogen isotopes separation by cryogenic distillation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bornea, A.; Zamfirache, M.; Stefanescu, I.
2008-07-15
Research conducted in the last fifty years has shown that one of the most efficient techniques of removing tritium from the heavy water used as moderator and coolant in CANDU reactors (such as that operated at Cernavoda, Romania) is hydrogen cryogenic distillation. Designing and implementing the concept of cryogenic distillation columns requires experiments to be conducted as well as computer simulations. In particular, computer simulations are of great importance when designing and evaluating the performance of a column or a series of columns. Experimental data collected from laboratory work will be used as input for computer simulations run at larger scale (for the Pilot Plant for Tritium and Deuterium Separation) in order to increase confidence in the simulated results. Studies carried out were focused on the following: - Quantitative analyses of important parameters such as the number of theoretical plates, inlet area, reflux flow, extraction flow rates, working pressure, etc. - Columns connected in series in such a way as to fulfil the separation requirements. Experiments were carried out on a laboratory-scale installation to investigate the performance of contact elements with continuous packing. The packing was manufactured in our institute. (authors)
The full spectrum of AdS5/CFT4 I: representation theory and one-loop Q-system
NASA Astrophysics Data System (ADS)
Marboe, Christian; Volin, Dmytro
2018-04-01
With the formulation of the quantum spectral curve for the AdS5/CFT4 integrable system, it became potentially possible to compute its full spectrum with high efficiency. This is the first paper in a series devoted to the explicit design of such computations, with no restrictions to particular subsectors being imposed. We revisit the representation-theoretical classification of possible states in the spectrum and map the symmetry multiplets to solutions of the quantum spectral curve at zero coupling. To this end it is practical to introduce a generalisation of Young diagrams to the case of non-compact representations and define algebraic Q-systems directly on these diagrams. Furthermore, we propose an algorithm to explicitly solve such Q-systems that circumvents the traditional usage of Bethe equations and simplifies the computational effort. For example, our algorithm quickly obtains explicit analytic results for all 495 multiplets that accommodate single-trace operators in N=4 SYM with classical conformal dimension up to 13/2. We plan to use these results as the seed for solving the quantum spectral curve perturbatively to high loop orders in the next paper of the series.
Sirleo, Luigi; Innocenti, Massimo; Innocenti, Matteo; Civinini, Roberto; Carulli, Christian; Matassi, Fabrizio
2018-02-01
To evaluate the feedback from post-operative three-dimensional computed tomography (3D-CT) on femoral tunnel placement in the learning process of achieving an anatomic anterior cruciate ligament (ACL) reconstruction. A series of 60 consecutive patients undergoing primary ACL reconstruction using an autologous hamstrings single-bundle outside-in technique were prospectively included in the study. ACL reconstructions were performed by the same trainee-surgeon during his learning phase of anatomic ACL femoral tunnel placement. A CT scan with a dedicated tunnel study was performed in all patients within 48 h after surgery. The data obtained from the CT scan were processed into a three-dimensional surface model, and a true medial view of the lateral femoral condyle was used for the femoral tunnel placement analysis. Two independent examiners analysed the tunnel placements. The centre of the femoral tunnel was measured using a quadrant method as described by Bernard and Hertel. The coordinates measured were compared with anatomic coordinate values described in the literature [deep-to-shallow distance (X-axis) 28.5%; high-to-low distance (Y-axis) 35.2%]. Tunnel placement was evaluated in terms of accuracy and precision. After each ACL reconstruction, results were shown to the surgeon to provide instant feedback in order to achieve accurate correction and improve tunnel placement at the next surgery. Complications and arthroscopic time were also recorded. Results were divided into three consecutive series (1, 2, 3) of 20 patients each. A trend to placing the femoral tunnel slightly shallow in deep-to-shallow distance and slightly high in high-to-low distance was observed in the first and the second series. A progressive improvement in tunnel position was recorded from the first to the second series and from the second to the third series. Both accuracy (+52.4%) and precision (+55.7%) increased from the first to the third series (p < 0.001). Arthroscopic time decreased from a mean of 105 min in the first series to 57 min in the third series (p < 0.001). After 50 ACL reconstructions, a satisfactory anatomic femoral tunnel was reached. Feedback from post-operative 3D-CT is effective in the learning process to improve accuracy and precision of femoral tunnel placement in order to obtain anatomic ACL reconstruction, and it also helps to reduce arthroscopic time and the learning curve. For clinical relevance, trainee-surgeons should use feedback from post-operative 3D-CT to learn anatomic ACL femoral tunnel placement and apply it appropriately. Consecutive case series, Level IV.
Economic-Analysis Program for a Communication System
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.
1986-01-01
Prices and profits of alternative designs are compared. The objective of the Land Mobile Satellite Service (LMSS) Finance Report program is to provide a means for comparing alternative designs of LMSS systems. The program is a Multiplan worksheet. Labels used in the worksheet were chosen for a satellite-based cellular communication service, but the analysis is not restricted to such cases. LMSS was written for interactive execution with Multiplan (version 1.2) and implemented on an IBM PC series computer operating under DOS (version 2.11).
Programmable Direct-Memory-Access Controller
NASA Technical Reports Server (NTRS)
Hendry, David F.
1990-01-01
Proposed programmable direct-memory-access controller (DMAC) operates with computer systems of 32000 series, which have 32-bit data buses and use addresses of 24 (or potentially 32) bits. Controller functions with or without help of central processing unit (CPU) and starts itself. Includes such advanced features as ability to compare two blocks of memory for equality and to search block of memory for specific value. Made as single very-large-scale integrated-circuit chip.
Power processor for a 30cm ion thruster
NASA Technical Reports Server (NTRS)
Biess, J. J.; Inouye, L. Y.
1974-01-01
A thermal vacuum power processor for the NASA Lewis 30cm Mercury Ion Engine was designed, fabricated and tested to determine compliance with electrical specifications. The power processor breadboard used the silicon controlled rectifier (SCR) series resonant inverter as the basic power stage to process all the power to an ion engine. The power processor includes a digital interface unit to process all input commands and internal telemetry signals so that operation is compatible with a central computer system. The breadboard was tested in a thermal vacuum environment. Integration tests were performed with the ion engine and demonstrate operational compatibility and reliable operation without any component failures. Electromagnetic interference data were also recorded on the design to provide information on the interaction with total spacecraft.
NASA Technical Reports Server (NTRS)
Cooke, C. H.
1975-01-01
STICAP (Stiff Circuit Analysis Program) is a FORTRAN 4 computer program written for the CDC-6400-6600 computer series and SCOPE 3.0 operating system. It provides the circuit analyst a tool for automatically computing the transient responses and frequency responses of large linear time invariant networks, both stiff and nonstiff (algorithms and numerical integration techniques are described). The circuit description and user's program input language is engineer-oriented, making simple the task of using the program. Engineering theories underlying STICAP are examined. A user's manual is included which explains user interaction with the program and gives results of typical circuit design applications. Also, the program structure from a systems programmer's viewpoint is depicted and flow charts and other software documentation are given.
Navab, Nassir; Hennersperger, Christoph; Frisch, Benjamin; Fürst, Bernhard
2016-10-01
In the last decade, many researchers in medical image computing and computer-assisted interventions across the world have focused on the development of the Virtual Physiological Human (VPH), aiming at changing the practice of medicine from the classification and treatment of diseases to the modeling and treatment of patients. These projects resulted in major advancements in segmentation, registration, and morphological, physiological and biomechanical modeling based on state-of-the-art medical imaging as well as other sensory data. However, a major issue which has not yet come into focus is personalizing intra-operative imaging, allowing for optimal treatment. In this paper, we discuss the personalization of the imaging and visualization process with particular focus on satisfying the challenging requirements of computer-assisted interventions. We discuss such requirements and review a series of scientific contributions made by our research team to tackle some of these major challenges. Copyright © 2016. Published by Elsevier B.V.
Fan, Daoqing; Zhu, Xiaoqing; Dong, Shaojun; Wang, Erkang
2017-07-05
DNA is believed to be a promising candidate for molecular logic computation, and the fluorogenic/colorimetric substrates of G-quadruplex DNAzyme (G4zyme) are broadly used as label-free output reporters of DNA logic circuits. Herein, for the first time, tyramine-HCl (a fluorogenic substrate of G4zyme) is applied to DNA logic computation and a series of label-free DNA-input logic gates, including elementary AND, OR, and INHIBIT logic gates, as well as a two to one encoder, are constructed. Furthermore, a DNA caliper that can measure the base number of target DNA as low as three bases is also fabricated. This DNA caliper can also perform concatenated AND-AND logic computation to fulfil the requirements of sophisticated logic computing. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Performance of FORTRAN floating-point operations on the Flex/32 multicomputer
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1987-01-01
A series of experiments has been run to examine the floating-point performance of FORTRAN programs on the Flex/32 (Trademark) computer. The experiments are described, and the timing results are presented. The time required to execute a floating-point operation is found to vary considerably depending on a number of factors. One factor of particular interest from an algorithm design standpoint is the difference in speed between common memory accesses and local memory accesses. Common memory accesses were found to be slower, and guidelines are given for determining when it may be cost effective to copy data from common to local memory.
Computational approach to Thornley's problem by bivariate operational calculus
NASA Astrophysics Data System (ADS)
Bazhlekova, E.; Dimovski, I.
2012-10-01
Thornley's problem is an initial-boundary value problem with a nonlocal boundary condition for the linear one-dimensional reaction-diffusion equation, used as a mathematical model of spiral phyllotaxis in botany. Applying a bivariate operational calculus, we find an explicit representation of the solution containing two convolution products of special solutions with the arbitrary initial and boundary functions. We use a non-classical convolution with respect to the space variable, extending in this way the classical Duhamel principle. The special solutions involved are represented in the form of fast convergent series. Numerical examples are considered to show the application of the present technique and to analyze the character of the solution.
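For orientation, the classical Duhamel principle that the paper extends can be stated as follows (standard textbook form; the notation is generic, not the paper's): for a linear evolution equation $u_t = \mathcal{L}u + f$ with homogeneous initial and boundary data, the solution is a time convolution with the homogeneous solution operator,

$$ u(\cdot, t) = \int_0^t S(t - s)\, f(\cdot, s)\, \mathrm{d}s, $$

where $S(t)$ denotes the solution operator of the homogeneous problem $u_t = \mathcal{L}u$. The non-classical convolution used by the authors plays the analogous role in the space variable, which is what accommodates the nonlocal boundary condition.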
Usability of a Low-Cost Head Tracking Computer Access Method following Stroke.
Mah, Jasmine; Jutai, Jeffrey W; Finestone, Hillel; Mckee, Hilary; Carter, Melanie
2015-01-01
Assistive technology devices for computer access can facilitate social reintegration and promote independence for people who have had a stroke. This work explores the usefulness and acceptability of a new computer access device called the Nouse™ (Nose-as-mouse). The device uses a standard webcam and video recognition algorithms to map the movement of the user's nose to a computer cursor, thereby allowing hands-free computer operation. Ten participants receiving in- or outpatient stroke rehabilitation completed a series of standardized and everyday computer tasks using the Nouse™ and then completed a device usability questionnaire. Task completion rates were high (90%) for computer activities, but only in the absence of time constraints. Most of the participants were satisfied with ease of use (70%) and liked using the Nouse™ (60%), indicating they could resume most of their usual computer activities apart from word-processing using the device. The findings suggest that hands-free computer access devices like the Nouse™ may be an option for people who experience upper motor impairment caused by stroke and are highly motivated to resume personal computing. More research is necessary to further evaluate the effectiveness of this technology, especially in relation to other computer access assistive technology devices.
NASA Astrophysics Data System (ADS)
Boehnlein, Thomas R.; Kramb, Victoria
2018-04-01
Proper formal documentation of computer-acquired NDE experimental data generated during research is critical to the longevity and usefulness of the data. Without documentation describing how and why the data was acquired, NDE research teams lose capabilities such as the ability to generate new information from previously collected data or to provide adequate information so that their work can be replicated by others seeking to validate their research. Despite the critical nature of this issue, NDE data is still being generated in research labs without appropriate documentation. By generating documentation in series with data, equal priority is given to both activities during the research process. One way to achieve this is to use a reactive documentation system (RDS). An RDS prompts an operator to document the data as it is generated rather than relying on the operator to decide when and what to document. This paper discusses how such a system can be implemented in a dynamic environment made up of in-house and third-party NDE data acquisition systems without creating additional burden on the operator. The reactive documentation approach presented here is agnostic enough that the principles can be applied to any operator-controlled, computer-based data acquisition system.
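A toy sketch of the reactive idea, assuming a Python-scripted acquisition system (the paper describes the RDS conceptually; the decorator, prompt, and field names here are invented): each acquisition call refuses to complete until the operator supplies the metadata record that documents it.

import json, time

REQUIRED = ('operator', 'specimen', 'purpose')   # assumed metadata fields

def reactive_doc(acquire):
    # Wrap an acquisition function so documentation is captured in
    # series with the data, not deferred to the operator's discretion.
    def wrapper(*args, **kwargs):
        meta = {k: input(f'{k}: ').strip() for k in REQUIRED}
        if not all(meta.values()):
            raise ValueError('acquisition blocked: documentation incomplete')
        data = acquire(*args, **kwargs)
        meta['timestamp'] = time.time()
        with open('acquisition_log.json', 'a') as log:
            log.write(json.dumps(meta) + '\n')
        return data
    return wrapper

@reactive_doc
def acquire_waveform(channel):
    ...  # would talk to the DAQ hardware here
    return [0.0] * 1024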
SeaWiFS long-term solar diffuser reflectance and sensor noise analyses.
Eplee, Robert E; Patt, Frederick S; Barnes, Robert A; McClain, Charles R
2007-02-10
The NASA Ocean Biology Processing Group's Calibration and Validation (Cal/Val) team has undertaken an analysis of the mission-long Sea-Viewing Wide Field-of-View Sensor (SeaWiFS) solar calibration time series to assess the long-term degradation of the solar diffuser reflectance over 9 years on orbit. The SeaWiFS diffuser is an aluminum plate coated with YB71 paint. The bidirectional reflectance distribution function of the diffuser was not fully characterized before launch, so the Cal/Val team has implemented a regression of the solar incidence angles and the drift in the node of the satellite's orbit against the diffuser time series to correct for solar incidence angle effects. An exponential function with a time constant of 200 days yields the best fit to the diffuser time series. The decrease in diffuser reflectance over the mission is wavelength dependent, ranging from 9% in the blue (412 nm) to 5% in the red and near infrared (670-865 nm). The Cal/Val team has developed a methodology for computing the signal-to-noise ratio (SNR) for SeaWiFS on orbit from the diffuser time series corrected for both the varying solar incidence angles and the diffuser reflectance degradation. A sensor noise model is used to compare on-orbit SNRs computed for radiances reflected from the diffuser with prelaunch SNRs measured at typical radiances specified for the instrument. To within the uncertainties in the measurements, the SNRs for SeaWiFS have not changed over the mission. The on-orbit performance of the SeaWiFS solar diffuser should offer insight into the long-term on-orbit performance of solar diffusers on other instruments, such as the Moderate-Resolution Imaging Spectrometer [currently flying on the Earth Observing System (EOS) Terra and Aqua satellites], the Visible and Infrared Radiometer Suite [scheduled to fly on the NASA National Polar-orbiting Operational Environmental Satellite System (NPOESS) and NPOESS Preparatory Project (NPP) satellites] and the Advanced Baseline Imager [scheduled to fly on the National Oceanic and Atmospheric Administration Geostationary Environmental Operational Satellite Series R (GOES-R) satellites].
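The degradation model described above, an exponential approach to an asymptote with a 200-day time constant, can be fit to a corrected reflectance time series in a few lines. The sketch below uses synthetic data mimicking the reported ~9% drop at 412 nm; it is not the Cal/Val team's code.

import numpy as np
from scipy.optimize import curve_fit

def degradation(t, a, b, tau=200.0):
    # Exponential approach to an asymptote a with amplitude b and a
    # fixed 200-day time constant, as reported for the diffuser fits.
    return a + b * np.exp(-t / tau)

# Synthetic 9-year daily series (t in days) with small measurement noise.
t = np.arange(0.0, 9 * 365.25)
r = degradation(t, 0.91, 0.09) + np.random.normal(0.0, 0.002, t.size)
(a, b), _ = curve_fit(lambda t, a, b: degradation(t, a, b), t, r)
print(a, b)   # recovered asymptote and decaying amplitude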
Lilienthal, S.; Klein, M.; Orbach, R.; Willner, I.; Remacle, F.
2017-01-01
The concentration of molecules can be changed by chemical reactions and thereby offers a continuous readout. Yet computer architecture is cast in textbooks in terms of binary-valued, Boolean variables. To enable reactive chemical systems to compute, we show how, using the Cox interpretation of probability theory, one can transcribe the equations of chemical kinetics as a sequence of coupled logic gates operating on continuous variables. It is discussed how the distinct chemical identity of a molecule allows us to create a common language for chemical kinetics and Boolean logic. Specifically, the logic AND operation is shown to be equivalent to a bimolecular process. The logic XOR operation represents chemical processes that take place concurrently. The values of the rate constants enter the logic scheme as inputs. By designing a reaction scheme with a feedback we endow the logic gates with a built-in memory, because their output then depends on the input and also on the present state of the system. Technically, such a logic machine is an automaton. We report an experimental realization of three such coupled automata using a DNAzyme multilayer signaling cascade. A simple model verifies analytically that our experimental scheme provides an integrator generating a power series that is third order in time. The model identifies two parameters that govern the kinetics and shows how the initial concentrations of the substrates are the coefficients in the power series. PMID:28507669
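The closing claim, that a three-layer cascade integrates to a power series third order in time, can be checked with a toy rate model: for a zeroth-order feed k1 and linear transfer steps k2, k3, one gets A = k1 t, B = k1 k2 t^2/2, and C = k1 k2 k3 t^3/6. The sketch below integrates such a cascade numerically; the rate constants are arbitrary and the model is a deliberate simplification of the DNAzyme chemistry.

import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3 = 0.5, 0.2, 0.1   # arbitrary rate constants

def cascade(t, y):
    A, B, C = y
    # Three concurrent steps: constant feed into A, then linear transfers.
    return [k1, k2 * A, k3 * B]

sol = solve_ivp(cascade, (0.0, 10.0), [0.0, 0.0, 0.0], dense_output=True)
t = 10.0
print(sol.sol(t)[2], k1 * k2 * k3 * t**3 / 6)   # numeric vs analytic C(t)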
An End-To-End Test of A Simulated Nuclear Electric Propulsion System
NASA Technical Reports Server (NTRS)
VanDyke, Melissa; Hrbud, Ivana; Goddfellow, Keith; Rodgers, Stephen L. (Technical Monitor)
2002-01-01
The Safe Affordable Fission Engine (SAFE) test series addresses Phase I Space Fission Systems issues, in particular non-nuclear testing and system integration issues, leading to the testing and non-nuclear demonstration of a 400-kW fully integrated flight unit. The first part of the SAFE 30 test series demonstrated operation of the simulated nuclear core and heat pipe system. Experimental data acquired in a number of different test scenarios will validate existing computational models, demonstrate system flexibility (fast start-ups, multiple start-ups/shut-downs), and simulate predictable failure modes and operating environments. The objective of the second part is to demonstrate an integrated propulsion system consisting of a core, conversion system and a thruster, where the system converts thermal heat into jet power. This end-to-end system demonstration sets a precedent for ground testing of nuclear electric propulsion systems. The paper describes the SAFE 30 end-to-end system demonstration and its subsystems.
31 CFR 351.66 - What book-entry Series EE savings bonds are included in the computation?
Code of Federal Regulations, 2010 CFR
2010-07-01
OFFERING OF UNITED STATES SAVINGS BONDS, SERIES EE; Book-Entry Series EE Savings Bonds. § 351.66 What book-entry Series EE savings bonds are included in the computation? (a) We include all bonds that...
31 CFR 351.66 - What book-entry Series EE savings bonds are included in the computation?
Code of Federal Regulations, 2011 CFR
2011-07-01
OFFERING OF UNITED STATES SAVINGS BONDS, SERIES EE; Book-Entry Series EE Savings Bonds. § 351.66 What book-entry Series EE savings bonds are included in the computation? (a) We include all bonds that...
ERIC Educational Resources Information Center
Science Teacher, 1989
1989-01-01
Reviews seven software programs: (1) "Science Baseball: Biology" (testing a variety of topics); (2) "Wildways: Understanding Wildlife Conservation"; (3) "Earth Science Computer Test Bank"; (4) "Biology Computer Test Bank"; (5) "Computer Play & Learn Series" (a series of drill and test…
Simulated trajectories error analysis program, version 2. Volume 2: Programmer's manual
NASA Technical Reports Server (NTRS)
Vogt, E. D.; Adams, G. L.; Working, M. M.; Ferguson, J. B.; Bynum, M. R.
1971-01-01
A series of three computer programs for the mathematical analysis of navigation and guidance of lunar and interplanetary trajectories was developed. All three programs require the integration of n-body trajectories for both interplanetary and lunar missions. The virtual mass technique is used in all three programs. The user's manual contains the information necessary to operate the programs. The input and output quantities of the programs are described. Sample cases are given and discussed.
ERIC Educational Resources Information Center
Congress of the U. S., Washington, DC. House Committee on Government Operations.
This document provides a complete record of testimony presented at a series of hearings before the U.S. Congress on the electronic collection and dissemination of information by federal agencies. In looking at the effect of new computer and communications technology on government information activities and practices, the hearings considered such…
1986-10-31
(A reference card given to participants summarized the Cognoter interface: Select = LeftButton; Menu = MiddleButton; a TitleBar menu for tool operations; an Item menu for item...) To study collaborative tools and their uses, the Colab system and the Cognoter presentation tool were implemented and used for both real and posed idea-organization tasks. To test the system design and its effect on structured problem-solving, many early Colab/Cognoter meetings were monitored and a series of
Simplification of multiple Fourier series - An example of algorithmic approach
NASA Technical Reports Server (NTRS)
Ng, E. W.
1981-01-01
This paper describes one example of multiple Fourier series which originate from a problem of spectral analysis of time series data. The example is exercised here with an algorithmic approach which can be generalized for other series manipulation on a computer. The generalized approach is presently pursued towards applications to a variety of multiple series and towards a general purpose algorithm for computer algebra implementation.
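As a flavor of what such computer-algebra manipulation involves, the snippet below uses SymPy's product-to-sum rewriting to collapse a product of cosines, the elementary step underlying simplification of multiple Fourier series. This is a generic illustration, not the paper's algorithm.

import sympy as sp
from sympy.simplify.fu import TR8

a, b, t = sp.symbols('a b t')
expr = sp.cos(a * t) * sp.cos(b * t)
# Product-to-sum rewriting: cos(x)cos(y) -> (cos(x - y) + cos(x + y)) / 2.
print(TR8(expr))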
31 CFR 359.51 - What book-entry Series I savings bonds are included in the computation?
Code of Federal Regulations, 2011 CFR
2011-07-01
OFFERING OF UNITED STATES SAVINGS BONDS, SERIES I; Book-Entry Series I Savings Bonds. § 359.51 What book-entry Series I savings bonds are included in the computation? (a) We include all bonds that you...
Benchmarking of Neutron Production of Heavy-Ion Transport Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence
Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in the design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.
Benchmarking of Heavy Ion Transport Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence
Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and in the design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on the comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.
NASA Astrophysics Data System (ADS)
Dimova, Dilyana; Bajorath, Jürgen
2017-07-01
Computational scaffold hopping aims to identify core structure replacements in active compounds. To evaluate scaffold hopping potential in principle, regardless of the computational methods that are applied, a global analysis of conventional scaffolds in analog series from compound activity classes was carried out. The majority of analog series was found to contain multiple scaffolds, thus enabling the detection of intra-series scaffold hops among closely related compounds. More than 1000 activity classes were found to contain increasing proportions of multi-scaffold analog series. Thus, using such activity classes for scaffold hopping analysis is likely to overestimate the scaffold hopping (core structure replacement) potential of computational methods, owing to an abundance of artificial scaffold hops that are possible within analog series.
Stochastic uncertainty analysis for unconfined flow systems
Liu, Gaisheng; Zhang, Dongxiao; Lu, Zhiming
2006-01-01
A new stochastic approach proposed by Zhang and Lu (2004), called the Karhunen-Loeve decomposition-based moment equation (KLME), has been extended to solving nonlinear, unconfined flow problems in randomly heterogeneous aquifers. This approach is based on an innovative combination of Karhunen-Loeve decomposition, polynomial expansion, and perturbation methods. The random log-transformed hydraulic conductivity field (lnKS) is first expanded into a series in terms of orthogonal Gaussian standard random variables, with the coefficients obtained from the eigenvalues and eigenfunctions of the covariance function of lnKS. Next, the head h is decomposed as a perturbation expansion series Σh(m), where h(m) represents the mth-order head term with respect to the standard deviation of lnKS. Then h(m) is further expanded into a polynomial series of m products of orthogonal Gaussian standard random variables whose coefficients hi1,i2,...,im(m) are deterministic and solved sequentially from low to high expansion orders using MODFLOW-2000. Finally, the statistics of head and flux are computed using simple algebraic operations on hi1,i2,...,im(m). A series of numerical test results in 2-D and 3-D unconfined flow systems indicated that the KLME approach is effective in estimating the mean and (co)variance of both heads and fluxes and requires much less computational effort than the traditional Monte Carlo simulation technique.
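The two expansions described in the abstract can be written compactly as follows (notation reconstructed from the prose; the flattened symbols lnKS, Σh(m) and hi1,i2,...,im(m) correspond to the quantities below):

```latex
% KL expansion of the log-conductivity field:
\ln K_S(\mathbf{x}) = \langle \ln K_S \rangle(\mathbf{x})
    + \sum_{n=1}^{\infty} \sqrt{\lambda_n}\, f_n(\mathbf{x})\, \xi_n ,
% where \lambda_n, f_n are the eigenvalues and eigenfunctions of the
% covariance of \ln K_S and the \xi_n are orthogonal standard Gaussians.

% Perturbation expansion of the head and its polynomial refinement:
h = \sum_{m=0}^{\infty} h^{(m)}, \qquad
h^{(m)} = \sum_{i_1,\dots,i_m} h^{(m)}_{i_1 i_2 \cdots i_m}(\mathbf{x})\,
          \xi_{i_1}\xi_{i_2}\cdots\xi_{i_m},
% with deterministic coefficients h^{(m)}_{i_1...i_m} solved order by
% order (here with MODFLOW-2000).
```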
Xu, Fei-Fan; Chen, Jin-Hong; Leung, Gilberto Ka Kit; Hao, Shu-Yu; Xu, Long; Hou, Zong-Gang; Mao, Xiang; Shi, Guang-Zhi; Li, Jing-Sheng; Liu, Bai-Yun
2014-01-01
Post-operative volume of subdural fluid is considered to correlate with recurrence in chronic subdural haematoma (CSDH). Information on the applications of computer-assisted volumetric analysis in patients with CSDHs is lacking. The aim was to investigate the relationship between haematoma recurrence and longitudinal changes in subdural fluid volume using CT volumetric analysis. Fifty-four patients harbouring 64 CSDHs were studied prospectively. The association between recurrence rate and CT findings was investigated. Eleven patients (20.4%) experienced post-operative recurrence. Higher pre-operative (over 120 ml) and/or pre-discharge subdural fluid volumes (over 22 ml) were significantly associated with recurrence; the probabilities of non-recurrence for values below these thresholds were 92.7% and 95.2%, respectively. CSDHs with larger pre-operative (over 15.1 mm) and/or residual (over 11.7 mm) widths also had significantly increased recurrence rates. Bilateral CSDHs were not found to be more likely to recur in this series. On receiver-operating characteristic curve analysis, the areas under the curve for the magnitude of changes in subdural fluid volume were greater than for a single time-point measure of either width or volume of the subdural fluid cavity. Close imaging follow-up is important in CSDH patients for recurrence prediction. Using quantitative CT volumetric analysis, strong evidence was provided that changes in the residual fluid volume during the 'self-resolution' period can be used as significant radiological predictors of recurrence.
Why the null matters: statistical tests, random walks and evolution.
Sheets, H D; Mitchell, C E
2001-01-01
A number of statistical tests have been developed to determine what type of dynamics underlie observed changes in morphology in evolutionary time series, based on the pattern of change within the time series. The theory of the 'scaled maximum', the 'log-rate-interval' (LRI) method, and the Hurst exponent all operate on the same principle of comparing the maximum change, or rate of change, in the observed dataset to the maximum change expected of a random walk. Less change in a dataset than expected of a random walk has been interpreted as indicating stabilizing selection, while more change implies directional selection. The 'runs test', in contrast, operates on the sequencing of steps rather than on excursion. Applications of these tests to computer-generated, simulated time series of known dynamical form and various levels of additive noise indicate that there is a fundamental asymmetry in the rate of type II errors of the tests based on excursion: they are all highly sensitive to noise in models of directional selection that result in a linear trend within a time series, but are largely noise-immune in the case of a simple model of stabilizing selection. Additionally, the LRI method has a lower sensitivity than originally claimed, due to the large range of LRI rates produced by random walks. Examination of the published results of these tests shows that they have seldom produced a conclusion that an observed evolutionary time series was due to directional selection, a result which needs closer examination in light of the asymmetric response of these tests.
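A minimal Monte Carlo sketch of the excursion-based null test described above, assuming a Gaussian random-walk null and a hypothetical morphological series (illustrative only, not the authors' implementations of the scaled-maximum or LRI methods):

```python
import numpy as np

rng = np.random.default_rng(0)

def max_excursion(series):
    """Maximum absolute displacement from the starting value."""
    return np.max(np.abs(series - series[0]))

def random_walk_null(n_steps, step_sd, n_sims=10000):
    """Monte Carlo distribution of the maximum excursion of an unbiased
    Gaussian random walk with the same length and step scale."""
    walks = np.cumsum(rng.normal(0.0, step_sd, (n_sims, n_steps)), axis=1)
    return np.max(np.abs(walks), axis=1)

# Hypothetical morphological time series with a weak linear trend.
observed = np.cumsum(rng.normal(0.05, 1.0, 200))
null = random_walk_null(len(observed), step_sd=np.std(np.diff(observed)))

p = np.mean(null >= max_excursion(observed))
print(f"P(random-walk excursion >= observed) = {p:.3f}")
# Small p: more excursion than a random walk (suggests directional change);
# p near 1: less excursion (consistent with stabilizing selection/stasis).
```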
Reprint Series: Computation of Pi. RS-7.
ERIC Educational Resources Information Center
Schaaf, William L., Ed.
This is one in a series of SMSG supplementary and enrichment pamphlets for high school students. This series makes available expository articles which appeared in a variety of mathematical periodicals. Topics covered include: (1) the latest about pi; (2) a series useful in the computation of pi; (3) an ENIAC determination of pi and e to more than…
Recursive linearization of multibody dynamics equations of motion
NASA Technical Reports Server (NTRS)
Lin, Tsung-Chieh; Yae, K. Harold
1989-01-01
The equations of motion of a multibody system are nonlinear in nature and thus pose a difficult problem in linear control design. One approach is to obtain a first-order approximation through numerical perturbations at a given configuration and to design a control law based on the linearized model. Here, a linearized model is generated analytically by following the footsteps of the recursive derivation of the equations of motion. The equations of motion are first written in a Newton-Euler form, which is systematic and easy to construct; then they are transformed into a relative coordinate representation, which is more efficient in computation. A new computational method for linearization is obtained by applying a series of first-order analytical approximations to the recursive kinematic relationships. The method has proved to be computationally more efficient because of its recursive nature. It has also turned out to be more accurate because analytical perturbation circumvents numerical differentiation and the associated numerical operations that may accumulate computational error, requiring only analytical operations on matrices and vectors. The power of the proposed linearization algorithm is demonstrated, in comparison to a numerical perturbation method, with a two-link manipulator and a seven-degree-of-freedom robotic manipulator. Its application to control design is also demonstrated.
Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.
2009-01-01
In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
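A schematic of the two-step model selection described above, with made-up calibration samples and a simple residual-error comparison standing in for the formal MSPE criterion and significance test:

```python
import numpy as np

def fit_ols(predictors, y):
    """Ordinary least-squares fit; returns coefficients and residuals."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

# Hypothetical calibration samples (log-transformed, as is common for
# turbidity-sediment models; these are not USGS data).
log_turb = np.log([12, 35, 80, 150, 400, 900.0])
log_q    = np.log([5, 9, 20, 55, 120, 300.0])      # streamflow
log_ssc  = np.log([10, 28, 70, 160, 420, 1000.0])  # suspended sediment

beta_s, r_s = fit_ols([log_turb], log_ssc)           # simple linear model
beta_m, r_m = fit_ols([log_turb, log_q], log_ssc)    # turbidity + streamflow

# Keep the simpler model unless adding streamflow clearly reduces the
# residual standard error (a stand-in for the MSPE/significance checks).
print(np.std(r_s, ddof=2), np.std(r_m, ddof=3))
```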
NASA Technical Reports Server (NTRS)
Kast, J. R.
1988-01-01
The Upper Atmosphere Research Satellite (UARS) is a three-axis stabilized Earth-pointing spacecraft in a low-Earth orbit. The UARS onboard computer (OBC) uses a Fourier Power Series (FPS) ephemeris representation that includes 42 position and 42 velocity coefficients per axis, with position residuals at 10-minute intervals. New coefficients and 32 hours of residuals are uploaded daily. This study evaluated two backup methods that permit the OBC to compute an approximate spacecraft ephemeris in the event that new ephemeris data cannot be uplinked for several days: (1) extending the use of the FPS coefficients previously uplinked, and (2) switching to a simple circular orbit approximation designed and tested (but not implemented) for LANDSAT-D. The FPS method provides greater accuracy during the backup period and does not require additional ground operational procedures for generating and uplinking an additional ephemeris table. The tradeoff is that the high accuracy of the FPS will be degraded slightly by adopting the longer fit period necessary to obtain backup accuracy for an extended period of time. The results for UARS show that extended use of the FPS is superior to the circular orbit approximation for short-term ephemeris backup.
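A sketch of evaluating one axis of a Fourier Power Series ephemeris. The abstract does not give the exact on-board formulation (secular terms, time scaling), so the plain harmonic form below is an assumption:

```python
import numpy as np

def fps_position(t, a, b, period):
    """One axis of an FPS ephemeris: a truncated Fourier series in time.

    a and b hold cosine/sine coefficients for harmonics 1..K of the
    orbital period; the UARS OBC carried 42 coefficients per axis.
    """
    k = np.arange(1, len(a) + 1)
    w = 2.0 * np.pi * k * t / period
    return np.sum(a * np.cos(w) + b * np.sin(w))

# Toy coefficients: a near-circular orbit dominated by the first harmonic.
a = np.zeros(42)
b = np.zeros(42)
a[0] = 7000.0e3  # metres; fundamental cosine term only
print(fps_position(0.0, a, b, period=5760.0))  # ~96-minute orbit, in seconds
```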
1994-01-01
linear non-differential equations in series. This makes it easier to control the result, and an exact and accurate solution is obtained without... battery operated and controlled by an industry-standard computer [16]. The HF unit contains a step-recovery diode transmitter and two quasi-TEM antennas... [16]. All of these procedures can take advantage of exact non-linear analysis or experimental power characterization and are therefore "full non...
Analytic War Plans: Adaptive Force-Employment Logic in the RAND Strategy Assessment System (RSAS)
1990-07-01
Agents already mentioned, there is a Control Agent, which can act for the RSAS user in doing things the user would otherwise do interactively with the... single or series of connected operations to be carried out simultaneously or in succession. It is usually based upon stated assumptions and is the... which can act for the RSAS user in doing things the user would otherwise do interactively with the computer. The Control Agent has three modes of...
DeHart, Mark D.; Baker, Benjamin A.; Ortensi, Javier
2017-07-27
The Transient Test Reactor (TREAT) at Idaho National Laboratory will resume operations in late 2017 after a 23 year hiatus during which it was maintained in a cold standby state. Over that time period, computational power and simulation capabilities have increased substantially and now allow for new multiphysics modeling possibilities that were not practical or feasible for most of TREAT's operational history. Hence the return of TREAT to operational service provides a unique opportunity to apply state-of-the-art software and associated methods in the modeling and simulation of general three-dimensional steady state and kinetic behavior for reactor operation, and for coupling of the core power transient model to experiment simulations. However, measurements taken in previous operations were intended to predict power deposition in experimental samples, with little consideration of three-dimensional core power distributions. Hence, interpretation of the data for the purpose of validating modern methods can be challenging. For the research discussed herein, efforts are described for the process of proper interpretation of data from the most recent calibration experiments performed in the core, the M8 calibration series (M8-CAL). These measurements were taken between 1990 and 1993 using a set of fission wires and test fuel pins to estimate the power deposition that would be produced in fast reactor test fuel pins during the M8 experiment series. Because of the decision to place TREAT into a standby state in 1994, the M8 series of transients was never performed. However, potentially valuable information relevant for validation is available in the M8-CAL measurement data, if properly interpreted. This article describes the current state of the process of recovering useful data from the M8-CAL measurements and quantifying biases and uncertainties to potentially apply to the validation of multiphysics methods.
Water sprays in space retrieval operations. [for disabled spacecraft detumbling and despinning
NASA Technical Reports Server (NTRS)
Freesland, D. C.
1978-01-01
The water spray technique (WST) for nullifying the angular momentum of a disabled spacecraft is examined. Such a despinning operation is necessary before a disabled spacecraft can be retrieved by the Space Shuttle. The WST involving the use of liquid sprays appears to be less complex and costly than other techniques proposed to despin a disabled vehicle. A series of experiments have been conducted to determine physical properties of water sprays exhausting into a vacuum. A computer model is built which together with the experimental results yields satellite despin performance parameters. The selection and retrieval of an actual disabled spacecraft is considered to demonstrate an application of the WST.
The increasing influence of medical image processing in clinical neuroimaging.
Barillot, Christian
2007-01-20
This paper reviews the evolution of the clinical neuroinformatics domain in the past and gives an outlook on how this research field will evolve in clinical neurology (e.g. epilepsy, multiple sclerosis, dementia) and neurosurgery (e.g. image-guided surgery, intra-operative imaging, the definition of the operating room of the future). These different issues, as addressed by the VisAGeS research team, are discussed in more detail, and the benefits of a close collaboration between clinical scientists (radiologists, neurologists, and neurosurgeons) and computer scientists are shown to give adequate answers to the series of problems that need to be solved for a more effective use of medical images in clinical neurosciences.
ACTS 118x: High Speed TCP Interoperability Testing
NASA Technical Reports Server (NTRS)
Brooks, David E.; Buffinton, Craig; Beering, Dave R.; Welch, Arun; Ivancic, William D.; Zernic, Mike; Hoder, Douglas J.
1999-01-01
With the recent explosion of the Internet and the enormous business opportunities available to communication system providers, great interest has developed in improving the efficiency of data transfer over satellite links using the Transmission Control Protocol (TCP) of the Internet Protocol (IP) suite. NASA's ACTS experiments program initiated a series of TCP experiments to demonstrate the scalability of TCP/IP and determine to what extent the protocol can be optimized over a 622 Mbps satellite link. Through partnerships with government technology-oriented labs and the computer, telecommunication, and satellite industries, NASA Glenn was able to: (1) promote the development of interoperable, high-performance TCP/IP implementations across multiple computing/operating platforms; (2) work with the satellite industry to answer outstanding questions regarding the use of standard protocols (TCP/IP and ATM) for the delivery of advanced data services and for use in spacecraft architectures; and (3) conduct a series of TCP/IP interoperability tests over OC12 ATM over a satellite network in a multi-vendor environment using ACTS. The experiments' various network configurations and the results are presented.
State-Space Analysis of Granger-Geweke Causality Measures with Application to fMRI.
Solo, Victor
2016-05-01
The recent interest in the dynamics of networks and the advent, across a range of applications, of measuring modalities that operate on different temporal scales have put the spotlight on some significant gaps in the theory of multivariate time series. Fundamental to the description of network dynamics is the direction of interaction between nodes, accompanied by a measure of the strength of such interactions. Granger causality and its associated frequency domain strength measures (GEMs) (due to Geweke) provide a framework for the formulation and analysis of these issues. In pursuing this setup, three significant unresolved issues emerge. First, computing GEMs involves computing submodels of vector time series models, for which reliable methods do not exist. Second, the impact of filtering on GEMs has never been definitively established. Third, the impact of downsampling on GEMs has never been established. In this work, using state-space methods, we resolve all these issues and illustrate the results with some simulations. Our analysis is motivated by some problems in (fMRI) brain imaging, to which we apply it, but it is of general applicability.
Cui, Yiqian; Shi, Junyou; Wang, Zili
2015-11-01
Quantum Neural Network (QNN) models have attracted great attention because they introduce a new neural computing paradigm based on quantum entanglement. However, existing QNN models are mainly based on real quantum operations, and the potential of quantum entanglement is not fully exploited. In this paper, we propose a novel quantum neuron model called the Complex Quantum Neuron (CQN) that realizes a deeper quantum entanglement. We also propose a novel hybrid network model, Complex Rotation Quantum Dynamic Neural Networks (CRQDNN), based on the CQN. CRQDNN is a three-layer model with both CQNs and classical neurons. An infinite impulse response (IIR) filter is embedded in the network model to provide the memory needed to process time series inputs, and the Levenberg-Marquardt (LM) algorithm is used for fast parameter learning. The network model is developed to conduct time series predictions. Two application studies are presented: chaotic time series prediction and electronic remaining useful life (RUL) prediction.
Multiscale entropy-based methods for heart rate variability complexity analysis
NASA Astrophysics Data System (ADS)
Silva, Luiz Eduardo Virgilio; Cabella, Brenno Caetano Troca; Neves, Ubiraci Pereira da Costa; Murta Junior, Luiz Otavio
2015-03-01
Physiologic complexity is an important concept for characterizing time series from biological systems, and combined with multiscale analysis it can contribute to the comprehension of many complex phenomena. Although multiscale entropy has been applied to physiological time series, it measures irregularity as a function of scale. In this study we propose and evaluate a set of three complexity metrics as functions of time scale. The complexity metrics are derived from nonadditive entropy supported by the generation of surrogate data, i.e. SDiffqmax, qmax and qzero. In order to assess the accuracy of the proposed complexity metrics, receiver operating characteristic (ROC) curves were built and the areas under the curves were computed for three physiological situations. Heart rate variability (HRV) time series from normal sinus rhythm, atrial fibrillation, and congestive heart failure data sets were analyzed. Results show that the proposed complexity metrics are accurate and robust when compared to classic entropic irregularity metrics. Furthermore, SDiffqmax is the most accurate for lower scales, whereas qmax and qzero are the most accurate when higher time scales are considered. The multiscale complexity analysis described here shows potential for assessing complex physiological time series and deserves further investigation in a wide context.
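The paper's metrics derive from nonadditive (Tsallis-type) entropy, which is more involved; as a simpler, self-contained illustration of the multiscale pipeline they build on, the sketch below coarse-grains a surrogate RR-interval series and computes a classic sample entropy at each scale (this is not the SDiffqmax/qmax/qzero computation):

```python
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping averages of length `scale` (standard multiscale step)."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r_frac=0.15):
    """Simplified SampEn: -ln(A/B), with tolerance r = r_frac * std(x)."""
    r = r_frac * np.std(x)
    def pair_count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)  # Chebyshev
        return (np.sum(d <= r) - len(templ)) / 2  # exclude self-matches
    b, a = pair_count(m), pair_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(1)
rr = rng.normal(0.8, 0.05, 1000)  # surrogate RR-interval series, in seconds
for scale in (1, 2, 5, 10):
    print(scale, sample_entropy(coarse_grain(rr, scale)))
```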
Koltun, G.F.
2015-01-01
Streamflow hydrographs were plotted for modeled/computed time series for the Ohio River near the USGS Sardis gage and the Ohio River at the Hannibal Lock and Dam. In general, the time series at these two locations compared well. Notable differences include the exclusive presence of short periods of negative streamflows in the USGS 15-minute time-series data for the gage on the Ohio River above Sardis, Ohio, and the occurrence of several peak streamflows in the USACE gate/hydropower time series for the Hannibal Lock and Dam that were appreciably larger than corresponding peaks in the other time series, including those modeled/computed for the downstream Sardis gage.
NASA Technical Reports Server (NTRS)
Rising, J. J.; Kairys, A. A.; Maass, C. A.; Siegart, C. D.; Rakness, W. L.; Mijares, R. D.; King, R. W.; Peterson, R. S.; Hurley, S. R.; Wickson, D.
1982-01-01
A limited authority pitch active control system (PACS) was developed for a wide body jet transport (L-1011) with a flying horizontal stabilizer. Two dual channel digital computers and the associated software provide command signals to a dual channel series servo which controls the stabilizer power actuators. Input sensor signals to the computer are pitch rate, column-trim position, and dynamic pressure. Control laws are given for the PACS and the system architecture is defined. The piloted flight simulation and vehicle system simulation tests performed to verify control laws and system operation prior to installation on the aircraft are discussed. Modifications to the basic aircraft are described. Flying qualities of the aircraft with the PACS on and off were evaluated. Handling qualities for cruise and high-speed flight conditions with the c.g. at 39% MAC (+1% stability margin) and PACS operating were judged to be as good as the handling qualities with the c.g. at 25% MAC (+15% stability margin) and PACS off.
NASA Technical Reports Server (NTRS)
Roberts, Christopher L.; Smith, Sonya T.; Vicroy, Dan D.
2000-01-01
Several of our major airports are operating at or near their capacity limit, increasing congestion and delays for travelers. As a result, the National Aeronautics and Space Administration (NASA) has been working in conjunction with the Federal Aviation Administration (FAA), airline operators, and the airline industry to increase airport capacity and safety. As more and more airplanes are placed into the terminal area, the probability of encountering wake turbulence is increased. The NASA Langley Research Center conducted a series of flight tests from 1995 through 1997 to develop a wake-encounter and wake-measurement data set with the accompanying atmospheric state information. The purpose of this research is to use the data from those flights to compute the wake-induced forces and moments exerted on the aircraft. The calculated forces and moments will then be compiled into a database that can be used by wake vortex researchers for comparison with experimental and computational results.
An implicit spatial and high-order temporal finite difference scheme for 2D acoustic modelling
NASA Astrophysics Data System (ADS)
Wang, Enjiang; Liu, Yang
2018-01-01
The finite difference (FD) method exhibits great superiority over other numerical methods due to its easy implementation and small computational requirement. We propose an effective FD method, characterised by implicit spatial and high-order temporal schemes, to reduce both the temporal and spatial dispersions simultaneously. For the temporal derivative, apart from the conventional second-order FD approximation, a special rhombus FD scheme is included to reach high-order accuracy in time. Compared with the Lax-Wendroff FD scheme, this scheme can achieve nearly the same temporal accuracy but requires fewer floating-point operations, and thus less computational cost, when the same operator length is adopted. For the spatial derivatives, we adopt the implicit FD scheme to improve the spatial accuracy. Apart from the existing Taylor-series-expansion-based FD coefficients, we derive implicit spatial FD coefficients based on least-squares optimisation. Dispersion analysis and modelling examples demonstrate that our proposed method can effectively decrease both the temporal and spatial dispersions and thus can provide more accurate wavefields.
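For context, the explicit Taylor-series FD weights that the optimised implicit coefficients are compared against can be obtained from a small linear solve (a standard construction, not code from the paper):

```python
import math
import numpy as np

def fd_coefficients(offsets, order):
    """Taylor-series FD weights w_j for the `order`-th derivative.

    Solves sum_j w_j * s_j**k / k! = delta(k, order) for k = 0..n-1, so
    that sum_j w_j * f(x + s_j*h) ~ h**order * f^(order)(x); divide the
    weights by h**order to approximate the derivative itself.
    """
    s = np.asarray(offsets, dtype=float)
    n = len(s)
    A = np.array([s**k / math.factorial(k) for k in range(n)])
    rhs = np.zeros(n)
    rhs[order] = 1.0
    return np.linalg.solve(A, rhs)

# Second derivative on a symmetric 5-point stencil:
# expected weights [-1/12, 4/3, -5/2, 4/3, -1/12].
print(fd_coefficients([-2, -1, 0, 1, 2], order=2))
```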
NASADIG - NASA DEVICE INDEPENDENT GRAPHICS LIBRARY (AMDAHL VERSION)
NASA Technical Reports Server (NTRS)
Rogers, J. E.
1994-01-01
The NASA Device Independent Graphics Library, NASADIG, can be used with many computer-based engineering and management applications. The library gives the user the opportunity to translate data into effective graphic displays for presentation. The software offers many features which allow the user flexibility in creating graphics. These include two-dimensional plots, subplot projections in 3D-space, surface contour line plots, and surface contour color-shaded plots. Routines for three-dimensional plotting, wireframe surface plots, surface plots with hidden line removal, and surface contour line plots are provided. Other features include polar and spherical coordinate plotting, world map plotting utilizing either cylindrical equidistant or Lambert equal area projection, plot translation, plot rotation, plot blowup, splines and polynomial interpolation, area blanking control, multiple log/linear axes, legends and text control, curve thickness control, and multiple text fonts (18 regular, 4 bold). NASADIG contains several groups of subroutines. Included are subroutines for plot area and axis definition; text set-up and display; area blanking; line style set-up, interpolation, and plotting; color shading and pattern control; legend, text block, and character control; device initialization; mixed alphabets setting; and other useful functions. The usefulness of many routines is dependent on the prior definition of basic parameters. The program's control structure uses a serial-level construct with each routine restricted for activation at some prescribed level(s) of problem definition. NASADIG provides the following output device drivers: Selanar 100XL, VECTOR Move/Draw ASCII and PostScript files, Tektronix 40xx, 41xx, and 4510 Rasterizer, DEC VT-240 (4014 mode), IBM AT/PC compatible with SmartTerm 240 emulator, HP Lasergrafix Film Recorder, QMS 800/1200, DEC LN03+ Laserprinters, and HP LaserJet (Series III). NASADIG is written in FORTRAN and is available for several platforms. NASADIG 5.7 is available for DEC VAX series computers running VMS 5.0 or later (MSC-21801), Cray X-MP and Y-MP series computers running UNICOS (COS-10049), and Amdahl 5990 mainframe computers running UTS (COS-10050). NASADIG 5.1 is available for UNIX-based operating systems (MSC-22001). The UNIX version has been successfully implemented on Sun4 series computers running SunOS, SGI IRIS computers running IRIX, Hewlett Packard 9000 computers running HP-UX, and Convex computers running Convex OS (MSC-22001). The standard distribution medium for MSC-21801 is a set of two 6250 BPI 9-track magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format. The standard distribution medium for COS-10049 and COS-10050 is a 6250 BPI 9-track magnetic tape in UNIX tar format. Other distribution media and formats may be available upon request. The standard distribution medium for MSC-22001 is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. Alternate distribution media and formats are available upon request. With minor modification, the UNIX source code can be ported to other platforms including IBM PC/AT series computers and compatibles. NASADIG is also available bundled with TRASYS, the Thermal Radiation Analysis System (COS-10026, DEC VAX version; COS-10040, CRAY version).
Li, Zhi; Yang, Rong-Tao; Li, Zu-Bing
2015-09-01
Computer-assisted navigation has been widely used in oral and maxillofacial surgery. The purpose of this study was to describe the applications of computer-assisted navigation for the minimally invasive reduction of isolated zygomatic arch fractures. All patients identified as having isolated zygomatic arch fractures presenting to the authors' department from April 2013 through November 2014 were included in this prospective study. Minimally invasive reductions of isolated zygomatic arch fractures were performed on these patients under the guidance of computer-assisted navigation. The reduction status was evaluated by postoperative computed tomography (CT) 1 week after the operation. Postoperative complications and facial contours were evaluated during follow-up. Functional recovery was evaluated by the difference between the preoperative maximum interincisal mouth opening and that at the final follow-up. Twenty-three patients were included in this case series. The operation proceeded well in all patients. Postoperatively, all patients displayed uneventful healing without postoperative complications. Postoperative CT showed exact reduction in all cases. Satisfactory facial contour and functional recovery were observed in all patients. The preoperative maximal mouth opening ranged from 8 to 25 mm, and the maximal mouth opening at the final follow-up ranged from 36 to 42 mm. Computer-assisted navigation can be used not only for guiding zygomatic arch fracture reduction, but also for assessing the reduction. Computer-assisted navigation is an effective and minimally invasive technique that can be applied in the reduction of isolated zygomatic arch fractures.
NASA Astrophysics Data System (ADS)
Adhikari, Surendra; Ivins, Erik R.; Larour, Eric
2016-03-01
A classical Green's function approach for computing gravitationally consistent sea-level variations associated with mass redistribution on the earth's surface, employed in contemporary sea-level models, naturally suits spectral methods for numerical evaluation. The capability of these methods to resolve high-wave-number features such as small glaciers is limited by the need for large numbers of pixels and high-degree (associated Legendre) series truncation. Incorporating a spectral model into (components of) earth system models that generally operate on a mesh system also requires repetitive forward and inverse transforms. In order to overcome these limitations, we present a method that functions efficiently on an unstructured mesh, thus capturing the physics operating at kilometer scale yet capable of simulating geophysical observables that are inherently of global scale with minimal computational cost. The goal of the current version of this model is to provide high-resolution solid-earth, gravitational, sea-level and rotational responses for earth system models operating in the domain of the earth's outer fluid envelope on timescales less than about 1 century, when viscous effects can largely be ignored over most of the globe. The model has numerous important geophysical applications. For example, we present time-varying computations of global geodetic and sea-level signatures associated with recent ice-sheet changes that are derived from space gravimetry observations. We also demonstrate the capability of our model to simultaneously resolve kilometer-scale sources of the earth's time-varying surface mass transport, derived from high-resolution modeling of polar ice sheets, and predict the corresponding local and global geodetic signatures.
Monte Carlo Perturbation Theory Estimates of Sensitivities to System Dimensions
Burke, Timothy P.; Kiedrowski, Brian C.
2017-12-11
Here, Monte Carlo methods are developed using adjoint-based perturbation theory and the differential operator method to compute the sensitivities of the k-eigenvalue, linear functions of the flux (reaction rates), and bilinear functions of the forward and adjoint flux (kinetics parameters) to system dimensions for uniform expansions or contractions. The calculation of sensitivities to system dimensions requires computing scattering and fission sources at material interfaces using collisions occurring at the interface, which is a set of events with infinitesimal probability. Kernel density estimators are used to estimate the source at interfaces using collisions occurring near the interface. The methods for computing sensitivities of linear and bilinear ratios are derived using the differential operator method and adjoint-based perturbation theory and are shown to be equivalent to methods previously developed using a collision-history-based approach. The methods for determining sensitivities to system dimensions are tested on a series of fast, intermediate, and thermal critical benchmarks as well as a pressurized water reactor benchmark problem with iterated fission probability used for adjoint weighting. The estimators are shown to agree within 5% and 3σ of reference solutions obtained using direct perturbations with central differences for the majority of test problems.
Inferring cortical function in the mouse visual system through large-scale systems neuroscience.
Hawrylycz, Michael; Anastassiou, Costas; Arkhipov, Anton; Berg, Jim; Buice, Michael; Cain, Nicholas; Gouwens, Nathan W; Gratiy, Sergey; Iyer, Ramakrishnan; Lee, Jung Hoon; Mihalas, Stefan; Mitelut, Catalin; Olsen, Shawn; Reid, R Clay; Teeter, Corinne; de Vries, Saskia; Waters, Jack; Zeng, Hongkui; Koch, Christof
2016-07-05
The scientific mission of the Project MindScope is to understand neocortex, the part of the mammalian brain that gives rise to perception, memory, intelligence, and consciousness. We seek to quantitatively evaluate the hypothesis that neocortex is a relatively homogeneous tissue, with smaller functional modules that perform a common computational function replicated across regions. We here focus on the mouse as a mammalian model organism with genetics, physiology, and behavior that can be readily studied and manipulated in the laboratory. We seek to describe the operation of cortical circuitry at the computational level by comprehensively cataloging and characterizing its cellular building blocks along with their dynamics and their cell type-specific connectivities. The project is also building large-scale experimental platforms (i.e., brain observatories) to record the activity of large populations of cortical neurons in behaving mice subject to visual stimuli. A primary goal is to understand the series of operations from visual input in the retina to behavior by observing and modeling the physical transformations of signals in the corticothalamic system. We here focus on the contribution that computer modeling and theory make to this long-term effort.
Code of Federal Regulations, 2010 CFR
2010-07-01
... § 363.52 (31 CFR, Money and Finance: Treasury): ... and Series I savings bonds may I purchase in one year? ... for Series I savings bonds. (b) Computation of amount for gifts. Bonds purchased or transferred as gifts will be included in the computation of the purchase limitation for the account of the recipient...
A novel weight determination method for time series data aggregation
NASA Astrophysics Data System (ADS)
Xu, Paiheng; Zhang, Rong; Deng, Yong
2017-09-01
Aggregation in time series is of great importance in time series smoothing, prediction, and other time series analysis processes, which makes it crucial to determine the weights in time series correctly and reasonably. In this paper, a novel method to obtain the weights in time series is proposed, in which we adopt the induced ordered weighted aggregation (IOWA) operator and the visibility graph averaging (VGA) operator and linearly combine the weights separately generated by the two operators. The IOWA operator is introduced to the weight determination of time series, through which the time decay factor is taken into consideration. The VGA operator generates weights with respect to the degree distribution in the visibility graph constructed from the corresponding time series, which reflects the relative importance of vertices in the time series. The proposed method is applied to two practical datasets to illustrate its merits. The aggregation of the Construction Cost Index (CCI) demonstrates the ability of the proposed method to smooth time series, while the aggregation of the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) illustrates how the proposed method maintains the variation tendency of the original data.
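A toy sketch of the proposed combination, with exponential recency decay standing in for the IOWA ordering and a direct O(n^2) natural-visibility-graph degree count; `decay` and `alpha` are illustrative parameters, not values from the paper:

```python
import numpy as np

def visibility_degrees(x):
    """Degree of each point in the natural visibility graph of series x."""
    n = len(x)
    deg = np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            ks = np.arange(i + 1, j)
            # (i, j) are visible if every intermediate point lies below the
            # straight line joining (i, x[i]) and (j, x[j]).
            if np.all(x[ks] < x[j] + (x[i] - x[j]) * (j - ks) / (j - i)):
                deg[i] += 1
                deg[j] += 1
    return deg

def combined_weights(x, decay=0.9, alpha=0.5):
    """Linear combination of time-decay and VG-degree weights."""
    w_time = decay ** np.arange(len(x))[::-1]  # more recent points weigh more
    w_time = w_time / w_time.sum()
    w_vg = visibility_degrees(x)
    w_vg = w_vg / w_vg.sum()
    return alpha * w_time + (1.0 - alpha) * w_vg

x = np.array([4.0, 7.0, 5.0, 6.0, 9.0, 8.0])
print(np.dot(combined_weights(x), x))  # aggregated value of the series
```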
Designing Facilities for Collaborative Operations
NASA Technical Reports Server (NTRS)
Norris, Jeffrey; Powell, Mark; Backes, Paul; Steinke, Robert; Tso, Kam; Wales, Roxana
2003-01-01
A methodology for designing operational facilities for collaboration by multiple experts has begun to take shape as an outgrowth of a project to design such facilities for scientific operations of the planned 2003 Mars Exploration Rover (MER) mission. The methodology could also be applicable to the design of military "situation rooms" and other facilities for terrestrial missions. It was recognized in this project that modern mission operations depend heavily upon the collaborative use of computers. It was further recognized that tests have shown that the layout of a facility exerts a dramatic effect on the efficiency and endurance of the operations staff. The facility designs and the methodology developed during the project reflect this recognition. One element of the methodology is a metric, called effective capacity, that was created for use in evaluating proposed MER operational facilities and may also be useful for evaluating other collaboration spaces, including meeting rooms and military situation rooms. The effective capacity of a facility is defined as the number of people in the facility who can be meaningfully engaged in its operations. A person is considered to be meaningfully engaged if the person can (1) see, hear, and communicate with everyone else present; (2) see the material under discussion (typically data on a piece of paper, computer monitor, or projection screen); and (3) provide input to the product under development by the group. The effective capacity of a facility is less than the number of people that can physically fit in the facility. For example, a typical office that contains a desktop computer has an effective capacity of 4, while a small conference room that contains a projection screen has an effective capacity of around 10. Little or no benefit would be derived from allowing the number of persons in an operational facility to exceed its effective capacity: at best, the operations staff would be underutilized; at worst, operational performance would deteriorate. Elements of this methodology were applied to the design of three operations facilities for a series of rover field tests. These tests were observed by human-factors researchers, and their conclusions are being used to refine and extend the methodology to be used in the final design of the MER operations facility. Further work is underway to evaluate the use of personal digital assistant (PDA) units as portable input interfaces and communication devices in future mission operations facilities. A PDA equipped for wireless communication via Ethernet, Bluetooth, or another networking technology would cost less than a complete computer system, and would enable a collaborator to communicate electronically with computers and with other collaborators while moving freely within the virtual environment created by a shared immersive graphical display.
Fast, adaptive summation of point forces in the two-dimensional Poisson equation
NASA Technical Reports Server (NTRS)
Van Dommelen, Leon; Rundensteiner, Elke A.
1989-01-01
A comparatively simple procedure is presented for the direct summation of the velocity field induced by point vortices which significantly reduces the required number of operations by replacing selected partial sums with asymptotic series. Tables are presented which demonstrate the speed of this algorithm: the computational time merely doubles when the number of vortices is doubled, whereas for current methods it grows by a factor of 4. The procedure is not restricted to the solution of the Poisson equation, and may be applied to other problems involving groups of points in which the interaction between elements of different groups can be simplified when the distance between groups is sufficiently great.
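A sketch of the idea under stated assumptions: the direct O(N^2) point-vortex sum, and a far-field evaluation of a distant cluster by a truncated Laurent (multipole) series about its centroid, the kind of asymptotic replacement of partial sums the abstract describes:

```python
import numpy as np

def velocity_direct(z_targets, z_vortices, gamma):
    """Direct O(N^2) sum of conjugate velocities, u - i*v, of point
    vortices: sum_k -i * gamma_k / (2*pi*(z - z_k))."""
    dz = z_targets[:, None] - z_vortices[None, :]
    return np.sum(-1j * gamma / (2.0 * np.pi * dz), axis=1)

def velocity_multipole(z_targets, z_vortices, gamma, n_terms=6):
    """Far-field velocity of one vortex cluster from a truncated Laurent
    series about the cluster centroid (valid for well-separated targets)."""
    zc = np.mean(z_vortices)
    w = np.zeros(len(z_targets), dtype=complex)
    for p in range(n_terms):
        moment = np.sum(gamma * (z_vortices - zc) ** p)
        w += -1j * moment / (2.0 * np.pi * (z_targets - zc) ** (p + 1))
    return w

rng = np.random.default_rng(2)
zv = 0.1 * (rng.normal(size=100) + 1j * rng.normal(size=100))  # tight cluster
g = rng.uniform(0.5, 1.5, 100)
zt = np.array([5.0 + 0.0j, 4.0 + 3.0j])  # targets far from the cluster
err = np.abs(velocity_direct(zt, zv, g) - velocity_multipole(zt, zv, g))
print(err.max())  # truncation error is tiny for well-separated groups
```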
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, D.A.
1996-06-01
This manual describes a dose assessment system used to estimate the population or collective dose commitments received via both airborne and waterborne pathways by persons living within a 2- to 80-kilometer region of a commercial operating power reactor for a specific year of effluent releases. Computer programs, data files, and utility routines are included which can be used in conjunction with an IBM or compatible personal computer to produce the required dose commitments and their statistical distributions. In addition, maximum individual airborne and waterborne dose commitments are estimated and compared to 10 CFR Part 50, Appendix I, design objectives. This supplement is the last report in the NUREG/CR-2850 series.
Development of the HERMIES III mobile robot research testbed at Oak Ridge National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manges, W.W.; Hamel, W.R.; Weisbin, C.R.
1988-01-01
The latest robot in the Hostile Environment Robotic Machine Intelligence Experiment Series (HERMIES) is now under development at the Center for Engineering Systems Advanced Research (CESAR) at Oak Ridge National Laboratory. The HERMIES III robot incorporates a larger-than-human-size 7-degree-of-freedom manipulator mounted on a 2-degree-of-freedom mobile platform, including a variety of sensors and computers. The deployment of this robot represents a significant increase in research capabilities for the CESAR laboratory. The initial on-board computer capacity of the robot exceeds that of 20 VAX 11/780s. The navigation and vision algorithms under development make extensive use of the on-board NCUBE hypercube computer, while the sensors are interfaced through five VME computers running the OS-9 real-time, multitasking operating system. This paper describes the motivation, key issues, and detailed design trade-offs of implementing the first phase (basic functionality) of the HERMIES III robot. 10 refs., 7 figs.
DET/MPS - The GSFC Energy Balance Programs
NASA Technical Reports Server (NTRS)
Jagielski, J. M.
1994-01-01
The Direct Energy Transfer (DET) and MultiMission Spacecraft Modular Power System (MPS) computer programs perform mathematical modeling and simulation to aid in the design and analysis of DET and MPS spacecraft power system performance, in order to determine the energy balance of the subsystem. The DET spacecraft power system feeds the output of the solar photovoltaic array and nickel-cadmium batteries directly to the spacecraft bus. In the MPS system, a Standard Power Regulator Unit (SPRU) is utilized to operate the array at its peak power point. DET and MPS perform a minute-by-minute simulation of the performance of the power system. The results of the simulation focus mainly on the output of the solar array and the characteristics of the batteries. Although both packages are limited in terms of orbital mechanics, they have sufficient capability to calculate data on eclipses and the performance of arrays for circular or near-circular orbits. DET and MPS are written in FORTRAN-77 with some VAX FORTRAN-type extensions. Both are available in three versions: GSC-13374, for DEC VAX-series computers running VMS; GSC-13443, for UNIX-based computers; and GSC-13444, for Apple Macintosh computers.
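A minute-by-minute energy-balance loop in the spirit of DET (a minimal sketch; all parameter values are illustrative and this is not the FORTRAN source):

```python
def energy_balance(orbit_min=96, eclipse_min=35, array_w=500.0,
                   load_w=300.0, batt_wh=1000.0, charge_eff=0.9):
    """Simulate one day of a DET-style bus: the array feeds the load
    directly; surplus charges the battery, deficit discharges it.
    Returns the battery state of charge (Wh) at the end of the day."""
    soc = batt_wh  # start fully charged
    for minute in range(24 * 60):
        in_eclipse = (minute % orbit_min) < eclipse_min
        p_array = 0.0 if in_eclipse else array_w
        net_w = p_array - load_w
        if net_w >= 0.0:
            soc = min(batt_wh, soc + charge_eff * net_w / 60.0)  # charge
        else:
            soc = max(0.0, soc + net_w / 60.0)  # discharge
    return soc

# An end-of-day state of charge equal to the starting value indicates
# that the subsystem is in energy balance.
print(energy_balance())
```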
NASA Technical Reports Server (NTRS)
1988-01-01
Martin Marietta Aero and Naval Systems has advanced the CAD art to a very high level at its Robotics Laboratory. One of the company's major projects is construction of a huge Field Material Handling Robot (FMR) for the Army's Human Engineering Lab. Design of the FMR, intended to move heavy and dangerous material such as ammunition, was a triumph of CAD engineering. Separate computer programs modeled the robot's kinematics and dynamics, yielding such parameters as the strength of materials required for each component, the length of the arms, their degrees of freedom, and the power of the hydraulic system needed. The Robotics Lab went a step further and added data enabling computer simulation and animation of the robot's total operational capability under various loading and unloading conditions. The NASA computer program IAC, the Integrated Analysis Capability engineering database, was used. The program contains a series of modules that can stand alone or be integrated with data from sensors or software tools.
XSECT: A computer code for generating fuselage cross sections - user's manual
NASA Technical Reports Server (NTRS)
Ames, K. R.
1982-01-01
A computer code, XSECT, has been developed to generate fuselage cross sections from a given area distribution and wing definition. The cross sections are generated to match the wing definition while conforming to the area requirement. An iterative procedure is used to generate each cross section. Fuselage area balancing may be included in this procedure if desired. The code is intended as an aid for engineers who must first design a wing under certain aerodynamic constraints and then design a fuselage for the wing such that the constraints remain satisfied. This report contains the information necessary for accessing and executing the code, which is written in FORTRAN to execute on the Cyber 170 series computers (NOS operating system) and produces graphical output for a Tektronix 4014 CRT. The LRC graphics software is used in combination with the interface between this software and the PLOT 10 software.
Interactive computer graphics applications for compressible aerodynamics
NASA Technical Reports Server (NTRS)
Benson, Thomas J.
1994-01-01
Three computer applications have been developed to solve inviscid compressible fluids problems using interactive computer graphics. The first application is a compressible flow calculator which solves for isentropic flow, normal shocks, and oblique shocks or centered expansions produced by two dimensional ramps. The second application couples the solutions generated by the first application to a more graphical presentation of the results to produce a desk top simulator of three compressible flow problems: 1) flow past a single compression ramp; 2) flow past two ramps in series; and 3) flow past two opposed ramps. The third application extends the results of the second to produce a design tool which solves for the flow through supersonic external or mixed compression inlets. The applications were originally developed to run on SGI or IBM workstations running GL graphics. They are currently being extended to solve additional types of flow problems and modified to operate on any X-based workstation.
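The compressible-flow calculator portion rests on standard gas-dynamics relations; a minimal sketch of the isentropic and normal-shock formulas (textbook relations for a perfect gas, not the application's source code):

```python
import math

def isentropic(M, g=1.4):
    """Total-to-static ratios for isentropic flow of a perfect gas."""
    t = 1.0 + 0.5 * (g - 1.0) * M * M  # T0/T
    return {"T0/T": t, "p0/p": t ** (g / (g - 1.0))}

def normal_shock(M1, g=1.4):
    """Downstream Mach number and static-pressure ratio across a
    normal shock (Rankine-Hugoniot relations)."""
    M2 = math.sqrt((1.0 + 0.5 * (g - 1.0) * M1**2)
                   / (g * M1**2 - 0.5 * (g - 1.0)))
    p2_p1 = 1.0 + 2.0 * g / (g + 1.0) * (M1**2 - 1.0)
    return M2, p2_p1

print(isentropic(2.0))    # T0/T = 1.8, p0/p ~ 7.82
print(normal_shock(2.0))  # M2 ~ 0.577, p2/p1 = 4.5
```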
The design and implementation of CRT displays in the TCV real-time simulation
NASA Technical Reports Server (NTRS)
Leavitt, J. B.; Tariq, S. I.; Steinmetz, G. G.
1975-01-01
The design and application of computer graphics to the Terminal Configured Vehicle (TCV) program are described. A Boeing 737-100 series aircraft was modified with a second flight deck and several computers installed in the passenger cabin. One of the elements supporting the TCV program is a sophisticated simulation system developed to duplicate the operation of the aft flight deck. This facility consists of an aft flight deck simulator equipped with realistic flight instrumentation, a CDC 6600 computer, and an Adage graphics terminal; this terminal presents to the simulator pilot displays similar to those used on the aircraft, with equivalent man-machine interactions. These two displays form the primary flight instrumentation for the pilot and are dynamic images depicting critical flight information. The graphics terminal is a high-speed interactive refresh-type graphics system. To support the cockpit display, two remote CRTs were wired in parallel with two of the Adage scopes.
Compact modeling of CRS devices based on ECM cells for memory, logic and neuromorphic applications.
Linn, E; Menzel, S; Ferch, S; Waser, R
2013-09-27
Dynamic physics-based models of resistive switching devices are of great interest for the realization of complex circuits required for memory, logic and neuromorphic applications. Here, we apply such a model of an electrochemical metallization (ECM) cell to complementary resistive switches (CRSs), which are favorable devices to realize ultra-dense passive crossbar arrays. Since a CRS consists of two resistive switching devices, it is straightforward to apply the dynamic ECM model for CRS simulation with MATLAB and SPICE, enabling study of the device behavior in terms of sweep rate and series resistance variations. Furthermore, typical memory access operations as well as basic implication logic operations can be analyzed, revealing requirements for proper spike and level read operations. This basic understanding facilitates applications of massively parallel computing paradigms required for neuromorphic applications.
Automatising the analysis of stochastic biochemical time-series
2015-01-01
Background: Mathematical and computational modelling of biochemical systems has seen a lot of effort devoted to the definition and implementation of high-performance mechanistic simulation frameworks. Within these frameworks it is possible to analyse complex models under a variety of configurations, eventually selecting the best setting of, e.g., parameters for a target system. Motivation: This operational pipeline relies on the ability to interpret the predictions of a model, often represented as simulation time-series. Thus, an efficient data analysis pipeline is crucial to automatise time-series analyses, bearing in mind that errors in this phase might mislead the modeller's conclusions. Results: For this reason we have developed an intuitive framework-independent Python tool to automate analyses common to a variety of modelling approaches. These include assessment of useful non-trivial statistics for simulation ensembles, e.g., estimation of master equations. Intuitive and domain-independent batch scripts will allow the researcher to automatically prepare reports, thus speeding up the usual model-definition, testing and refinement pipeline. PMID:26051821
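As a rough illustration of the ensemble statistics such a tool automates, the sketch below (hypothetical names, not the published tool's API) computes per-time-point mean, standard deviation, and percentile bands across repeated stochastic simulation runs:

```python
import numpy as np

def ensemble_stats(runs):
    """runs: array of shape (n_runs, n_timepoints) holding simulation
    time series sampled on a common time grid. Returns the ensemble mean,
    sample standard deviation, and 5th/95th percentile bands per time point."""
    runs = np.asarray(runs, dtype=float)
    return (runs.mean(axis=0), runs.std(axis=0, ddof=1),
            np.percentile(runs, [5, 95], axis=0))

# toy ensemble: 100 noisy exponential-decay trajectories
t = np.linspace(0.0, 10.0, 200)
ens = np.exp(-0.5 * t) + 0.05 * np.random.randn(100, t.size)
mean, std, bands = ensemble_stats(ens)
```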
Computational Analysis for Rocket-Based Combined-Cycle Systems During Rocket-Only Operation
NASA Technical Reports Server (NTRS)
Steffen, C. J., Jr.; Smith, T. D.; Yungster, S.; Keller, D. J.
2000-01-01
A series of Reynolds-averaged Navier-Stokes calculations was employed to study the performance of rocket-based combined-cycle systems operating in an all-rocket mode. This parametric series of calculations was executed within a statistical framework, commonly known as design of experiments. The parametric design space included four geometric and two flowfield variables set at three levels each, for a total of 729 possible combinations. A D-optimal design strategy was selected. It required that only 36 separate computational fluid dynamics (CFD) solutions be performed to develop a full response surface model, which quantified the linear, bilinear, and curvilinear effects of the six experimental variables. The axisymmetric, Reynolds-averaged Navier-Stokes simulations were executed with the NPARC v3.0 code. The response used in the statistical analysis was created from Isp efficiency data integrated from the 36 CFD simulations. The influence of turbulence modeling was analyzed by using both one- and two-equation models. Careful attention was also given to quantifying the influence of mesh dependence, iterative convergence, and artificial viscosity upon the resulting statistical model. Thirteen statistically significant effects were observed to have an influence on rocket-based combined-cycle nozzle performance. It was apparent that the free-expansion process, directly downstream of the rocket nozzle, can influence the Isp efficiency. Numerical schlieren images and particle traces have been used to further understand the physical phenomena behind several of the statistically significant results.
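A minimal sketch of the response-surface step, assuming coded factor levels and a synthetic response in place of the actual CFD data: the design matrix carries exactly the linear, bilinear, and curvilinear terms the abstract mentions, and the model is fit by ordinary least squares.

```python
import itertools
import numpy as np

def quadratic_design_matrix(X):
    """Columns: intercept, linear, bilinear (pairwise products), and
    curvilinear (squared) terms of the factors in X (n_samples x n_factors)."""
    X = np.asarray(X, dtype=float)
    n_f = X.shape[1]
    cols = [np.ones(len(X))]
    cols += [X[:, j] for j in range(n_f)]
    cols += [X[:, i] * X[:, j] for i, j in itertools.combinations(range(n_f), 2)]
    cols += [X[:, j] ** 2 for j in range(n_f)]
    return np.column_stack(cols)

# toy: 36 runs of 6 factors coded at levels -1/0/+1, plus a synthetic response
rng = np.random.default_rng(1)
X = rng.choice([-1.0, 0.0, 1.0], size=(36, 6))
y = 1.0 + 0.3 * X[:, 0] - 0.2 * X[:, 1] * X[:, 2] + 0.1 * X[:, 3] ** 2
beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
```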
Electric prototype power processor for a 30cm ion thruster
NASA Technical Reports Server (NTRS)
Biess, J. J.; Inouye, L. Y.; Schoenfeld, A. D.
1977-01-01
An electrical prototype power processor unit was designed, fabricated and tested with a 30 cm mercury ion engine for primary space propulsion. The power processor unit used the thyristor series resonant inverter as the basic power stage for the high power beam and discharge supplies. A transistorized series resonant inverter processed the remaining power for the low power outputs. The power processor included a digital interface unit to process all input commands and internal telemetry signals so that electric propulsion systems could be operated with a central computer system. The electrical prototype unit included design improvements in power components such as thyristors, transistors, filters and resonant capacitors, and power transformers and inductors, in order to reduce component weight, minimize losses, and control the component temperature rise. A design analysis for the electrical prototype is also presented on component weight, losses, part count and reliability estimates. The electrical prototype was tested in a thermal vacuum environment. Integration tests were performed with a 30 cm ion engine and demonstrated operational compatibility. Electromagnetic interference data were also recorded to provide information for spacecraft integration.
High-Speed Solution of Spacecraft Trajectory Problems Using Taylor Series Integration
NASA Technical Reports Server (NTRS)
Scott, James R.; Martini, Michael C.
2010-01-01
It has been known for some time that Taylor series (TS) integration is among the most efficient and accurate numerical methods in solving differential equations. However, the full benefit of the method has yet to be realized in calculating spacecraft trajectories, for two main reasons. First, most applications of Taylor series to trajectory propagation have focused on relatively simple problems of orbital motion or on specific problems and have not provided general applicability. Second, applications that have been more general have required use of a preprocessor, which inevitably imposes constraints on computational efficiency. The latter approach includes the work of Berryman et al., who solved the planetary n-body problem with relativistic effects. Their work specifically noted the computational inefficiencies arising from use of a preprocessor and pointed out the potential benefit of manually coding derivative routines. In this Engineering Note, we report on a systematic effort to directly implement Taylor series integration in an operational trajectory propagation code: the Spacecraft N-Body Analysis Program (SNAP). The present Taylor series implementation is unique in that it applies to spacecraft virtually anywhere in the solar system and can be used interchangeably with another integration method. SNAP is a high-fidelity trajectory propagator that includes force models for central body gravitation with N X N harmonics, other body gravitation with N X N harmonics, solar radiation pressure, atmospheric drag (for Earth orbits), and spacecraft thrusting (including shadowing). The governing equations are solved using an eighth-order Runge-Kutta Fehlberg (RKF) single-step method with variable step size control. In the present effort, TS is implemented by way of highly integrated subroutines that can be used interchangeably with RKF. This makes it possible to turn TS on or off during various phases of a mission. Current TS force models include central body gravitation with the J2 spherical harmonic, other body gravitation, thrust, constant atmospheric drag from Earth's atmosphere, and solar radiation pressure for a sphere under constant illumination. The purpose of this Engineering Note is to demonstrate the performance of TS integration in an operational trajectory analysis code and to compare it with a standard method, eighth-order RKF. Results show that TS is 16.6 times faster on average and is more accurate in 87.5% of the cases presented.
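The core of TS integration is generating Taylor coefficients by recurrence and summing the series at each step. The sketch below does this for a harmonic oscillator, a toy stand-in for SNAP's force models, using the recurrences that follow from y' = v, v' = -y:

```python
import math

def ts_step(y, v, h, order=20):
    """One Taylor-series step for y'' = -y (harmonic oscillator).
    Coefficients follow from a_{k+1} = b_k/(k+1), b_{k+1} = -a_k/(k+1)."""
    a, b = [y], [v]
    for k in range(order):
        a.append(b[k] / (k + 1))
        b.append(-a[k] / (k + 1))
    y_new = sum(ak * h ** k for k, ak in enumerate(a))
    v_new = sum(bk * h ** k for k, bk in enumerate(b))
    return y_new, v_new

y, v, h = 1.0, 0.0, 0.5        # a large step; error stays tiny at order 20
for _ in range(20):
    y, v = ts_step(y, v, h)
print(y, math.cos(10.0))        # both ~ -0.839072
```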
Computation of canonical correlation and best predictable aspect of future for time series
NASA Technical Reports Server (NTRS)
Pourahmadi, Mohsen; Miamee, A. G.
1989-01-01
The canonical correlation between the (infinite) past and future of a stationary time series is shown to be the limit of the canonical correlation between the (infinite) past and (finite) future, and computation of the latter is reduced to a (generalized) eigenvalue problem involving (finite) matrices. This provides a convenient and essentially finite-dimensional algorithm for computing canonical correlations and components of a time series. An upper bound is conjectured for the largest canonical correlation.
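A minimal numerical sketch of the reduction, assuming sample covariances in place of the paper's population quantities: the squared canonical correlations are the eigenvalues of Sxx^-1 Sxy Syy^-1 Syx built from lagged copies of the series.

```python
import numpy as np

def canonical_correlations(X, Y):
    """Sample canonical correlations between the columns of X and Y
    (rows are joint observations)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Sxx, Syy, Sxy = X.T @ X / n, Y.T @ Y / n, X.T @ Y / n
    M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
    eigs = np.sort(np.linalg.eigvals(M).real)[::-1]
    return np.sqrt(np.clip(eigs, 0.0, 1.0))

# toy AR(1) series: correlate p past values with q future values
rng = np.random.default_rng(0)
x = np.zeros(5000)
for t in range(1, x.size):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()
p = q = 3
X = np.array([x[t - p:t] for t in range(p, x.size - q)])
Y = np.array([x[t:t + q] for t in range(p, x.size - q)])
print(canonical_correlations(X, Y))  # leading value near 0.8 for this AR(1)
```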
The contaminant analysis automation robot implementation for the automated laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younkin, J.R.; Igou, R.E.; Urenda, T.D.
1995-12-31
The Contaminant Analysis Automation (CAA) project defines the automated laboratory as a series of standard laboratory modules (SLM) serviced by a robotic standard support module (SSM). These SLMs are designed to allow plug-and-play integration into automated systems that perform standard analysis methods (SAM). While the SLMs are autonomous in the execution of their particular chemical processing task, the SAM concept relies on a high-level task sequence controller (TSC) to coordinate the robotic delivery of materials requisite for SLM operations, initiate an SLM operation with the chemical-method-dependent operating parameters, and coordinate the robotic removal of materials from the SLM when its operation is complete, readying them for transport operations. The Supervisor and Subsystems (GENISAS) software governs events from the SLMs and robot. The Intelligent System Operating Environment (ISOE) enables the inter-process communications used by GENISAS. CAA selected the Hewlett-Packard Optimized Robot for Chemical Analysis (ORCA) and its associated Windows-based Methods Development Software (MDS) as the robot SSM. The MDS software is used to teach the robot each SLM position and the required material port motions. To allow the TSC to command these SLM motions, a hardware and software implementation was required that allowed message passing between different operating systems. This implementation involved the use of a VME rack with a Force CPU-30 computer running VxWorks, a real-time multitasking operating system, and a RadiSys PC-compatible VME computer running MDS. A GENISAS server on the Force computer accepts a transport command from the TSC, a GENISAS supervisor, over Ethernet and notifies software on the RadiSys PC of the pending command through VMEbus shared memory. The command is then delivered to the MDS robot control software using a Windows Dynamic Data Exchange conversation.
Sensor sentinel computing device
Damico, Joseph P.
2016-08-02
Technologies pertaining to authenticating data output by sensors in an industrial environment are described herein. A sensor sentinel computing device receives time-series data from a sensor by way of a wireline connection. The sensor sentinel computing device generates a validation signal that is a function of the received time-series data. The sensor sentinel computing device then transmits the validation signal to a programmable logic controller in the industrial environment.
NASA Technical Reports Server (NTRS)
Dalee, Robert C.; Bacskay, Allen S.; Knox, James C.
1990-01-01
An overview of the CASE/A-ECLSS series modeling package is presented. CASE/A is an analytical tool that has improved engineering productivity during ECLSS design activities. A component verification program was performed to assure component modeling validity, based on test data from the Phase II comparative test program completed at the Marshall Space Flight Center. An integrated plotting feature has been added to the program, which allows the operator to analyze on-screen data trends or obtain hard-copy plots from within the CASE/A operating environment. New command features in the areas of schematic, output, and model management, and component data editing have been incorporated to enhance the engineer's productivity during a modeling program.
Evolution of the Hubble Space Telescope Safing Systems
NASA Technical Reports Server (NTRS)
Pepe, Joyce; Myslinski, Michael
2006-01-01
The Hubble Space Telescope (HST) was launched on April 24, 1990, with an expected lifespan of 15 years. Central to the spacecraft design was the concept of a series of on-orbit shuttle servicing missions permitting astronauts to replace failed equipment, update the scientific instruments and keep the HST at the forefront of astronomical discoveries. One key to the success of the Hubble mission has been the robust Safing systems designed to monitor the performance of the observatory and to react to keep the spacecraft safe in the event of an equipment anomaly. The spacecraft Safing System consists of a range of software tests in the primary flight computer that evaluate the performance of mission critical hardware, safe modes that are activated when the primary control mode is deemed inadequate for protecting the vehicle, and special actions that the computer can take to autonomously reconfigure critical hardware. The HST Safing System was structured to autonomously detect electrical power system, data management system, and pointing control system malfunctions and to configure the vehicle to ensure safe operation without ground intervention for up to 72 hours. There is also a dedicated safe mode computer that constantly monitors a keep-alive signal from the primary computer. If this signal stops, the safe mode computer shuts down the primary computer and takes over control of the vehicle, putting it into a safe, low-power configuration. The HST Safing system has continued to evolve as equipment has aged, as new hardware has been installed on the vehicle, and as the operation modes have matured during the mission. Along with the continual refinement of the limits used in the safing tests, several new tests have been added to the monitoring system, and new safe modes have been added to the flight software. This paper will focus on the evolution of the HST Safing System and Safing tests, and the importance of this evolution to prolonging the science operations of the telescope.
Computer Courseware Evaluations. January, 1983 to May, 1985. A Series of Reports.
ERIC Educational Resources Information Center
Alberta Dept. of Education, Edmonton. Curriculum Branch Clearinghouse.
Fourth in a series, this cumulative report reviews Apple computer courseware and some IBM courseware (Business and Math sections) authorized by Alberta Education from January 1983 through May 1985. It provides detailed evaluations of 168 authorized titles in business education (17), computer literacy (12), early childhood education (8), language…
ERIC Educational Resources Information Center
Elmore, Donald E.; Guayasamin, Ryann C.; Kieffer, Madeleine E.
2010-01-01
As computational modeling plays an increasingly central role in biochemical research, it is important to provide students with exposure to common modeling methods in their undergraduate curriculum. This article describes a series of computer labs designed to introduce undergraduate students to energy minimization, molecular dynamics simulations,…
3-D reconstruction of neurons from multichannel confocal laser scanning image series.
Wouterlood, Floris G
2014-04-10
A confocal laser scanning microscope (CLSM) collects information from a thin, focal plane and ignores out-of-focus information. Scanning of a specimen, with stepwise axial (Z-) movement of the stage in between each scan, produces Z-series of confocal images of a tissue volume, which then can be used to 3-D reconstruct structures of interest. The operator first configures separate channels (e.g., laser, filters, and detector settings) for each applied fluorochrome and then acquires Z-series of confocal images: one series per channel. Channel signal separation is extremely important. Measures to avoid bleaching are vital. Post-acquisition deconvolution of the image series is often performed to increase resolution before 3-D reconstruction takes place. In the 3-D reconstruction programs described in this unit, reconstructions can be inspected in real time from any viewing angle. By altering viewing angles and by switching channels off and on, the spatial relationships of 3-D-reconstructed structures with respect to structures visualized in other channels can be studied. Since each brand of CLSM, computer program, and 3-D reconstruction package has its own proprietary set of procedures, a general approach is provided in this protocol wherever possible. Copyright © 2014 John Wiley & Sons, Inc.
INM. Integrated Noise Model Version 4.11. User’s Guide - Supplement
1993-12-01
KB of Random Access Memory (RAM), or 3 MB of RAM if operating the INM from a RAM disk, as discussed in Section 1.2.1 below; math co-processor. Airplane data are accessible from the Data Base using the ACDB11.EXE computer program, supplied with the Version 4.11 release. With the exception of INM airplane numbers 1, 6... [table of INM airplane data omitted]
Design and Development of a Real-Time Model Attitude Measurement System for Hypersonic Facilities
NASA Technical Reports Server (NTRS)
Jones, Thomas W.; Lunsford, Charles B.
2005-01-01
A series of wind tunnel tests have been conducted to evaluate a multi-camera videogrammetric system designed to measure model attitude in hypersonic facilities. The technique utilizes processed video data and applies photogrammetric principles for point tracking to compute model position including pitch, roll and yaw variables. A discussion of the constraints encountered during the design, development, and testing process, including lighting, vibration, operational range and optical access is included. Initial measurement results from the NASA Langley Research Center (LaRC) 31-Inch Mach 10 tunnel are presented.
LUMIS: Land Use Management and Information Systems; coordinate oriented program documentation
NASA Technical Reports Server (NTRS)
1976-01-01
An integrated geographic information system to assist program managers and planning groups in metropolitan regions is presented. The series of computer software programs and procedures involved in data base construction uses the census DIME file and point-in-polygon architectures. The system is described in two parts: (1) instructions to operators with regard to digitizing and editing procedures, and (2) application of data base construction algorithms to achieve map registration, assure the topological integrity of polygon files, and tabulate land use acreages within administrative districts.
NASA Technical Reports Server (NTRS)
Cramer, K. Elliott; Syed, Hazari I.
1995-01-01
This user's manual describes the installation and operation of TIA, the Thermal-Imaging acquisition and processing Application, developed by the Nondestructive Evaluation Sciences Branch at NASA Langley Research Center, Hampton, Virginia. TIA is a user-friendly graphical interface application for the Macintosh II and higher series computers. The software has been developed to interface with the Perceptics/Westinghouse Pixelpipe(TM) and PixelStore(TM) NuBus cards and the GW Instruments MacADIOS(TM) input-output (I/O) card for the Macintosh for imaging thermal data. The software is also capable of performing generic image-processing functions.
Cassidy conducts BASS Experiment Test Operations
2013-04-05
ISS035-E-015081 (5 April 2013) --- Astronaut Chris Cassidy, Expedition 35 flight engineer, conducts a session of the Burning and Suppression of Solids (BASS) experiment onboard the Earth-orbiting International Space Station. Following a series of preparations, Cassidy conducted a run of the experiment, which examined the burning and extinction characteristics of a wide variety of fuel samples in microgravity and will guide strategies for extinguishing fires in microgravity. BASS results contribute to the combustion computational models used in the design of fire detection and suppression systems in microgravity and on Earth.
ANOPP programmer's reference manual for the executive System. [aircraft noise prediction program
NASA Technical Reports Server (NTRS)
Gillian, R. E.; Brown, C. G.; Bartlett, R. W.; Baucom, P. H.
1977-01-01
Documentation for the Aircraft Noise Prediction Program as of release level 01/00/00 is presented in a manual designed for programmers having a need for understanding the internal design and logical concepts of the executive system software. Emphasis is placed on providing sufficient information to modify the system for enhancements or error correction. The ANOPP executive system includes software related to operating system interface, executive control, and data base management for the Aircraft Noise Prediction Program. It is written in Fortran IV for use on the CDC Cyber series of computers.
Spacelab data analysis and interactive control study
NASA Technical Reports Server (NTRS)
Tarbell, T. D.; Drake, J. F.
1980-01-01
The study consisted of two main tasks, a series of interviews of Spacelab users and a survey of data processing and display equipment. Findings from the user interviews on questions of interactive control, downlink data formats, and Spacelab computer software development are presented. Equipment for quick look processing and display of scientific data in the Spacelab Payload Operations Control Center (POCC) was surveyed. Results of this survey effort are discussed in detail, along with recommendations for NASA development of several specific display systems which meet common requirements of many Spacelab experiments.
Focus issue: series on computational and systems biology.
Gough, Nancy R
2011-09-06
The application of computational biology and systems biology is yielding quantitative insight into cellular regulatory phenomena. For the month of September, Science Signaling highlights research featuring computational approaches to understanding cell signaling and investigation of signaling networks, a series of Teaching Resources from a course in systems biology, and various other articles and resources relevant to the application of computational biology and systems biology to the study of signal transduction.
Capattery double layer capacitor life performance
NASA Astrophysics Data System (ADS)
Evans, David A.; Clark, Nancy H.; Baca, W. E.; Miller, John R.; Barker, Thomas B.
Double layer capacitors (DLCs) have received increased use in computer memory backup applications for consumer products during the past ten years. Their extraordinarily high capacitance density along with their maintenance-free operation makes them particularly suited for these products. These same features also make DLCs very attractive in military-type applications. Unfortunately, lifetime performance data have not been reported in the literature for any DLC component. Our objective in this study was to investigate the effects that voltage and temperature have on the properties and performance of single and series-connected DLCs as a function of time. Evans model RE110474, 0.47-farad, 11.0-volt Capatteries were evaluated. These components have a tantalum package, use welded construction, and contain a glass-to-metal seal, all incorporated to circumvent the typical DLC failure modes of electrolyte loss and container corrosion. A five-level, two-factor Central Composite Design was used in the study. Single and series-connected Capatteries rated for 85 C, 11.0-volt operation were subjected to test temperatures between 25 and 95 C, and voltages between 0 and 12.9 volts (9 test conditions). Measured responses included capacitance, equivalent series resistance, and discharge time. Data were analyzed using regression analysis to obtain response functions relating DLC properties to their voltage, temperature, and test time history. These results are described and should aid system and component engineers in using DLCs in critical applications.
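For reference, a five-level, two-factor central composite design can be generated in a few lines. The ranges below are the study's reported temperature and voltage spans; the function names are illustrative only:

```python
import itertools
import numpy as np

def central_composite_2factor(alpha=np.sqrt(2.0)):
    """Coded points of a two-factor central composite design: 4 factorial
    corners, 4 axial points at +/-alpha, and the center point, giving
    9 distinct conditions on five levels per factor."""
    corners = list(itertools.product([-1.0, 1.0], repeat=2))
    axial = [(-alpha, 0.0), (alpha, 0.0), (0.0, -alpha), (0.0, alpha)]
    return np.array(corners + axial + [(0.0, 0.0)])

def decode(coded, lo, hi):
    """Map a coded axis spanning [-alpha, alpha] onto the range [lo, hi]."""
    a = np.abs(coded).max()
    return lo + (coded + a) * (hi - lo) / (2.0 * a)

design = central_composite_2factor()
temps = decode(design[:, 0], 25.0, 95.0)   # deg C, range from the study
volts = decode(design[:, 1], 0.0, 12.9)    # volts, range from the study
```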
Low-cost USB interface for operant research using Arduino and Visual Basic.
Escobar, Rogelio; Pérez-Herrera, Carlos A
2015-03-01
This note describes the design of a low-cost interface using Arduino microcontroller boards and Visual Basic programming for operant conditioning research. The board executes a program in the Arduino programming language that polls the state of the inputs and generates outputs in an operant chamber. This program communicates through a USB port with another program written in Visual Basic 2010 Express Edition running on a laptop, desktop, netbook computer, or even a tablet equipped with the Windows operating system. The Visual Basic program controls schedules of reinforcement and records real-time data. A single Arduino board can be used to control a total of 52 input/output lines, and multiple Arduino boards can be used to control multiple operant chambers. An external power supply and a series of micro relays are required to control the 28-V DC devices commonly used in operant chambers. Instructions for downloading and using the programs to generate simple and concurrent schedules of reinforcement are provided. Testing suggests that the interface is reliable, accurate, and could serve as an inexpensive alternative to commercial equipment. © Society for the Experimental Analysis of Behavior.
2HOT: An Improved Parallel Hashed Oct-Tree N-Body Algorithm for Cosmological Simulation
Warren, Michael S.
2014-01-01
We report on improvements made over the past two decades to our adaptive treecode N-body method (HOT). A mathematical and computational approach to the cosmological N-body problem is described, with performance and scalability measured up to 256k (2^18) processors. We present error analysis and scientific application results from a series of more than ten 69-billion-particle (4096^3) cosmological simulations, accounting for 4x10^20 floating point operations. These results include the first simulations using the new constraints on the standard model of cosmology from the Planck satellite. Our simulations set a new standard for accuracy and scientific throughput, while meeting or exceeding the computational efficiency of the latest generation of hybrid TreePM N-body methods.
Inductive Approaches to Improving Diagnosis and Design for Diagnosability
NASA Technical Reports Server (NTRS)
Fisher, Douglas H. (Principal Investigator)
1995-01-01
The first research area under this grant addresses the problem of classifying time series according to their morphological features in the time domain. A supervised learning system called CALCHAS, which induces a classification procedure for signatures from preclassified examples, was developed. For each of several signature classes, the system infers a model that captures the class's morphological features using Bayesian model induction and the minimum message length approach to assign priors. After induction, a time series (signature) is classified in one of the classes when there is enough evidence to support that decision. Time series with sufficiently novel features, belonging to classes not present in the training set, are recognized as such. A second area of research assumes two sources of information about a system: a model or domain theory that encodes aspects of the system under study and data from actual system operations over time. A model, when it exists, represents strong prior expectations about how a system will perform. Our work with a diagnostic model of the RCS (Reaction Control System) of the Space Shuttle motivated the development of SIG, a system which combines information from a model (or domain theory) and data. As it tracks RCS behavior, the model computes quantitative and qualitative values. Induction is then performed over the data represented by both the 'raw' features and the model-computed high-level features. Finally, work on clustering for operating mode discovery motivated some important extensions to the clustering strategy we had used. One modification appends an iterative optimization technique onto the clustering system; this optimization strategy appears to be novel in the clustering literature. A second modification improves the noise tolerance of the clustering system. In particular, we adapt resampling-based pruning strategies used by supervised learning systems to the task of simplifying hierarchical clusterings, thus making post-clustering analysis easier.
Reservoirs performances under climate variability: a case study
NASA Astrophysics Data System (ADS)
Longobardi, A.; Mautone, M.; de Luca, C.
2014-09-01
A case study, the Piano della Rocca dam (southern Italy), is discussed here in order to quantify the system performance under climate variability conditions. Different climate scenarios have been stochastically generated according to the tendencies in precipitation and air temperature observed during recent decades for the studied area. Climate variables have then been filtered through an ARMA model to generate, at the monthly scale, time series of reservoir inflow volumes. Controlled release has been computed considering the reservoir is operated following the standard linear operating policy (SLOP), and reservoir performance has been assessed through the calculation of reliability, resilience and vulnerability indices (Hashimoto et al. 1982), comparing current and future scenarios of climate variability. The proposed approach can be suggested as a valuable tool to mitigate the effects of moderate to severe and persistent drought periods, through the allocation of new water resources or the planning of appropriate operational rules.
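A minimal sketch of the SLOP simulation and the Hashimoto-style indices, with hypothetical parameter values (the paper's actual reservoir data are not reproduced here):

```python
import numpy as np

def simulate_slop(inflow, capacity, demand, s0=0.0):
    """Standard (linear) operating policy: release the demand when water
    permits, all available water when it does not, and spill above capacity."""
    s, storage, release = s0, [], []
    for q in inflow:
        avail = s + q
        r = min(demand, avail)            # SLOP release
        s = min(avail - r, capacity)      # excess above capacity spills
        storage.append(s); release.append(r)
    return np.array(storage), np.array(release)

def rrv(release, demand):
    """Hashimoto-style indices: reliability = fraction of periods meeting
    demand; resilience = P(recovery | failure); vulnerability = mean deficit."""
    fail = release < demand
    reliability = 1.0 - fail.mean()
    recoveries = np.sum(fail[:-1] & ~fail[1:])
    resilience = recoveries / max(fail[:-1].sum(), 1)
    vulnerability = (demand - release)[fail].mean() if fail.any() else 0.0
    return reliability, resilience, vulnerability

q = np.maximum(np.random.default_rng(2).normal(10, 4, 360), 0.0)  # monthly inflows
print(rrv(simulate_slop(q, capacity=60.0, demand=9.0)[1], demand=9.0))
```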
Results from Testing Crew-Controlled Surface Telerobotics on the International Space Station
NASA Technical Reports Server (NTRS)
Bualat, Maria; Schreckenghost, Debra; Pacis, Estrellina; Fong, Terrence; Kalar, Donald; Beutter, Brent
2014-01-01
During Summer 2013, the Intelligent Robotics Group at NASA Ames Research Center conducted a series of tests to examine how astronauts in the International Space Station (ISS) can remotely operate a planetary rover. The tests simulated portions of a proposed lunar mission, in which an astronaut in lunar orbit would remotely operate a planetary rover to deploy a radio telescope on the lunar far side. Over the course of Expedition 36, three ISS astronauts remotely operated the NASA "K10" planetary rover in an analogue lunar terrain located at the NASA Ames Research Center in California. The astronauts used a "Space Station Computer" (crew laptop), a combination of supervisory control (command sequencing) and manual control (discrete commanding), and Ku-band data communications to command and monitor K10 for 11 hours. In this paper, we present and analyze test results, summarize user feedback, and describe directions for future research.
M-DAS: System for multispectral data analysis. [in Saginaw Bay, Michigan
NASA Technical Reports Server (NTRS)
Johnson, R. H.
1975-01-01
M-DAS is a ground data processing system designed for analysis of multispectral data. M-DAS operates on multispectral data from LANDSAT, S-192, M2S and other sources in CCT form. Interactive training by operator-investigators using a variable cursor on a color display was used to derive optimum processing coefficients and data on cluster separability. An advanced multivariate normal-maximum likelihood processing algorithm was used to produce output in various formats: color-coded film images, geometrically corrected map overlays, moving displays of scene sections, coverage tabulations and categorized CCTs. The analysis procedure for M-DAS involves three phases: (1) screening and training, (2) analysis of training data to compute performance predictions and processing coefficients, and (3) processing of multichannel input data into categorized results. Typical M-DAS applications involve iteration between each of these phases. A series of photographs of the M-DAS display are used to illustrate M-DAS operation.
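The processing algorithm named here is the classical multivariate-normal maximum-likelihood classifier. A compact sketch (an illustrative class interface, not M-DAS code) models each class by the mean and covariance of its training pixels:

```python
import numpy as np

class GaussianMLClassifier:
    """Multivariate-normal maximum-likelihood classifier for multispectral
    pixels: a pixel is assigned to the class with the highest log-likelihood."""
    def fit(self, samples_by_class):
        self.params = []
        for X in samples_by_class:                 # X: (n_pixels, n_bands)
            mu = X.mean(axis=0)
            cov = np.cov(X, rowvar=False)
            _, logdet = np.linalg.slogdet(cov)
            self.params.append((mu, np.linalg.inv(cov), logdet))
        return self

    def predict(self, X):
        scores = []
        for mu, icov, logdet in self.params:
            d = X - mu                             # Mahalanobis term per pixel
            scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, icov, d) + logdet))
        return np.argmax(np.column_stack(scores), axis=1)
```

In a workflow like the one described, `samples_by_class` would come from the operator-designated training areas on the color display.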
Thermal modeling of nickel-hydrogen battery cells operating under transient orbital conditions
NASA Technical Reports Server (NTRS)
Schrage, Dean S.
1991-01-01
An analytical study of the thermal operating characteristics of nickel-hydrogen battery cells is presented. Combined finite-element and finite-difference techniques are employed to arrive at a computationally efficient composite thermal model representing a series-cell arrangement operating in conjunction with a radiatively coupled baseplate and coldplate thermal bus. An aggressive, low-mass design approach indicates that thermal considerations can and should direct the design of the thermal bus arrangement. Special consideration is given to the potential for mixed conductive and convective processes across the hydrogen gap. Results of a compressible flow model are presented and indicate the transfer process is suitably represented by molecular conduction. A high-fidelity thermal model of the cell stack (and related components) indicates the presence of axial and radial temperature gradients. A detailed model of the thermal bus reveals the thermal interaction of individual cells and is imperative for assessing the intercell temperature gradients.
The River Basin Model: Computer Output. Water Pollution Control Research Series.
ERIC Educational Resources Information Center
Envirometrics, Inc., Washington, DC.
This research report is part of the Water Pollution Control Research Series which describes the results and progress in the control and abatement of pollution in our nation's waters. The River Basin Model described is a computer-assisted decision-making tool in which a number of computer programs simulate major processes related to water use that…
NASA Astrophysics Data System (ADS)
Baldysz, Zofia; Nykiel, Grzegorz; Figurski, Mariusz; Szafranek, Karolina; Kroszczynski, Krzysztof; Araszkiewicz, Andrzej
2015-04-01
In recent years, the GNSS system began to play an increasingly important role in research related to climate monitoring. Based on the GPS system, which has the longest operational record in comparison with the other systems, and a common computational strategy applied to all observations, long and homogeneous ZTD (Zenith Tropospheric Delay) time series were derived. This paper presents results of an analysis of 16-year ZTD time series obtained from the EPN (EUREF Permanent Network) reprocessing performed by the Military University of Technology. To maintain the uniformity of the data, the analyzed period (1998-2013) is exactly the same for all stations: observations carried out before 1998 were removed from the time series, and observations processed using a different strategy were recalculated according to the MUT LAC approach. For all 16-year time series (59 stations), Lomb-Scargle periodograms were created to obtain information about the oscillations in the ZTD time series. Because strong annual oscillations disturb the character of smaller-amplitude oscillations and thus hinder their investigation, Lomb-Scargle periodograms were also created for time series with the annual oscillation removed, in order to verify the presence of semi-annual, ter-annual and quarto-annual oscillations. The linear trend and seasonal components were estimated using LSE (Least Squares Estimation), and the Mann-Kendall trend test was used to confirm the presence of the linear trend identified by the LSE method. In order to verify the effect of the length of the time series on the estimated size of the linear trend, two different lengths of ZTD time series were compared. To carry out this comparative analysis, 30 stations which have been operating since 1996 were selected. For these stations two periods were analyzed: a shortened 16-year span (1998-2013) and the full 18-year span (1996-2013). For some stations the additional two years of observations had a significant impact on the size of the linear trend; for only 4 stations was the size of the linear trend exactly the same for both periods. In one case, the trend changed from negative (16-year time series) to positive (18-year time series). The average value of the linear trends for the 16-year time series is 1.5 mm/decade, but their spatial distribution is not uniform. The average value of the linear trends for the 18-year time series is 2.0 mm/decade, with a better spatial distribution and smaller discrepancies.
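The trend-plus-seasonal estimation step can be sketched as one least-squares fit with harmonic regressors at the four periods named above (a generic formulation, not the MUT processing code):

```python
import numpy as np

def fit_trend_seasonal(t_years, ztd_mm, periods=(1.0, 0.5, 1/3, 0.25)):
    """Least-squares fit of offset + linear trend + annual, semi-annual,
    ter-annual and quarto-annual harmonics to a ZTD series (time in years)."""
    cols = [np.ones_like(t_years), t_years]
    for P in periods:
        w = 2.0 * np.pi / P
        cols += [np.cos(w * t_years), np.sin(w * t_years)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, ztd_mm, rcond=None)
    trend_mm_per_decade = 10.0 * coef[1]
    amplitudes = np.hypot(coef[2::2], coef[3::2])  # one amplitude per period
    return trend_mm_per_decade, amplitudes
```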
Operating rules for multireservoir systems
NASA Astrophysics Data System (ADS)
Oliveira, Rodrigo; Loucks, Daniel P.
1997-04-01
Multireservoir operating policies are usually defined by rules that specify either individual reservoir desired (target) storage volumes or desired (target) releases based on the time of year and the existing total storage volume in all reservoirs. This paper focuses on the use of genetic search algorithms to derive these multireservoir operating policies. The genetic algorithms use real-valued vectors containing information needed to define both system release and individual reservoir storage volume targets as functions of total storage in each of multiple within-year periods. Elitism, arithmetic crossover, mutation, and "en bloc" replacement are used in the algorithms to generate successive sets of possible operating policies. Each policy is then evaluated using simulation to compute a performance index for a given flow series. The better performing policies are then used as a basis for generating new sets of possible policies. The process of improved policy generation and evaluation is repeated until no further improvement in performance is obtained. The proposed algorithm is applied to example reservoir systems used for water supply and hydropower.
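A minimal sketch of the search loop described, with real-valued encoding, elitism, arithmetic crossover and Gaussian mutation; the simulation-based performance index is abstracted into an `evaluate` callback, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_search(evaluate, n_genes, pop_size=50, gens=200,
                   elite=2, mut_sigma=0.1):
    """Real-valued GA: `evaluate` maps a policy vector in [0,1]^n_genes to a
    performance index computed by simulation (higher is better)."""
    pop = rng.random((pop_size, n_genes))
    for _ in range(gens):
        fit = np.array([evaluate(p) for p in pop])
        pop = pop[np.argsort(fit)[::-1]]                   # best first
        new = [pop[i].copy() for i in range(elite)]        # elitism
        while len(new) < pop_size:
            i, j = rng.integers(0, pop_size // 2, size=2)  # parents from best half
            a = rng.random()
            child = a * pop[i] + (1 - a) * pop[j]          # arithmetic crossover
            child += rng.normal(0.0, mut_sigma, n_genes)   # mutation
            new.append(np.clip(child, 0.0, 1.0))
        pop = np.array(new)
    return pop[0]
```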
Huang, Yanyan; Ran, Xiang; Lin, Youhui; Ren, Jinsong; Qu, Xiaogang
2015-04-22
Based on enzymatic reaction-triggered changes of pH values and biocomputing, a novel multistage interconnected biological network with multiple easily detectable signal outputs has been developed. Compared with traditional chemical computing, the enzyme-based biological system can overcome the interference between reactions or the incompatibility of individual computing gates, and it offers a unique opportunity to assemble multicomponent/multifunctional logic circuitries. Our system included four enzyme inputs: β-galactosidase (β-gal), glucose oxidase (GOx), esterase (Est) and urease (Ur). With the assistance of two signal transducers (gold nanoparticles and acid-base indicators) or a pH meter, the outputs of the biological network can be conveniently read with the naked eye. In contrast to current methods, the approach presented here can realize cost-effective, label-free and colorimetric logic operations without complicated instruments. By designing a series of Boolean logic operations, we can logically judge the composition of the samples on the basis of visual output signals. Our work offers a promising paradigm for future biological computing technology and might be highly useful in future intelligent diagnostics, prodrug activation, smart drug delivery, process control, and electronic applications. Copyright © 2015 Elsevier B.V. All rights reserved.
Time-Of-Flight Camera, Optical Tracker and Computed Tomography in Pairwise Data Registration.
Pycinski, Bartlomiej; Czajkowska, Joanna; Badura, Pawel; Juszczyk, Jan; Pietka, Ewa
2016-01-01
A growing number of medical applications, including minimally invasive surgery, depend on multi-modal or multi-sensor data processing. Fast and accurate 3D scene analysis, comprising data registration, is crucial for the development of computer-aided diagnosis and therapy. The advancement of surface tracking systems based on optical trackers already plays an important role in surgical procedure planning. However, new modalities, like time-of-flight (ToF) sensors, widely explored in non-medical fields, are powerful and have the potential to become part of computer-aided surgery set-ups. Connecting different acquisition systems promises to provide valuable support for operating room procedures. Therefore, a detailed analysis of the accuracy of such multi-sensor positioning systems is needed. We present a system combining pre-operative CT series with intra-operative ToF-sensor and optical tracker point clouds. The methodology comprises: the optical sensor set-up and ToF-camera calibration procedures, data pre-processing algorithms, and the registration technique. The data pre-processing yields a surface, in the case of CT, and point clouds for the ToF-sensor and marker-driven optical tracker representations of an object of interest. The registration technique is based on the Iterative Closest Point algorithm. The experiments validate the registration of each pair of modalities/sensors using phantoms of four human organs, in terms of Hausdorff distance and mean absolute distance metrics. The best surface alignment was obtained for the CT and optical tracker combination, and the worst for experiments involving the ToF-camera. The obtained accuracies encourage further development of multi-sensor systems. A substantive discussion of the system's limitations and possible improvements, mainly related to the depth information produced by the ToF-sensor, is useful for computer-aided surgery developers.
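The registration step is standard point-to-point ICP. A compact sketch (not the authors' implementation) alternates nearest-neighbor matching with the closed-form SVD solution for the rigid transform:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=50):
    """Basic point-to-point ICP on (n,3) point clouds: match each source
    point to its nearest target point, then solve the rigid transform."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(iters):
        dist, idx = tree.query(src)
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)   # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    # mean nearest-neighbor distance from the final iteration
    return R_total, t_total, float(dist.mean())
```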
NASA Technical Reports Server (NTRS)
Sainsbury-Carter, J. B.; Conaway, J. H.
1973-01-01
The development and implementation of a preprocessor system for the finite element analysis of helicopter fuselages is described. The system utilizes interactive graphics for the generation, display, and editing of NASTRAN data for fuselage models. It is operated from an IBM 2250 cathode ray tube (CRT) console driven by an IBM 370/145 computer. Real-time interaction plus automatic data generation reduces the nominal 6 to 10 week time for manual generation and checking of data to a few days. The interactive graphics system consists of a series of satellite programs operated from a central NASTRAN Systems Monitor. Fuselage structural models including the outer shell and internal structure may be rapidly generated. All numbering systems are automatically assigned. Hard copy plots of the model labeled with GRID or element IDs are also available. General purpose programs for displaying and editing NASTRAN data are included in the system. Utilization of the NASTRAN interactive graphics system has made possible the multiple finite element analysis of complex helicopter fuselage structures within design schedules.
NASA Technical Reports Server (NTRS)
Staveland, Lowell
1994-01-01
This is the experimental and software detailed design report for the prototype task loading model (TLM) developed as part of the man-machine integration design and analysis system (MIDAS), as implemented and tested in phase 6 of the Army-NASA Aircrew/Aircraft Integration (A3I) Program. The A3I program is an exploratory development effort to advance the capabilities and use of computational representations of human performance and behavior in the design, synthesis, and analysis of manned systems. The MIDAS TLM computationally models the demands designs impose on operators to aid engineers in the conceptual design of aircraft crewstations. This report describes the TLM and the results of a series of experiments which were run during this phase to test its capabilities as a predictive task demand modeling tool. Specifically, it includes discussions of: the inputs and outputs of the TLM, the theories underlying it, the results of the test experiments, the use of the TLM as both a stand-alone tool and as part of a complete human operator simulation, and a brief introduction to the TLM software design.
Andrade, Edson de Oliveira; Andrade, Elizabeth Nogueira de; Gallo, José Hiran
2011-01-01
To present the experience of a health plan operator (Unimed-Manaus) in Manaus, Amazonas, Brazil, with the accreditation of imaging services and the demand induced by the supply of new services (Roemer's Law). This is a retrospective study of a time series covering the period from January 1998 to June 2004, in which computed tomography and magnetic resonance imaging services were implemented as part of the services offered by that health plan operator. Statistical analysis consisted of a descriptive and an inferential part, the latter using parametric mean tests (Student's t-test and ANOVA) and the Pearson correlation test. A 5% alpha and a 95% confidence interval were adopted. At Unimed-Manaus, the supply of new imaging services, by itself, was identified as capable of generating increased service demand, thus characterizing the phenomenon described by Roemer. The results underscore the need to be aware that the supply of new health services can bring about their increased use without a real demand.
On-the-fly scheduling as a manifestation of partial-order planning and dynamic task values.
Hannah, Samuel D; Neal, Andrew
2014-09-01
The aim of this study was to develop a computational account of the spontaneous task ordering that occurs within jobs as work unfolds ("on-the-fly task scheduling"). Air traffic control is an example of work in which operators have to schedule their tasks as a partially predictable work flow emerges. To date, little attention has been paid to such on-the-fly scheduling situations. We present a series of discrete-event models fit to conflict resolution decision data collected from experienced controllers operating in a high-fidelity simulation. Our simulations reveal air traffic controllers' scheduling decisions as examples of the partial-order planning approach of Hayes-Roth and Hayes-Roth. The most successful model uses opportunistic first-come-first-served scheduling to select tasks from a queue. Tasks with short deadlines are executed immediately. Tasks with long deadlines are evaluated to assess whether they need to be executed immediately or deferred. On-the-fly task scheduling is computationally tractable despite its surface complexity and understandable as an example of both the partial-order planning strategy and the dynamic-value approach to prioritization.
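A toy rendering of the best-fitting model's decision rule, with the deferral test stubbed out as a hypothetical slack check (the published model's actual parameters are not reproduced):

```python
import collections

Task = collections.namedtuple("Task", "name deadline")  # deadline in seconds

def can_defer(task, now, margin=120.0):
    # hypothetical deferral test: defer only if ample slack remains
    return task.deadline - now > margin

def schedule_step(queue, now, short=60.0):
    """One scheduling decision: take the oldest queued task (FCFS);
    execute it immediately if its deadline is short, otherwise evaluate
    whether it can be deferred."""
    if not queue:
        return None
    task = queue.popleft()                       # first come, first served
    if task.deadline - now <= short:
        return ("execute", task)
    return ("defer", task) if can_defer(task, now) else ("execute", task)

queue = collections.deque([Task("conflict-A", 45.0), Task("conflict-B", 400.0)])
print(schedule_step(queue, now=0.0))             # ('execute', conflict-A)
print(schedule_step(queue, now=0.0))             # ('defer', conflict-B)
```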
Detecting chaos in irregularly sampled time series.
Kulp, C W
2013-09-01
Recently, Wiebe and Virgin [Chaos 22, 013136 (2012)] developed an algorithm which detects chaos by analyzing a time series' power spectrum, computed using the Discrete Fourier Transform (DFT). Their algorithm, like other time series characterization algorithms, requires that the time series be regularly sampled. Real-world data, however, are often irregularly sampled, thus making the detection of chaotic behavior difficult or impossible with those methods. In this paper, a characterization algorithm is presented that effectively detects chaos in irregularly sampled time series. The work presented here is a modification of Wiebe and Virgin's algorithm and uses the Lomb-Scargle Periodogram (LSP) to compute a series' power spectrum instead of the DFT. The DFT is not appropriate for irregularly sampled time series; the LSP, however, is capable of computing the frequency content of irregularly sampled data. Furthermore, a new method of analyzing the power spectrum is developed, which can be useful for differentiating between chaotic and non-chaotic behavior. The new characterization algorithm is successfully applied to irregularly sampled data generated by a model as well as data consisting of observations of variable stars.
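A minimal example of the LSP step on irregularly sampled data, using scipy.signal.lombscargle (which takes angular frequencies); the subsequent spectrum analysis developed in the paper is not reproduced here:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 100.0, 500))     # irregular sample times
x = np.sin(2.0 * np.pi * 0.1 * t) + 0.2 * rng.standard_normal(t.size)

freqs = np.linspace(0.01, 1.0, 2000)          # ordinary frequencies
power = lombscargle(t, x - x.mean(), 2.0 * np.pi * freqs)
print(freqs[np.argmax(power)])                # ~0.1, the injected frequency
```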
Interactive Digital Signal Processor
NASA Technical Reports Server (NTRS)
Mish, W. H.
1985-01-01
The Interactive Digital Signal Processor (IDSP) consists of a set of time series analysis "operators" based on various algorithms commonly used for digital signal analysis. Processing of a digital signal time series to extract information is usually achieved by applying a number of fairly standard operations. IDSP is an excellent teaching tool for demonstrating the application of time series operators to artificially generated signals.
ERIC Educational Resources Information Center
Crowe, Suzy; Penney, Elaine
This book is the first volume in the "Kids and Computers" series, a series of books designed to help adults easily use high-quality, developmentally appropriate software with children. After reviewing the basics of selected software packages (how to start the program, stop the program, move around, and use special keys) several ideas and…
NASA Astrophysics Data System (ADS)
Schneider, P.; Roberts, D. A.
2007-12-01
The Fire Potential Index (FPI) is currently the only operationally used wildfire susceptibility index in the United States that incorporates remote sensing data in addition to meteorological information. Its remote sensing component utilizes relative greenness derived from a NDVI time series as a proxy for computing the ratio of live to dead vegetation. This study investigates the potential of Multiple Endmember Spectral Mixture Analysis (MESMA) as a more direct and physically reasonable way of computing the live ratio and applying it for the computation of the FPI. A time series of 16-day reflectance composites of Moderate Resolution Imaging Spectroradiometer (MODIS) data was used to perform the analysis. Endmember selection for green vegetation (GV), non-photosynthetic vegetation (NPV) and soil was performed in two stages. First, a subset of suitable endmembers was selected from an extensive library of reference and image spectra for each class using Endmember Average Root Mean Square Error (EAR), Minimum Average Spectral Angle (MASA) and a count-based technique. Second, the most appropriate endmembers for the specific data set were selected from the subset by running a series of 2-endmember models on representative images and choosing the ones that modeled the majority of pixels. The final set of endmembers was used for running MESMA on southern California MODIS composites from 2000 to 2006. 3- and 4-endmember models were considered. The best model was chosen on a per-pixel basis according to the minimum root mean square error of the models at each level of complexity. Endmember fractions were normalized by the shade endmember to generate realistic fractions of GV and NPV. In order to validate the MESMA-derived GV fractions they were compared against live ratio estimates from RG. A significant spatial and temporal relationship between both measures was found, indicating that GV fraction has the potential to substitute RG in computing the FPI. To further test this hypothesis the live ratio estimates obtained from MESMA were used to compute daily FPI maps for southern California from 2001 to 2006. A validation with historical wildfire data from the MODIS Active Fire product was carried out over the same time period using logistic regression. Initial results show that MESMA-derived GV fraction can be used successfully for generating FPI maps of southern California.
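Setting MESMA's per-pixel model selection aside, the underlying constrained unmixing step can be sketched with a fixed endmember set and nonnegative least squares (synthetic spectra, illustrative only):

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers):
    """Find nonnegative fractions of the endmember spectra (columns of
    `endmembers`) that best reproduce the pixel spectrum; fractions are
    then normalized to sum to one, loosely analogous to shade-normalizing
    GV/NPV/soil fractions."""
    frac, resid = nnls(endmembers, pixel)
    return frac / frac.sum(), resid

# toy 6-band example with three synthetic endmember spectra
E = np.array([[0.05, 0.08, 0.04, 0.45, 0.30, 0.20],    # green vegetation
              [0.15, 0.18, 0.22, 0.30, 0.35, 0.38],    # NPV
              [0.20, 0.25, 0.28, 0.32, 0.35, 0.37]]).T  # soil
pix = 0.6 * E[:, 0] + 0.4 * E[:, 1]
print(unmix(pix, E)[0])   # ~ [0.6, 0.4, 0.0]
```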
Computation of type curves for flow to partially penetrating wells in water-table aquifers
Moench, Allen F.
1993-01-01
Evaluation of Neuman's analytical solution for flow to a well in a homogeneous, anisotropic, water-table aquifer commonly requires large amounts of computation time and can produce inaccurate results for selected combinations of parameters. Large computation times occur because the integrand of a semi-infinite integral involves the summation of an infinite series. Each term of the series requires evaluation of the roots of equations, and the series itself is sometimes slowly convergent. Inaccuracies can result from lack of computer precision or from the use of improper methods of numerical integration. In this paper it is proposed to use a method of numerical inversion of the Laplace transform solution, provided by Neuman, to overcome these difficulties. The solution in Laplace space is simpler in form than the real-time solution; that is, the integrand of the semi-infinite integral does not involve an infinite series or the need to evaluate roots of equations. Because the integrand is evaluated rapidly, advanced methods of numerical integration can be used to improve accuracy with an overall reduction in computation time. The proposed method of computing type curves, for which a partially documented computer program (WTAQ1) was written, was found to reduce computation time by factors of 2 to 20 over the time needed to evaluate the closed-form, real-time solution.
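One widely used numerical inversion scheme for this class of well-hydraulics problems is the Stehfest algorithm; a self-contained sketch (not necessarily the scheme coded in WTAQ1) is:

```python
import math

def stehfest_weights(n=12):
    """Stehfest coefficients V_i for even n."""
    V = []
    for i in range(1, n + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, n // 2) + 1):
            s += (k ** (n // 2) * math.factorial(2 * k) /
                  (math.factorial(n // 2 - k) * math.factorial(k) *
                   math.factorial(k - 1) * math.factorial(i - k) *
                   math.factorial(2 * k - i)))
        V.append((-1) ** (i + n // 2) * s)
    return V

def stehfest_invert(F, t, n=12):
    """Approximate f(t) from its Laplace transform F(p):
    f(t) ~ (ln 2 / t) * sum_i V_i * F(i * ln 2 / t)."""
    ln2 = math.log(2.0)
    V = stehfest_weights(n)
    return ln2 / t * sum(V[i - 1] * F(i * ln2 / t) for i in range(1, n + 1))

# check against a known pair: L{exp(-t)} = 1/(p+1)
print(stehfest_invert(lambda p: 1.0 / (p + 1.0), 1.0))  # ~ exp(-1) = 0.3679
```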
TDRSS-user orbit determination using batch least-squares and sequential methods
NASA Astrophysics Data System (ADS)
Oza, D. H.; Jones, T. L.; Hakimi, M.; Samii, Mina V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.
1993-02-01
The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) commissioned Applied Technology Associates, Incorporated, to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system on a Disk Operating System (DOS)-based personal computer (PC) as a prototype system for sequential orbit determination of spacecraft. This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft, Landsat-4, obtained using RTOD/E, operating on a PC, with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), and operating on a mainframe computer. The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the January 17-23, 1991, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances and the first measurement residuals for the sequential case) of solutions produced by the batch and sequential methods. The forward-filtered RTOD/E orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were less than 40 meters after the filter had reached steady state.
Hadoop for High-Performance Climate Analytics: Use Cases and Lessons Learned
NASA Technical Reports Server (NTRS)
Tamkin, Glenn
2013-01-01
Scientific data services are a critical aspect of the NASA Center for Climate Simulation (NCCS) mission. Hadoop, via MapReduce, provides an approach to high-performance analytics that is proving to be useful for data-intensive problems in climate research. It offers an analysis paradigm that uses clusters of computers and combines distributed storage of large data sets with parallel computation. The NCCS is particularly interested in the potential of Hadoop to speed up basic operations common to a wide range of analyses. In order to evaluate this potential, we prototyped a series of canonical MapReduce operations over a test suite of observational and climate simulation datasets. The initial focus was on averaging operations over arbitrary spatial and temporal extents within Modern-Era Retrospective Analysis for Research and Applications (MERRA) data. After preliminary results suggested that this approach improves efficiencies within data-intensive analytic workflows, we invested in building a cyberinfrastructure resource for developing a new generation of climate data analysis capabilities using Hadoop. This resource is focused on reducing the time spent in the preparation of reanalysis data used in data-model intercomparison, a long-sought goal of the climate community. This paper summarizes the related use cases and lessons learned.
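The canonical averaging operation maps naturally onto MapReduce: mappers emit partial sums keyed by the averaging cell, and reducers combine them. A pure-Python toy (hypothetical record layout, no Hadoop dependency) illustrates the shape:

```python
from collections import defaultdict

def map_phase(records):
    """Each record is (date_string, lat_band, value); emit partial sums
    keyed by the (month, lat_band) cell being averaged over."""
    for t, lat, v in records:
        yield (t[:7], lat), (v, 1)          # key: ("YYYY-MM", lat_band)

def reduce_phase(pairs):
    sums = defaultdict(lambda: [0.0, 0])
    for key, (v, n) in pairs:
        sums[key][0] += v
        sums[key][1] += n
    return {k: s / n for k, (s, n) in sums.items()}

records = [("1981-01-03", 10, 250.1), ("1981-01-14", 10, 251.3),
           ("1981-02-02", 10, 249.0)]
print(reduce_phase(map_phase(records)))     # monthly means per band
```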
Transportable satellite voice terminals in Canada's north: Benefits from the users' perspectives
NASA Astrophysics Data System (ADS)
Sivertz, Christopher B.
Infosat Telecommunications has developed a series of Ku-band transportable terminals for toll-quality telecommunications applications. These terminals are designed to provide reliable telephone, Group III facsimile, and computer data to users needing communications on short notice. The terminals are capable of providing full duplex service interconnecting into the public switched network or operating as an off-premise extension of an office private branch exchange. The terminals use a 1.8 m antenna and can be deployed from the back of a pick-up truck, single-axle trailer, or surface-mounted metal base. They operate reliably in extreme conditions and make very efficient use of satellite capacity. This paper describes the features and performance of these terminals. Highlights of a number of remote installations are also discussed.
Binarized cross-approximate entropy in crowdsensing environment.
Skoric, Tamara; Mohamoud, Omer; Milovanovic, Branislav; Japundzic-Zigon, Nina; Bajic, Dragana
2017-01-01
Personalised monitoring in health applications has been recognised as part of the mobile crowdsensing concept, where subjects equipped with sensors extract information and share it for personal or common benefit. Limited transmission resources favor local analysis methods, but this approach is incompatible with analytical tools that require stationary and artefact-free data. This paper proposes a computationally efficient binarised cross-approximate entropy, referred to as (X)BinEn, for unsupervised cardiovascular signal processing in environments where energy and processor resources are limited. The proposed method is a descendant of the cross-approximate entropy ((X)ApEn). It operates on binary, differentially encoded data series split into m-sized vectors. The Hamming distance is used as a distance measure, while a search for similarities is performed on the vector sets. The procedure is tested on rats under shaker and restraint stress, and compared to the existing (X)ApEn results. The number of processing operations is reduced. (X)BinEn captures entropy changes in a similar manner to (X)ApEn. The coding coarseness yields an adverse effect of reduced sensitivity, but it attenuates parameter inconsistency and binary bias. A special case of (X)BinEn is equivalent to Shannon's entropy. A binary conditional entropy for m = 1 vectors is embedded into the (X)BinEn procedure. (X)BinEn can be applied to a single time series as an auto-entropy method, or to a pair of time series as a cross-entropy method. Its low processing requirements make it suitable for mobile, battery-operated, self-attached sensing devices with limited power and processor resources. Copyright © 2016 Elsevier Ltd. All rights reserved.
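The description above translates almost directly into code. The sketch below is an illustration of the idea rather than the authors' exact algorithm; it assumes a first-difference binarisation, an ApEn-style log-ratio, and a tolerance r expressed as a Hamming-distance threshold.

    import numpy as np

    def binarize(x):
        # differential binary encoding: 1 where the series increases, else 0
        return (np.diff(x) > 0).astype(np.uint8)

    def bin_cross_entropy(x, y, m=2, r=0):
        # sketch of a binarised cross-approximate entropy: count template
        # matches between binary m-vectors of x and y whose Hamming distance
        # is at most r, then form the usual ApEn-style phi(m) - phi(m+1)
        bx, by = binarize(x), binarize(y)
        n = min(len(bx), len(by))

        def phi(m):
            vx = np.array([bx[i:i + m] for i in range(n - m + 1)])
            vy = np.array([by[i:i + m] for i in range(n - m + 1)])
            d = (vx[:, None, :] != vy[None, :, :]).sum(axis=2)  # Hamming distances
            c = (d <= r).mean(axis=1)                           # match fractions
            return np.mean(np.log(c + 1e-12))

        return phi(m) - phi(m + 1)

    rng = np.random.default_rng(0)
    x, y = rng.normal(size=500), rng.normal(size=500)
    print(bin_cross_entropy(x, y, m=2, r=0))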
Towards an Entropy Stable Spectral Element Framework for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Parsani, Matteo; Fisher, Travis C.; Nielsen, Eric J.
2016-01-01
Entropy stable (SS) discontinuous spectral collocation formulations of any order are developed for the compressible Navier-Stokes equations on hexahedral elements. Recent progress on two complementary efforts is presented. The first effort is a generalization of previous SS spectral collocation work to extend the applicable set of points from tensor product, Legendre-Gauss-Lobatto (LGL) to tensor product Legendre-Gauss (LG) points. The LG and LGL point formulations are compared on a series of test problems. Although the LG operators are more costly to implement, they are shown to be significantly more accurate on comparable grids. Both the LGL and LG operators are of comparable efficiency and robustness, as is demonstrated using test problems for which conventional FEM techniques suffer instability. The second effort generalizes previous SS work to include the possibility of p-refinement at non-conforming interfaces. A generalization of existing entropy stability machinery is developed to accommodate the nuances of fully multi-dimensional summation-by-parts (SBP) operators. The entropy stability of the compressible Euler equations on non-conforming interfaces is demonstrated using the newly developed LG operators and multi-dimensional interface interpolation operators.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturm, C.; Soni, A.; Aoki, Y.
2009-07-01
We extend the Rome-Southampton regularization independent momentum-subtraction renormalization scheme (RI/MOM) for bilinear operators to one with a nonexceptional, symmetric subtraction point. Two-point Green's functions with the insertion of quark bilinear operators are computed with scalar, pseudoscalar, vector, axial-vector and tensor operators at one-loop order in perturbative QCD. We call this new scheme RI/SMOM, where the S stands for 'symmetric'. Conversion factors are derived, which connect the RI/SMOM scheme and the MS scheme and can be used to convert results obtained in lattice calculations into the MS scheme. Such a symmetric subtraction point involves nonexceptional momenta implying a lattice calculation with substantially suppressed contamination from infrared effects. Further, we find that the size of the one-loop corrections for these infrared improved kinematics is substantially decreased in the case of the pseudoscalar and scalar operator, suggesting a much better behaved perturbative series. Therefore it should allow us to reduce the error in the determination of the quark mass appreciably.
A view on coupled cluster perturbation theory using a bivariational Lagrangian formulation.
Kristensen, Kasper; Eriksen, Janus J; Matthews, Devin A; Olsen, Jeppe; Jørgensen, Poul
2016-02-14
We consider two distinct coupled cluster (CC) perturbation series that both expand the difference between the energies of the CCSD (CC with single and double excitations) and CCSDT (CC with single, double, and triple excitations) models in orders of the Møller-Plesset fluctuation potential. We initially introduce the E-CCSD(T-n) series, in which the CCSD amplitude equations are satisfied at the expansion point, and compare it to the recently developed CCSD(T-n) series [J. J. Eriksen et al., J. Chem. Phys. 140, 064108 (2014)], in which not only the CCSD amplitude, but also the CCSD multiplier equations are satisfied at the expansion point. The computational scaling is similar for the two series, and both are term-wise size extensive with a formal convergence towards the CCSDT target energy. However, the two series are different, and the CCSD(T-n) series is found to exhibit a more rapid convergence up through the series, which we trace back to the fact that more information at the expansion point is utilized than for the E-CCSD(T-n) series. The present analysis can be generalized to any perturbation expansion representing the difference between a parent CC model and a higher-level target CC model. In general, we demonstrate that, whenever the parent parameters depend upon the perturbation operator, a perturbation expansion of the CC energy (where only parent amplitudes are used) differs from a perturbation expansion of the CC Lagrangian (where both parent amplitudes and parent multipliers are used). For the latter case, the bivariational Lagrangian formulation becomes more than a convenient mathematical tool, since it facilitates a different and faster convergent perturbation series than the simpler energy-based expansion.
NASA Technical Reports Server (NTRS)
Grossman, Robert
1991-01-01
Algorithms previously developed by the author give formulas which can be used for the efficient symbolic computation of series expansions of solutions to nonlinear systems of ordinary differential equations. As a by-product of this analysis, formulas are derived which relate trees to the coefficients of the series expansions, similar to the work of Leroux and Viennot, and of Lamnabhi, Leroux and Viennot.
NASA Astrophysics Data System (ADS)
Zou, Hai-Long; Yu, Zu-Guo; Anh, Vo; Ma, Yuan-Lin
2018-05-01
In recent years, researchers have proposed several methods to transform time series (such as those of fractional Brownian motion) into complex networks. In this paper, we construct horizontal visibility networks (HVNs) based on the α-stable Lévy motion. We aim to study the relations of the multifractal and Laplacian spectra of the transformed networks to the parameters α and β of the α-stable Lévy motion. First, we employ the sandbox algorithm to compute the mass exponents and multifractal spectrum to investigate the multifractality of these HVNs. Then we perform least-squares fits to find possible relations of the average fractal dimension, the average information dimension and the average correlation dimension against α using several methods of model selection. We also investigate possible dependence relations on α of the eigenvalues and energy calculated from the Laplacian and normalized Laplacian operators of the constructed HVNs. All of these constructions and estimates will help us to evaluate the validity and usefulness of the mappings between time series and networks, especially between time series of α-stable Lévy motions and HVNs.
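The horizontal visibility construction itself is compact enough to state in code. The sketch below implements the standard criterion (two samples are linked when every sample strictly between them is lower than both); the Cauchy-increment driver is used because the Cauchy distribution is an exact α-stable case (α = 1, β = 0).

    import numpy as np

    def horizontal_visibility_edges(x):
        # Horizontal visibility criterion: nodes i and j are linked if every
        # sample strictly between them is lower than both x[i] and x[j].
        edges = []
        n = len(x)
        for i in range(n - 1):
            edges.append((i, i + 1))      # adjacent samples always see each other
            top = x[i + 1]                # running maximum of the samples between i and j
            for j in range(i + 2, n):
                if x[i] > top and x[j] > top:
                    edges.append((i, j))
                top = max(top, x[j])
        return edges

    rng = np.random.default_rng(1)
    x = np.cumsum(rng.standard_cauchy(1000))   # alpha = 1 stable motion
    print(len(horizontal_visibility_edges(x)), "edges")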
Software for Managing Parametric Studies
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; McCann, Karen M.; DeVivo, Adrian
2003-01-01
The Information Power Grid Virtual Laboratory (ILab) is a Practical Extraction and Reporting Language (PERL) graphical-user-interface computer program that generates shell scripts to facilitate parametric studies performed on the Grid. (The Grid denotes a worldwide network of supercomputers used for scientific and engineering computations involving data sets too large to fit on desktop computers.) Heretofore, parametric studies on the Grid have been impeded by the need to create control-language scripts and edit input data files, painstaking tasks that are necessary for managing multiple jobs on multiple computers. ILab reflects an object-oriented approach to automation of these tasks: all data and operations are organized into packages in order to accelerate development and debugging. A container or document object in ILab, called an experiment, contains all the information (data and file paths) necessary to define a complex series of repeated, sequenced, and/or branching processes. For convenience and to enable reuse, this object is serialized to and from disk storage. At run time, the current ILab experiment is used to generate required input files and shell scripts, create directories, copy data files, and then both initiate and monitor the execution of all computational processes.
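ILab itself is written in PERL; the Python sketch below only illustrates the experiment pattern described above (a serializable container expanded into per-case directories and generated shell scripts). The names, parameters, and command line are hypothetical.

    import itertools
    import json
    import pathlib

    # a serializable "experiment" container: parameter grid plus a command template
    experiment = {
        "name": "wing_sweep",
        "command": "./solver --mach {mach} --aoa {aoa}",
        "params": {"mach": [0.7, 0.8], "aoa": [0, 2, 4]},
    }
    pathlib.Path("experiment.json").write_text(json.dumps(experiment))  # serialize for reuse

    # expand the grid into per-case run directories with generated shell scripts
    keys = sorted(experiment["params"])
    for values in itertools.product(*(experiment["params"][k] for k in keys)):
        case = dict(zip(keys, values))
        run_dir = pathlib.Path(experiment["name"]) / "_".join(f"{k}{v}" for k, v in case.items())
        run_dir.mkdir(parents=True, exist_ok=True)
        (run_dir / "run.sh").write_text("#!/bin/sh\n" + experiment["command"].format(**case) + "\n")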
Statistical Engineering in Air Traffic Management Research
NASA Technical Reports Server (NTRS)
Wilson, Sara R.
2015-01-01
NASA is working to develop an integrated set of advanced technologies to enable efficient arrival operations in high-density terminal airspace for the Next Generation Air Transportation System. This integrated arrival solution is being validated and verified in laboratories and transitioned to a field prototype for an operational demonstration at a major U.S. airport. Within NASA, this is a collaborative effort between Ames and Langley Research Centers involving a multi-year iterative experimentation process. Designing and analyzing a series of sequential batch computer simulations and human-in-the-loop experiments across multiple facilities and simulation environments involves a number of statistical challenges. Experiments conducted in separate laboratories typically have different limitations and constraints, and can take different approaches with respect to the fundamental principles of statistical design of experiments. This often makes it difficult to compare results from multiple experiments and incorporate findings into the next experiment in the series. A statistical engineering approach is being employed within this project to support risk-informed decision making and maximize the knowledge gained within the available resources. This presentation describes a statistical engineering case study from NASA, highlights statistical challenges, and discusses areas where existing statistical methodology is adapted and extended.
Three-dimensional printing in cardiology: Current applications and future challenges.
Luo, Hongxing; Meyer-Szary, Jarosław; Wang, Zhongmin; Sabiniewicz, Robert; Liu, Yuhao
2017-01-01
Three-dimensional (3D) printing has attracted huge interest in recent years. Broadly speaking, it refers to technology which converts a predesigned virtual model into a touchable object. In clinical medicine, it usually converts a series of two-dimensional medical images acquired through computed tomography, magnetic resonance imaging or 3D echocardiography into a physical model. Medical 3D printing consists of three main steps: image acquisition, virtual reconstruction and 3D manufacturing. It is a promising tool for preoperative evaluation, medical device design, hemodynamic simulation and medical education; it is also likely to reduce operative risk and increase operative success. However, the most relevant studies are case reports or series, which are underpowered for testing its actual effect on patient outcomes. The decision to make a 3D cardiac model may seem arbitrary, since it is mostly based on a cardiologist's perceived difficulty in performing an interventional procedure. A uniform consensus is urgently needed to standardize the key steps of 3D printing from image acquisition to final production. In the future, rigorously designed clinical trials could further validate the effect of 3D printing on the treatment of cardiovascular diseases. (Cardiol J 2017; 24, 4: 436-444).
Interactive digital signal processor
NASA Technical Reports Server (NTRS)
Mish, W. H.; Wenger, R. M.; Behannon, K. W.; Byrnes, J. B.
1982-01-01
The Interactive Digital Signal Processor (IDSP) is examined. It consists of a set of time series analysis operators, each of which operates on an input file to produce an output file. The operators can be executed in any order that makes sense and recursively, if desired. The operators are the various algorithms used in digital time series analysis work. User-written operators can be easily interfaced to the system. The system can be operated both interactively and in batch mode. In IDSP a file can consist of up to n (currently n = 8) simultaneous time series. IDSP currently includes over thirty standard operators that range from Fourier transform operations, design and application of digital filters, and eigenvalue analysis to operators that provide graphical output, batch operation, editing, and display of information.
Attitude ground support system for the solar maximum mission spacecraft
NASA Technical Reports Server (NTRS)
Nair, G.
1980-01-01
The SMM attitude ground support system (AGSS) supports the acquisition of spacecraft roll attitude reference, performs the in-flight calibration of the attitude sensor complement, supports onboard control autonomy via onboard computer data base updates, and monitors onboard computer (OBC) performance. Initial roll attitude acquisition is accomplished by obtaining a coarse 3 axis attitude estimate from magnetometer and Sun sensor data and subsequently refining it by processing data from the fixed head star trackers. In-flight calibration of the attitude sensor complement is achieved by processing data from a series of slew maneuvers designed to maximize the observability and accuracy of the appropriate alignments and biases. To ensure autonomy of spacecraft operation, the AGSS selects guide stars and computes sensor occultation information for uplink to the OBC. The onboard attitude control performance is monitored on the ground through periodic attitude determination and processing of OBC data in downlink telemetry. In general, the control performance has met mission requirements. However, software and hardware problems have resulted in sporadic attitude reference losses.
Dynamic Stall Measurements and Computations for a VR-12 Airfoil with a Variable Droop Leading Edge
NASA Technical Reports Server (NTRS)
Martin, P. B.; McAlister, K. W.; Chandrasekhara, M. S.; Geissler, W.
2003-01-01
High density-altitude operations of helicopters with advanced performance and maneuver capabilities have led to fundamental research on active high-lift system concepts for rotor blades. The requirement for this type of system was to improve the sectional lift-to-drag ratio by alleviating dynamic stall on the retreating blade while simultaneously reducing the transonic drag rise of the advancing blade. Both measured and computational results showed that a Variable Droop Leading Edge (VDLE) airfoil is a viable concept for application to a rotor high-lift system. Results are presented for a series of 2D compressible dynamic stall wind tunnel tests with supporting CFD results for selected test cases. These measurements and computations show a dramatic decrease in the drag and pitching moment associated with severe dynamic stall when the VDLE concept is applied to the Boeing VR-12 airfoil. Test results also show an elimination of the negative pitch damping observed in the baseline moment hysteresis curves.
Making intelligent systems team players: Additional case studies
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra L.; Rhoads, Ron W.
1993-01-01
Observations from a case study of intelligent systems are reported as part of a multi-year interdisciplinary effort to provide guidance and assistance for designers of intelligent systems and their user interfaces. A series of studies were conducted to investigate issues in designing intelligent fault management systems in aerospace applications for effective human-computer interaction. The results of the initial study are documented in two NASA technical memoranda: TM 104738, Making Intelligent Systems Team Players: Case Studies and Design Issues, Volumes 1 and 2; and TM 104751, Making Intelligent Systems Team Players: Overview for Designers. The objective of this additional study was to broaden the investigation of human-computer interaction design issues beyond the focus on monitoring and fault detection in the initial study. The results of this second study are documented in this report, which is intended as a supplement to the original design guidance documents. These results should be of interest to designers of intelligent systems for use in real-time operations, and to researchers in the areas of human-computer interaction and artificial intelligence.
Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus
2015-01-01
Cache blocking is a technique widely used in scientific computing to minimize the exchange of information with main memory by reusing the data kept in cache memory. In tomographic reconstruction on standard computers using vector instructions, cache blocking turns out to be central to optimize performance. To this end, sinograms of the tilt-series and slices of the volumes to be reconstructed have to be divided into small blocks that fit into the different levels of cache memory. The code is then reorganized so as to operate with a block as much as possible before proceeding with another one. This data article is related to the research article titled Tomo3D 2.0 – Exploitation of Advanced Vector eXtensions (AVX) for 3D reconstruction (Agulleiro and Fernandez, 2015) [1]. Here we present data of a thorough study of the performance of tomographic reconstruction by varying cache block sizes, which allows derivation of expressions for their automatic quasi-optimal tuning. PMID:26217710
Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division and Scientific Visualization Group
2018-05-07
Summer Lecture Series 2008: Scientific visualization transforms abstract data into readily comprehensible images, provides a vehicle for "seeing the unseeable," and plays a central role in both experimental and computational sciences. Wes Bethel, who heads the Scientific Visualization Group in the Computational Research Division, presents an overview of visualization and computer graphics, current research challenges, and future directions for the field.
Turbulent wind at the equatorial segment of an operating Darrieus wind turbine blade
NASA Astrophysics Data System (ADS)
Connell, J. R.; Morris, V. R.
1989-09-01
Six turbulent wind time series, measured at equally spaced equator-height locations on a circle 3 m outside a 34-m Darrieus rotor, are analyzed to approximate the wind fluctuations experienced by the rotor. The flatwise lower root-bending stress of one blade was concurrently recorded. The wind data are analyzed in three ways: wind components radial and tangential to the rotation of a blade are rotationally sampled; induction and wake effects of the rotor are estimated from the six Eulerian time series; and turbulence spectra are computed for both the measured wind and the modeled wind from the PNL theory of rotationally sampled turbulence. The wind and the rotor response are related by computing the spectral response function of the flatwise lower root-bending stress. Two bands of resonant response that surround the first and second flatwise modal frequencies shift with the rotor rotation rate.
NASA Astrophysics Data System (ADS)
Lieu, Richard
2018-01-01
A hierarchy of statistics of increasing sophistication and accuracy is proposed, to exploit an interesting and fundamental arithmetic structure in the photon bunching noise of incoherent light of large photon occupation number, with the purpose of suppressing the noise and rendering a more reliable and unbiased measurement of the light intensity. The method does not require any new hardware; rather, it operates at the software level, with the help of high-precision computers, to reprocess the intensity time series of the incident light to create a new series with smaller bunching-noise coherence length. The ultimate accuracy improvement of this method of flux measurement is limited by the timing resolution of the detector and the photon occupation number of the beam (the higher the photon number, the better the performance). The principal application is accuracy improvement in the bolometric flux measurement of a radio source.
Manoj Kumar, Palanivelu; Karthikeyan, Chandrabose; Hari Narayana Moorthy, Narayana Subbiah; Trivedi, Piyush
2006-11-01
In the present paper, the quantitative structure-activity relationship (QSAR) approach was applied to understand the affinity and selectivity of a novel series of triaryl imidazole derivatives towards the glucagon receptor. Statistically significant and highly predictive QSARs were derived for glucagon receptor inhibition by triaryl imidazoles using QuaSAR descriptors of the Molecular Operating Environment (MOE), employing a computer-assisted multiple regression procedure. The generated QSAR models revealed that factors related to hydrophobicity, molecular shape and geometry predominantly influence the glucagon receptor binding affinity of the triaryl imidazoles, indicating the relevance of shape-specific steric interactions between the molecule and the receptor. Further, QSAR models formulated for selective inhibition of the glucagon receptor over p38 mitogen-activated protein (MAP) kinase by the compounds in the series highlight that the same structural features which influence glucagon receptor affinity also contribute to their selective inhibition.
NASA Technical Reports Server (NTRS)
Pokras, V. M.; Yevdokimov, V. P.; Maslov, V. D.
1978-01-01
The structure and potential of the information reference system OZhUR, designed for the automated data processing systems of scientific space vehicles (SV), are considered. The system OZhUR ensures control of the extraction phase of processing with respect to a concrete SV and the exchange of data between phases. The practical application of the system OZhUR is exemplified in the construction of a data processing system for satellites of the Cosmos series. As a result of automating the operations of exchange and control, the volume of manual preparation of data is significantly reduced, and there is no longer any need for individual logs which fix the status of data processing. The system OZhUR is included in the automated data processing system Nauka, which is implemented in the PL-1 language on a BOS OS electronic computer.
Mixed-state fidelity susceptibility through iterated commutator series expansion
NASA Astrophysics Data System (ADS)
Tonchev, N. S.
2014-11-01
We present a perturbative approach to the problem of computation of mixed-state fidelity susceptibility (MFS) for thermal states. The mathematical techniques used provide an analytical expression for the MFS as a formal expansion in terms of the thermodynamic mean values of successively higher commutators of the Hamiltonian with the operator involved through the control parameter. That expression is naturally divided into two parts: the usual isothermal susceptibility and a constituent in the form of an infinite series of thermodynamic mean values which encodes the noncommutativity in the problem. If the symmetry properties of the Hamiltonian are given in terms of the generators of some (finite-dimensional) algebra, the obtained expansion may be evaluated in a closed form. This issue is tested on several popular models, for which it is shown that the calculations are much simpler if they are based on the properties from the representation theory of the Heisenberg or SU(1, 1) Lie algebra.
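The mechanics behind such expansions can be made concrete with the standard Hausdorff identity, which generates the iterated commutators referred to above. The notation here is generic (a sketch, not the paper's specific MFS formula); the pairing of these commutators with thermodynamic mean values follows the paper's construction.

    e^{\lambda A} H e^{-\lambda A}
      = H + \lambda\,[A,H] + \frac{\lambda^{2}}{2!}\,[A,[A,H]] + \cdots
      = \sum_{n=0}^{\infty} \frac{\lambda^{n}}{n!}\,\mathrm{ad}_A^{\,n}(H),
    \qquad \mathrm{ad}_A(H) \equiv [A,H].

When A and H close on a finite-dimensional Lie algebra (such as the Heisenberg or SU(1, 1) cases mentioned above), the series of iterated commutators terminates or resums, which is what permits the closed-form evaluation.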
Multicore: Fallout From a Computing Evolution (LBNL Summer Lecture Series)
Yelick, Kathy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC)
2018-05-07
Summer Lecture Series 2008: Parallel computing used to be reserved for big science and engineering projects, but in two years that's all changed. Even laptops and hand-helds use parallel processors. Unfortunately, the software hasn't kept pace. Kathy Yelick, Director of the National Energy Research Scientific Computing Center at Berkeley Lab, describes the resulting chaos and the computing community's efforts to develop exciting applications that take advantage of tens or hundreds of processors on a single chip.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Qiang; Yang, Jiang
This work is concerned with the Fourier spectral approximation of various integral differential equations associated with some linear nonlocal diffusion and peridynamic operators under periodic boundary conditions. For radially symmetric kernels, the nonlocal operators under consideration are diagonalizable in the Fourier space so that the main computational challenge is on the accurate and fast evaluation of their eigenvalues or Fourier symbols consisting of possibly singular and highly oscillatory integrals. For a large class of fractional power-like kernels, we propose a new approach based on reformulating the Fourier symbols both as coefficients of a series expansion and solutions of some simple ODE models. We then propose a hybrid algorithm that utilizes both truncated series expansions and high order Runge–Kutta ODE solvers to provide fast evaluation of Fourier symbols in both one and higher dimensional spaces. It is shown that this hybrid algorithm is robust, efficient and accurate. As applications, we combine this hybrid spectral discretization in the spatial variables and the fourth-order exponential time differencing Runge–Kutta for temporal discretization to offer high order approximations of some nonlocal gradient dynamics including nonlocal Allen–Cahn equations, nonlocal Cahn–Hilliard equations, and nonlocal phase-field crystal models. Numerical results show the accuracy and effectiveness of the fully discrete scheme and illustrate some interesting phenomena associated with the nonlocal models.
Mono-stereo-autostereo: the evolution of 3-dimensional neurosurgical planning.
Stadie, Axel T; Kockro, Ralf A
2013-01-01
In the past decade, surgery planning has changed significantly. The main reason is the improvements in computer graphical rendering power and display technology, which turned the plain graphics of the mid-1990s into interactive stereoscopic objects. Here we report our experiences with 2 virtual reality systems used for planning neurosurgical operations. A series of 208 operations were planned with the Dextroscope (Bracco AMT, Singapore) requiring the use of liquid crystal display shutter glasses. The participating neurosurgeons answered a questionnaire after the planning procedure and postoperatively. In a second prospective series of 33 patients, we used an autostereoscopic monitor system (MD20-3-D; Setred SA, Sweden) to plan intracranial operations. A questionnaire regarding the value of surgery planning was answered preoperatively and postoperatively. The Dextroscope could be integrated into daily surgical routine. Surgeons regarded their understanding of the pathoanatomical situation as improved, leading to enhanced intraoperative orientation and confidence compared with conventional planning. The autostereoscopic Setred system was regarded as helpful in establishing the surgical strategy and analyzing the pathoanatomical situation compared with conventional planning. Both systems were perceived as a backup in case of failure of the standard navigation system. Improvement of display and interaction techniques adds to the realism of the planning process and enables precise structural understanding preoperatively. This minimizes intraoperative guesswork and exploratory dissection. Autostereoscopic display techniques will further increase the value and acceptance of 3-dimensional planning and intraoperative navigation.
Tsuang, Fon-Yih; Chen, Chia-Hsien; Kuo, Yi-Jie; Tseng, Wei-Lung; Chen, Yuan-Shen; Lin, Chin-Jung; Liao, Chun-Jen; Lin, Feng-Huei; Chiang, Chang-Jung
2017-09-01
Minimally invasive spine surgery has become increasingly popular in clinical practice, and it offers patients the potential benefits of reduced blood loss, wound pain, and infection risk, and it also diminishes the loss of working time and length of hospital stay. However, surgeons require more intraoperative fluoroscopy and ionizing radiation exposure during minimally invasive spine surgery for localization, especially for guidance in instrumentation placement. In addition, computer navigation is not accessible in some facility-limited institutions. This study aimed to demonstrate a method for percutaneous screw placement using only the anterior-posterior (AP) trajectory of intraoperative fluoroscopy. A technical report (a retrospective and prospective case series) was carried out. Patients who received posterior fixation with percutaneous pedicle screws for thoracolumbar degenerative disease or trauma comprised the patient sample. We retrospectively reviewed the charts of 670 consecutive patients who received 4,072 pedicle screws between December 2010 and August 2015. Another case series study was conducted prospectively in three additional hospitals, and 88 consecutive patients with 413 pedicle screws were enrolled from February 2014 to July 2016. The fluoroscopy shot number and radiation dose were recorded. In the prospective study, 78 patients with 371 screws received computed tomography at 3 months postoperatively to evaluate the fusion condition and screw positions. In the retrospective series, the placement of a percutaneous screw required 5.1 shots (2-14, standard deviation [SD]=2.366) of AP fluoroscopy. One screw was revised because of a medial wall breach of the pedicle. In the prospective series, 5.8 shots (2-16, SD=2.669) were required for one percutaneous pedicle screw placement. There were two screws with a Grade 1 breach (8.6%), both at the lateral wall of the pedicle, out of 23 screws placed at the thoracic spine at T9-T12. For the lumbar and sacral areas, there were 15 Grade 1 breaches (4.3%), 1 Grade 2 breach (0.3%), and 1 Grade 3 breach (0.3%). No revision surgery was necessary. This method avoids lateral shots of fluoroscopy during screw placement and thus decreases the operation time and exposes surgeons to less radiation. At the same time, compared with the computer-navigated procedure, it is less facility-demanding, and provides satisfactory reliability and accuracy. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Improved digital filters for evaluating Fourier and Hankel transform integrals
Anderson, Walter L.
1975-01-01
New algorithms are described for evaluating Fourier (cosine, sine) and Hankel (J0, J1) transform integrals by means of digital filters. The filters have been designed with extended lengths so that a variable convolution operation can be applied to a large class of integral transforms having the same system transfer function. A lagged-convolution method is also presented to significantly decrease the computation time when computing a series of like transforms over a parameter set spaced the same as the filters. Accuracy of the new filters is comparable to Gaussian integration, provided moderate parameter ranges and well-behaved kernel functions are used. A collection of Fortran IV subprograms is included for both real and complex functions for each filter type. The algorithms have been successfully used in geophysical applications containing a wide variety of integral transforms.
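The digital-filter idea reduces each transform evaluation to a single dot product against precomputed weights. The Python sketch below shows the pattern for the J0 case under stated assumptions: the `shifts` and `weights` arrays stand in for a published filter table (such as the ones accompanying this paper) and the values given here are placeholders, not real coefficients.

    import numpy as np

    def hankel_j0(kernel, b, shifts, weights):
        # Digital-filter evaluation of F(b) = integral of kernel(l) * J0(l*b) * l dl:
        # sample the kernel at log-spaced abscissas scaled by b, then take the
        # dot product with the filter weights. In a lagged-convolution sweep,
        # output points spaced like the filter reuse these kernel samples.
        lam = np.exp(shifts) / b
        return np.dot(kernel(lam), weights) / b

    shifts = np.linspace(-4.0, 4.0, 61)   # placeholder spacing
    weights = np.zeros(61)                # placeholder; substitute published values
    print(hankel_j0(lambda l: np.exp(-l**2), b=2.0, shifts=shifts, weights=weights))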
Continued research on selected parameters to minimize community annoyance from airplane noise
NASA Technical Reports Server (NTRS)
Frair, L.
1981-01-01
Results from continued research on selected parameters to minimize community annoyance from airport noise are reported. First, a review of the initial work on this problem is presented. Then the research focus is expanded by considering multiobjective optimization approaches for this problem. A multiobjective optimization algorithm review from the open literature is presented. This is followed by the multiobjective mathematical formulation for the problem of interest. A discussion of the appropriate solution algorithm for the multiobjective formulation is conducted. Alternate formulations and associated solution algorithms are discussed and evaluated for this airport noise problem. Selected solution algorithms that have been implemented are then used to produce computational results for example airports. These computations involved finding the optimal operating scenario for a moderate size airport and a series of sensitivity analyses for a smaller example airport.
Scenarios for Evolving Seismic Crises: Possible Communication Strategies
NASA Astrophysics Data System (ADS)
Steacy, S.
2015-12-01
Recent advances in operational earthquake forecasting mean that we are very close to being able to confidently compute changes in earthquake probability as seismic crises develop. For instance, we now have statistical models such as ETAS and STEP which demonstrate considerable skill in forecasting earthquake rates, and recent advances in Coulomb-based models are also showing much promise. Communicating changes in earthquake probability is likely to be very difficult, however, as the absolute probability of a damaging event is likely to remain quite small despite a significant increase in the relative value. Here, we use a hybrid Coulomb/statistical model to compute probability changes for a series of earthquake scenarios in New Zealand. We discuss the strengths and limitations of the forecasts and suggest a number of possible mechanisms that might be used to communicate results in an actual developing seismic crisis.
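As a point of reference for the statistical side, the conditional intensity of the ETAS model mentioned above has a simple closed form: a background rate plus Omori-Utsu aftershock contributions from all prior events. The sketch below evaluates it with illustrative parameter values, not a calibrated New Zealand model.

    import numpy as np

    def etas_rate(t, events, mu=0.2, K=0.05, c=0.01, p=1.1, alpha=1.0, m0=3.0):
        # ETAS conditional intensity: lambda(t) = mu + sum over past events of
        # K * exp(alpha * (M_i - m0)) / (t - t_i + c)**p
        times, mags = events
        past = times < t
        contrib = K * np.exp(alpha * (mags[past] - m0)) / (t - times[past] + c) ** p
        return mu + contrib.sum()

    times = np.array([0.0, 0.5, 0.6])    # event times in days
    mags = np.array([5.5, 4.2, 4.8])
    print(etas_rate(1.0, (times, mags)), "events/day at t = 1")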
NASA Technical Reports Server (NTRS)
Brauer, G. L.; Habeger, A. R.; Stevenson, R.
1974-01-01
The basic equations and models used in a computer program (6D POST) to optimize simulated trajectories with six degrees of freedom were documented. The 6D POST program was conceived as a direct extension of the program POST, which dealt with point masses, and considers the general motion of a rigid body with six degrees of freedom. It may be used to solve a wide variety of atmospheric flight mechanics and orbital transfer problems for powered or unpowered vehicles operating near a rotating oblate planet. Its principal features are: an easy-to-use NAMELIST-type input procedure, an integrated set of Flight Control System (FCS) modules, and a general-purpose discrete-parameter targeting and optimization capability. It was written in FORTRAN IV for the CDC 6000 series computers.
Modelling technological process of ion-exchange filtration of fluids in porous media
NASA Astrophysics Data System (ADS)
Ravshanov, N.; Saidov, U. M.
2018-05-01
Solution of an actual problem related to the process of filtration and dehydration of liquid and ionic solutions from gel particles and heavy ionic compounds is considered in the paper. This technological process is realized during the preparation and cleaning of chemical solutions, drinking water, pharmaceuticals, liquid fuels, products for public use, etc. For the analysis, research, determination of the main parameters of the technological process and operating modes of filter units and for support in managerial decision-making, a mathematical model is developed. Using the developed model, a series of computational experiments on a computer is carried out. The results of numerical calculations are illustrated in the form of graphs. Based on the analysis of numerical experiments, the conclusions are formulated that serve as the basis for making appropriate managerial decisions.
POD Model Reconstruction for Gray-Box Fault Detection
NASA Technical Reports Server (NTRS)
Park, Han; Zak, Michail
2007-01-01
Proper orthogonal decomposition (POD) is the mathematical basis of a method of constructing low-order mathematical models for the "gray-box" fault-detection algorithm that is a component of a diagnostic system known as beacon-based exception analysis for multi-missions (BEAM). POD has been successfully applied in reducing computational complexity by generating simple models that can be used for control and simulation for complex systems such as fluid flows. In the present application to BEAM, POD brings the same benefits to automated diagnosis. BEAM is a method of real-time or offline, automated diagnosis of a complex dynamic system. The gray-box approach makes it possible to utilize incomplete or approximate knowledge of the dynamics of the system that one seeks to diagnose. In the gray-box approach, a deterministic model of the system is used to filter a time series of system sensor data to remove the deterministic components of the time series from further examination. What is left after the filtering operation is a time series of residual quantities that represent the unknown (or at least unmodeled) aspects of the behavior of the system. Stochastic modeling techniques are then applied to the residual time series. The procedure for detecting abnormal behavior of the system then becomes one of looking for statistical differences between the residual time series and the predictions of the stochastic model.
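In practice, POD reduces to a singular value decomposition of a snapshot matrix. The sketch below shows a low-order model of residual time series in that spirit; the matrix sizes, the 99% energy threshold, and the random stand-in data are illustrative assumptions, not BEAM's actual configuration.

    import numpy as np

    rng = np.random.default_rng(0)
    residuals = rng.normal(size=(50, 1000))     # 50 sensors x 1000 time samples

    # POD via SVD: keep the leading modes that capture 99% of the energy
    U, s, Vt = np.linalg.svd(residuals, full_matrices=False)
    k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99) + 1
    modes = U[:, :k]                            # reduced spatial basis
    coeffs = modes.T @ residuals                # low-order model coordinates
    recon = modes @ coeffs                      # low-order reconstruction
    print(k, np.linalg.norm(residuals - recon) / np.linalg.norm(residuals))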
Petruccelli, Jonathan C; Alonso, Miguel A
2007-09-01
We examine the angle-impact Wigner function (AIW) as a computational tool for the propagation of nonparaxial quasi-monochromatic light of any degree of coherence past a planar boundary between two homogeneous media. The AIWs of the reflected and transmitted fields in two dimensions are shown to be given by a simple ray-optical transformation of the incident AIW plus a series of corrections in the form of differential operators. The radiometric and leading six correction terms are studied for Gaussian Schell-model fields of varying transverse width, transverse coherence, and angle of incidence.
Automatic control algorithm effects on energy production
NASA Technical Reports Server (NTRS)
Mcnerney, G. M.
1981-01-01
A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.
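A minimal version of such a simulation is easy to state: a start/stop rule with hysteresis applied to a wind time series, integrated against a power curve. The thresholds and the toy power curve below are illustrative assumptions, not the Sandia 17-m VAWT characteristics; varying v_start and v_stop shows how the choice of control algorithm changes annual energy.

    import numpy as np

    def energy_produced(wind, power_curve, v_start=5.0, v_stop=4.0, dt=1.0):
        # start/stop algorithm with hysteresis: start above v_start,
        # stop below v_stop, accumulate energy while running
        running, total = False, 0.0
        for v in wind:
            if running and v < v_stop:
                running = False
            elif not running and v > v_start:
                running = True
            if running:
                total += power_curve(v) * dt
        return total

    rng = np.random.default_rng(2)
    wind = np.clip(rng.normal(6.0, 2.0, size=8760), 0, None)   # hourly wind series
    curve = lambda v: min(max(v - 4.0, 0.0) * 10.0, 100.0)     # toy power curve, kW
    print(energy_produced(wind, curve), "kWh per year")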
Combinatorial Optimization by Amoeba-Based Neurocomputer with Chaotic Dynamics
NASA Astrophysics Data System (ADS)
Aono, Masashi; Hirata, Yoshito; Hara, Masahiko; Aihara, Kazuyuki
We demonstrate a computing system based on an amoeba of a true slime mold Physarum capable of producing rich spatiotemporal oscillatory behavior. Our system operates as a neurocomputer because an optical feedback control in accordance with a recurrent neural network algorithm leads the amoeba's photosensitive branches to search for a stable configuration concurrently. We show our system's capability of solving the traveling salesman problem. Furthermore, we apply various types of nonlinear time series analysis to the amoeba's oscillatory behavior in the problem-solving process. The results suggest that an individual amoeba might be characterized as a set of coupled chaotic oscillators.
Investigation of Perforated Convergent-divergent Diffusers with Initial Boundary Layer
NASA Technical Reports Server (NTRS)
Weinstein, Maynard I
1950-01-01
An experimental investigation was made at Mach number 1.90 of the performance of a series of perforated convergent-divergent supersonic diffusers operating with initial boundary layer, which was induced and controlled by lengths of cylindrical inlets affixed to the diffusers. Supercritical mass-flow and peak total-pressure recoveries were decreased slightly by use of the longest inlets (4 inlet diameters in length). Combinations of cylindrical inlets, perforated diffusers, and subsonic diffuser were evaluated as simulated wind tunnels having second throats. Comparisons with noncontracted configurations of similar scale indicated conservatively computed power reductions of 25 percent.
Earthquake forecasting studies using radon time series data in Taiwan
NASA Astrophysics Data System (ADS)
Walia, Vivek; Kumar, Arvind; Fu, Ching-Chou; Lin, Shih-Jung; Chou, Kuang-Wu; Wen, Kuo-Liang; Chen, Cheng-Hong
2017-04-01
For a few decades, a growing number of studies have shown the usefulness of seismogeochemical data interpreted as geochemical precursory signals for impending earthquakes, and radon has been identified as one of the most reliable geochemical precursors. Radon is recognized as a short-term precursor and is being monitored in many countries. This study is aimed at developing an effective earthquake forecasting system by inspecting long-term radon time series data. The data are obtained from a network of radon monitoring stations established along different faults of Taiwan. The continuous time series radon data for earthquake studies have been recorded, and some significant variations associated with strong earthquakes have been observed. The data are also examined to evaluate earthquake precursory signals against environmental factors. An automated real-time database operating system has been developed recently to improve the data processing for earthquake precursory studies. In addition, the study is aimed at the appraisal and filtering of these environmental parameters, in order to create a real-time database that helps our earthquake precursory study. In recent years, an automatically operating real-time database has been developed using R, an open-source programming language, to carry out statistical computation on the data. To integrate our data with our working procedure, we use the popular open-source web application stack AMP (Apache, MySQL, and PHP), creating a website that can effectively display and help us manage the real-time database.
ERIC Educational Resources Information Center
Tataw, Oben Moses
2013-01-01
Interdisciplinary research in computer science requires the development of computational techniques for practical application in different domains. This usually requires careful integration of different areas of technical expertise. This dissertation presents image and time series analysis algorithms, with practical interdisciplinary applications…
Advanced Free Flight Planner and Dispatcher's Workstation: Preliminary Design Specification
NASA Technical Reports Server (NTRS)
Wilson, J.; Wright, C.; Couluris, G. J.
1997-01-01
The National Aeronautics and Space Administration (NASA) has implemented the Advanced Air Transportation Technology (AATT) program to investigate future improvements to the national and international air traffic management systems. This research, as part of the AATT program, developed preliminary design requirements for an advanced Airline Operations Control (AOC) dispatcher's workstation, with emphasis on flight planning. This design will support the implementation of an experimental workstation in NASA laboratories that would emulate AOC dispatch operations. The work developed an airline flight plan data base and specified requirements for: a computer tool for generation and evaluation of free flight, user-preferred trajectories (UPT); the kernel of an advanced flight planning system to be incorporated into the UPT-generation tool; and an AOC workstation to house the UPT-generation tool and to provide a real-time testing environment. A prototype for the advanced flight plan optimization kernel was developed and demonstrated. The flight planner uses dynamic programming to search a four-dimensional wind and temperature grid to identify the optimal route, altitude and speed for successive segments of a flight. An iterative process is employed in which a series of trajectories are successively refined until the UPT is identified. The flight planner is designed to function in the current operational environment as well as in free flight. The free flight environment would enable greater flexibility in UPT selection based on alleviation of current procedural constraints. The prototype also takes advantage of advanced computer processing capabilities to implement more powerful optimization routines than would be possible with older computer systems.
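The dynamic-programming search described above can be compressed to a few lines once the grid is discretized. The sketch below works one route segment at a time over an altitude state; the grid of wind-adjusted segment costs and the climb penalty are illustrative assumptions, not the prototype's cost model.

    import numpy as np

    rng = np.random.default_rng(3)
    n_seg, n_alt = 20, 8
    seg_cost = rng.uniform(0.8, 1.2, size=(n_seg, n_alt))  # wind-adjusted fuel cost per segment/altitude
    climb = 0.05                                           # penalty per altitude step change

    # Bellman recursion: best cost to reach each altitude at each segment
    best = seg_cost[0].copy()
    for s in range(1, n_seg):
        # trans[a_new, a_old] = cost so far at a_old plus the climb/descent penalty
        steps = np.abs(np.arange(n_alt)[:, None] - np.arange(n_alt)[None, :])
        trans = best[None, :] + climb * steps
        best = seg_cost[s] + trans.min(axis=1)
    print("optimal route cost:", best.min())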
Parallel Aircraft Trajectory Optimization with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Falck, Robert D.; Gray, Justin S.; Naylor, Bret
2016-01-01
Trajectory optimization is an integral component for the design of aerospace vehicles, but emerging aircraft technologies have introduced new demands on trajectory analysis that current tools are not well suited to address. Designing aircraft with technologies such as hybrid electric propulsion and morphing wings requires consideration of the operational behavior as well as the physical design characteristics of the aircraft. The addition of operational variables can dramatically increase the number of design variables, which motivates the use of gradient-based optimization with analytic derivatives to solve the larger optimization problems. In this work we develop an aircraft trajectory analysis tool using a Legendre-Gauss-Lobatto based collocation scheme, providing analytic derivatives via the OpenMDAO multidisciplinary optimization framework. This collocation method uses an implicit time integration scheme that provides a high degree of sparsity and thus several potential options for parallelization. The performance of the new implementation was investigated via a series of single and multi-trajectory optimizations using a combination of parallel computing and constraint aggregation. The computational performance results show that in order to take full advantage of the sparsity in the problem it is vital to parallelize both the non-linear analysis evaluations and the derivative computations themselves. The constraint aggregation results showed a significant numerical challenge due to difficulty in achieving tight convergence tolerances. Overall, the results demonstrate the value of applying analytic derivatives to trajectory optimization problems and lay the foundation for future application of this collocation-based method to the design of aircraft where operational scheduling of technologies is key to achieving good performance.
Santitissadeekorn, N; Bollt, E M
2007-06-01
In this paper, we present an approach to approximate the Frobenius-Perron transfer operator from a sequence of time-ordered images, that is, a movie dataset. Unlike time-series data, successive images do not provide a direct access to a trajectory of a point in a phase space; more precisely, a pixel in an image plane. Therefore, we reconstruct the velocity field from image sequences based on the infinitesimal generator of the Frobenius-Perron operator. Moreover, we relate this problem to the well-known optical flow problem from the computer vision community and we validate the continuity equation derived from the infinitesimal operator as a constraint equation for the optical flow problem. Once the vector field and then a discrete transfer operator are found, then, in addition, we present a graph modularity method as a tool to discover basin structure in the phase space. Together with a tool to reconstruct a velocity field, this graph-based partition method provides us with a way to study transport behavior and other ergodic properties of measurable dynamical systems captured only through image sequences.
Exact semi-separation of variables in waveguides with non-planar boundaries
NASA Astrophysics Data System (ADS)
Athanassoulis, G. A.; Papoutsellis, Ch. E.
2017-05-01
Series expansions of unknown fields Φ = Σn φn Zn in elongated waveguides are commonly used in acoustics, optics, geophysics, water waves and other applications, in the context of coupled-mode theories (CMTs). The transverse functions Zn are determined by solving local Sturm-Liouville problems (reference waveguides). In most cases, the boundary conditions assigned to Zn cannot be compatible with the physical boundary conditions of Φ, leading to slowly convergent series, and rendering CMTs mild-slope approximations. In the present paper, the heuristic approach introduced by Athanassoulis & Belibassakis (1999 J. Fluid Mech. 389, 275-301) is generalized and justified. It is proved that an appropriately enhanced series expansion becomes an exact, rapidly convergent representation of the field Φ, valid for any smooth, non-planar boundaries and any smooth enough Φ. This series expansion can be differentiated termwise everywhere in the domain, including the boundaries, implementing an exact semi-separation of variables for non-separable domains. The efficiency of the method is illustrated by solving a boundary value problem for the Laplace equation, and computing the corresponding Dirichlet-to-Neumann operator, involved in Hamiltonian equations for nonlinear water waves. The present method provides accurate results with only a few modes for quite general domains. Extensions to general waveguides are also discussed.
Coastline detection with time series of SAR images
NASA Astrophysics Data System (ADS)
Ao, Dongyang; Dumitru, Octavian; Schwarz, Gottfried; Datcu, Mihai
2017-10-01
For maritime remote sensing, coastline detection is a vital task. With continuous coastline detection results from satellite image time series, the actual shoreline, the sea level, and environmental parameters can be observed to support coastal management and disaster warning. Established coastline detection methods are often based on SAR images and well-known image processing approaches. These methods involve a lot of complicated data processing, which is a big challenge for remote sensing time series. Additionally, a number of SAR satellites operating with polarimetric capabilities have been launched in recent years, and many investigations of target characteristics in radar polarization have been performed. In this paper, a fast and efficient coastline detection method is proposed which comprises three steps. First, we calculate a modified correlation coefficient of two SAR images of different polarization. This coefficient differs from the traditional computation, where normalization is needed. Through this modified approach, the separation between sea and land becomes more prominent. Second, we set a histogram-based threshold to distinguish between sea and land within the given image. The histogram is derived from the statistical distribution of the polarized SAR image pixel amplitudes. Third, we extract continuous coastlines using a Canny image edge detector that is rather immune to speckle noise. Finally, the individual coastlines derived from time series of SAR images can be checked for changes.
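The three steps map naturally onto a few standard array operations. In the sketch below, the "modified correlation" is approximated by a local unnormalized cross-channel product, the histogram threshold by Otsu's method, and the edge step by scikit-image's Canny detector; all three are stand-ins for the paper's exact choices, and the two Rayleigh-distributed images are synthetic.

    import numpy as np
    from scipy.ndimage import uniform_filter
    from skimage.feature import canny
    from skimage.filters import threshold_otsu

    rng = np.random.default_rng(4)
    vv = rng.rayleigh(1.0, size=(256, 256)); vv[:, 128:] *= 4   # toy scene: sea left, land right
    vh = rng.rayleigh(1.0, size=(256, 256)); vh[:, 128:] *= 4

    corr = uniform_filter(vv * vh, size=9)       # step 1: local cross-channel product (no normalization)
    t = threshold_otsu(corr)                     # step 2: histogram-based sea/land threshold
    land = corr > t
    coast = canny(land.astype(float), sigma=3)   # step 3: edge of the land mask
    print(coast.sum(), "coastline pixels")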
NASA Astrophysics Data System (ADS)
Chen, R.; Xi, X.; Zhao, X.; He, L.; Yao, H.; Shen, R.
2016-12-01
Dense 3D magnetotelluric (MT) data acquisition has the benefit of suppressing the static shift and topography effect, and can achieve high-precision, high-resolution inversion of underground structure. This method may play an important role in mineral exploration, geothermal resources exploration, and hydrocarbon exploration. It is necessary to greatly reduce the power consumption of an MT signal receiver for large-scale 3D MT data acquisition, while using a sensor network to monitor the data quality of deployed MT receivers. We adopted a series of technologies to realize the above goal. First, we designed a low-power embedded computer which couples tightly with the other parts of the MT receiver and supports a wireless sensor network. The power consumption of our embedded computer is less than 1 watt. Then we designed a 4-channel data acquisition subsystem which supports 24-bit analog-digital conversion, GPS synchronization, and real-time digital signal processing. Furthermore, we developed the power supply and power management subsystem for the MT receiver. Finally, a suite of software was developed to support data acquisition, calibration, the wireless sensor network, and testing. The software, which runs on a personal computer, can monitor and control over 100 MT receivers in the field for data acquisition and quality control. The total power consumption of the receiver is about 2 watts at full operation. The standby power consumption is less than 0.1 watt. Our testing showed that the MT receiver can acquire good-quality data at ground with an electric dipole length of 3 m. Over 100 MT receivers were built and used for large-scale geothermal exploration in China with great success.
Large robotized turning centers described
NASA Astrophysics Data System (ADS)
Kirsanov, V. V.; Tsarenko, V. I.
1985-09-01
The introduction of numerical control (NC) machine tools has made it possible to automate machining in series and small-series production. The organization of automated production sections merged NC machine tools with automated transport systems. However, both the one and the other require the presence of an operator at the machine for low-skilled operations. Industrial robots perform a number of auxiliary operations, such as equipment loading-unloading and control, changing cutting and auxiliary tools, controlling workpieces and parts, and cleaning of location surfaces. When used with a group of equipment they perform transfer operations between the machine tools. Industrial robots eliminate the need for workers to perform auxiliary operations. This underscores the importance of developing robotized manufacturing centers providing for minimal human participation in production and creating conditions for two- and three-shift operation of equipment. Work carried out at several robotized manufacturing centers for series and small-series production is described.
A nonlinear optimal control approach for chaotic finance dynamics
NASA Astrophysics Data System (ADS)
Rigatos, G.; Siano, P.; Loia, V.; Tommasetti, A.; Troisi, O.
2017-11-01
A new nonlinear optimal control approach is proposed for stabilization of the dynamics of a chaotic finance model. The dynamic model of the financial system, which expresses the interaction between the interest rate, the investment demand, the price exponent and the profit margin, undergoes approximate linearization around local operating points. These local equilibria are defined at each iteration of the control algorithm and consist of the present value of the system's state vector and the last value of the control inputs vector that was exerted on it. The approximate linearization makes use of Taylor series expansion and of the computation of the associated Jacobian matrices. The truncation of higher-order terms in the Taylor series expansion is considered to be a modelling error that is compensated by the robustness of the control loop. As the control algorithm runs, the temporary equilibrium is shifted towards the reference trajectory and finally converges to it. The control method needs to compute an H-infinity feedback control law at each iteration, and requires the repetitive solution of an algebraic Riccati equation. Through Lyapunov stability analysis it is shown that an H-infinity tracking performance criterion holds for the control loop. This implies elevated robustness against model approximations and external perturbations. Moreover, under moderate conditions the global asymptotic stability of the control loop is proven.
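To make the iteration concrete, here is a hedged Python sketch of one control step: finite-difference Jacobians provide the Taylor-series linearization at the current operating point, and an algebraic Riccati equation is solved to update the feedback gain. The finance model f and the weighting matrices Q and R are placeholders, and an LQR-type Riccati solve stands in for the paper's H-infinity gain computation.

# One iteration of the linearize-then-solve-Riccati loop (illustrative).
# f(x, u) returns dx/dt for state x (length n) and input u (length m).
import numpy as np
from scipy.linalg import solve_continuous_are

def control_step(f, x, u_prev, Q, R, eps=1e-6):
    n, m = x.size, u_prev.size
    # Jacobians of f by finite differences
    # (the Taylor-series linearization around the operating point)
    A = np.column_stack([(f(x + eps*e, u_prev) - f(x, u_prev)) / eps
                         for e in np.eye(n)])
    B = np.column_stack([(f(x, u_prev + eps*e) - f(x, u_prev)) / eps
                         for e in np.eye(m)])
    P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
    K = np.linalg.solve(R, B.T @ P)        # state-feedback gain
    return -K @ x                          # control input for this step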
Stabilization of business cycles of finance agents using nonlinear optimal control
NASA Astrophysics Data System (ADS)
Rigatos, G.; Siano, P.; Ghosh, T.; Sarno, D.
2017-11-01
Stabilization of the business cycles of interconnected finance agents is performed with the use of a new nonlinear optimal control method. First, the dynamics of the interacting finance agents and of the associated business cycles is described by a model of coupled nonlinear oscillators. Next, this dynamic model undergoes approximate linearization around a temporary operating point which is defined by the present value of the system's state vector and the last value of the control inputs vector that was exerted on it. The linearization procedure is based on Taylor series expansion of the dynamic model and on the computation of Jacobian matrices. The modelling error, which is due to the truncation of higher-order terms in the Taylor series expansion, is considered as a disturbance which is compensated by the robustness of the control loop. Next, for the linearized model of the interacting finance agents, an H-infinity feedback controller is designed. The computation of the feedback control gain requires the solution of an algebraic Riccati equation at each iteration of the control algorithm. Through Lyapunov stability analysis it is proven that the control scheme satisfies an H-infinity tracking performance criterion, which signifies elevated robustness against modelling uncertainty and external perturbations. Moreover, under moderate conditions the global asymptotic stability features of the control loop are proven.
Lie algebraic similarity transformed Hamiltonians for lattice model systems
NASA Astrophysics Data System (ADS)
Wahlen-Strothman, Jacob M.; Jiménez-Hoyos, Carlos A.; Henderson, Thomas M.; Scuseria, Gustavo E.
2015-01-01
We present a class of Lie algebraic similarity transformations generated by exponentials of two-body on-site Hermitian operators whose Hausdorff series can be summed exactly without truncation. The correlators are defined over the entire lattice and include the Gutzwiller factor $n_{i\uparrow} n_{i\downarrow}$, and two-site products of density ($n_{i\uparrow} + n_{i\downarrow}$) and spin ($n_{i\uparrow} - n_{i\downarrow}$) operators. The resulting non-Hermitian many-body Hamiltonian can be solved in a biorthogonal mean-field approach with polynomial computational cost. The proposed similarity transformation generates locally weighted orbital transformations of the reference determinant. Although the energy of the model is unbound, projective equations in the spirit of coupled cluster theory lead to well-defined solutions. The theory is tested on the one- and two-dimensional repulsive Hubbard model, where it yields accurate results for small and medium-sized interaction strengths.
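For context, the transformation referred to above has the generic Hausdorff (Baker-Campbell-Hausdorff) form shown below; the symbol $\hat{\sigma}$ for the two-body on-site generator is our notation, not necessarily the authors'. The abstract's central point is that, for the correlators listed, this commutator series can be summed exactly rather than truncated.

\bar{H} = e^{-\hat{\sigma}} \hat{H} e^{\hat{\sigma}} = \hat{H} + [\hat{H}, \hat{\sigma}] + \frac{1}{2!}\big[[\hat{H}, \hat{\sigma}], \hat{\sigma}\big] + \cdots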
Visual improvement for bad handwriting based on Monte-Carlo method
NASA Astrophysics Data System (ADS)
Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua
2014-03-01
A visual improvement algorithm based on Monte Carlo simulation is proposed in this paper in order to enhance the visual quality of bad handwriting. The improvement process uses a well-designed typeface to optimize the bad-handwriting image. In this process, a series of linear operators for image transformation is defined for transforming the typeface image to approach the handwriting image, and the specific parameters of the linear operators are estimated by the Monte Carlo method. Visual improvement experiments illustrate that the proposed algorithm can effectively enhance the visual quality of a handwriting image while maintaining the original handwriting features, such as tilt, stroke order and drawing direction. The proposed visual improvement algorithm has great potential for application on tablet computers and the mobile Internet, in order to improve the user experience of handwriting.
Boundary integral equation analysis for suspension of spheres in Stokes flow
NASA Astrophysics Data System (ADS)
Corona, Eduardo; Veerapaneni, Shravan
2018-06-01
We show that the standard boundary integral operators, defined on the unit sphere, for the Stokes equations diagonalize on a specific set of vector spherical harmonics, and we provide formulas for their spectra. We also derive analytical expressions for evaluating the operators away from the boundary. When two particles are located close to each other, we use a truncated series expansion to compute the hydrodynamic interaction. On the other hand, we use a standard spectrally accurate quadrature scheme to evaluate smooth integrals in the far field, and accelerate the resulting discrete sums using the fast multipole method (FMM). We employ this discretization scheme to analyze several boundary integral formulations of interest, including those arising in porous media flow, active matter and magneto-hydrodynamics of rigid particles. We provide numerical results verifying the accuracy and scaling of their evaluation.
NASA Astrophysics Data System (ADS)
Casu, F.; Bonano, M.; de Luca, C.; Lanari, R.; Manunta, M.; Manzo, M.; Zinno, I.
2017-12-01
Since its launch in 2014, the Sentinel-1 (S1) constellation has played a key role in SAR data availability and dissemination all over the world. Indeed, the free and open access data policy adopted by the European Copernicus program, together with the global coverage acquisition strategy, makes the Sentinel constellation a game changer in the Earth Observation scenario. As SAR data become ubiquitous, the technological and scientific challenge is to maximize the exploitation of this huge data flow. In this direction, the use of innovative processing algorithms and distributed computing infrastructures, such as Cloud Computing platforms, can play a crucial role. In this work we present a Cloud Computing solution for the advanced interferometric (DInSAR) processing chain based on the Parallel SBAS (P-SBAS) approach, aimed at processing S1 Interferometric Wide Swath (IWS) data for the generation of large-spatial-scale deformation time series in an efficient, automatic and systematic way. This DInSAR chain ingests Sentinel-1 SLC images and carries out several processing steps to finally compute deformation time series and mean deformation velocity maps. Different parallel strategies have been designed ad hoc for each processing step of the P-SBAS S1 chain, encompassing both multi-core and multi-node programming techniques, in order to maximize the computational efficiency achieved within a Cloud Computing environment and cut down the relevant processing times. The presented P-SBAS S1 processing chain has been implemented on the Amazon Web Services platform, and a thorough analysis of the attained parallel performance has been carried out to identify and overcome the major bottlenecks to scalability. The presented approach is used to perform national-scale DInSAR analyses over Italy, involving the processing of more than 3000 S1 IWS images acquired from both ascending and descending orbits. This experiment confirms the big advantage of exploiting the large computational and storage resources of Cloud Computing platforms for large-scale DInSAR analysis. The presented Cloud Computing P-SBAS processing chain can be a valuable tool for developing operational services, available to the EO scientific community, related to hazard monitoring and risk prevention and mitigation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
B.C. Lyons, S.C. Jardin, and J.J. Ramos
2012-06-28
A new code, the Neoclassical Ion-Electron Solver (NIES), has been written to solve for stationary, axisymmetric distribution functions (f) in the conventional banana regime for both ions and electrons, using a set of drift-kinetic equations (DKEs) with linearized Fokker-Planck-Landau collision operators. Solvability conditions on the DKEs determine the relevant non-adiabatic pieces of f (called h). We work in a 4D phase space in which Ψ defines a flux surface, θ is the poloidal angle, v is the total velocity referenced to the mean flow velocity, and λ is the dimensionless magnetic moment parameter. We expand h in finite elements in both v and λ. The Rosenbluth potentials, φ and ψ, which define the integral part of the collision operator, are expanded in Legendre series in cos χ, where χ is the pitch angle, in Fourier series in cos θ, and in finite elements in v. At each Ψ, we solve a block tridiagonal system for h_i (independent of f_e), then solve another block tridiagonal system for h_e (dependent on f_i). We demonstrate that such a formulation can be accurately and efficiently solved. NIES is coupled to the MHD equilibrium code JSOLVER [J. DeLucia, et al., J. Comput. Phys. 37, pp. 183-204 (1980)], allowing us to work with realistic magnetic geometries. The bootstrap current is calculated as a simple moment of the distribution function. Results are benchmarked against the Sauter analytic formulas and can be used as a kinetic closure for an MHD code (e.g., M3D-C1 [S.C. Jardin, et al., Computational Science & Discovery, 4 (2012)]).
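The block tridiagonal structure mentioned above is generic enough to illustrate with the textbook block Thomas algorithm; the Python sketch below is that standard method under our own naming conventions, not the NIES implementation.

# Generic block-tridiagonal solver: L[i] x[i-1] + D[i] x[i] + U[i] x[i+1] = r[i].
# lower[0] is unused; diag/lower/upper are lists of square numpy blocks.
import numpy as np

def block_thomas(lower, diag, upper, rhs):
    n = len(diag)
    d = [b.copy() for b in diag]
    r = [v.copy() for v in rhs]
    # forward elimination
    for i in range(1, n):
        m = lower[i] @ np.linalg.inv(d[i-1])
        d[i] -= m @ upper[i-1]
        r[i] -= m @ r[i-1]
    # back substitution
    x = [None] * n
    x[-1] = np.linalg.solve(d[-1], r[-1])
    for i in range(n - 2, -1, -1):
        x[i] = np.linalg.solve(d[i], r[i] - upper[i] @ x[i+1])
    return x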
Efficient 3D geometric and Zernike moments computation from unstructured surface meshes.
Pozo, José María; Villa-Uriol, Maria-Cruz; Frangi, Alejandro F
2011-03-01
This paper introduces and evaluates a fast exact algorithm and a series of faster approximate algorithms for the computation of 3D geometric moments from an unstructured surface mesh of triangles. Being based on the object surface reduces the computational complexity of these algorithms with respect to volumetric grid-based algorithms. In contrast, it can only be applied for the computation of geometric moments of homogeneous objects. This advantage and this restriction are shared with other proposed algorithms based on the object boundary. The proposed exact algorithm reduces the computational complexity for computing geometric moments up to order N with respect to previously proposed exact algorithms, from O(N^9) to O(N^6). The approximate series algorithm appears as a power series in the ratio between triangle size and object size, which can be truncated at any desired degree; the higher the number and quality of the triangles, the better the approximation. This approximate algorithm reduces the computational complexity to O(N^3). In addition, the paper introduces a fast algorithm for the computation of 3D Zernike moments from the computed geometric moments, with a computational complexity of O(N^4), whereas the previously proposed algorithm is of order O(N^6). The error introduced by the proposed approximate algorithms is evaluated on different shapes, and the cost-benefit ratio in terms of error and computational time is analyzed for different moment orders.
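The reason a surface mesh suffices for homogeneous objects can be stated in one standard identity (generic notation, ours rather than the paper's): by the divergence theorem, each volume moment reduces to a surface integral, which is then evaluated triangle by triangle.

M_{pqr} = \int_V x^p y^q z^r \, dV = \frac{1}{p+1} \oint_S x^{p+1} y^q z^r \, n_x \, dS

Here n_x is the x-component of the outward unit normal; on a triangle mesh the right-hand side becomes a sum of closed-form integrals over the triangles.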
PYROLASER - PYROLASER OPTICAL PYROMETER OPERATING SYSTEM
NASA Technical Reports Server (NTRS)
Roberts, F. E.
1994-01-01
The PYROLASER package is an operating system for the Pyrometer Instrument Company's Pyrolaser. There are 6 individual programs in the PYROLASER package: two main programs, two lower level subprograms, and two programs which, although independent, function predominantly as macros. The package provides a quick and easy way to setup, control, and program a standard Pyrolaser. Temperature and emissivity measurements may be either collected as if the Pyrolaser were in the manual operations mode, or displayed on real time strip charts and stored in standard spreadsheet format for post-test analysis. A shell is supplied to allow macros, which are test-specific, to be easily added to the system. The Pyrolaser Simple Operation program provides full on-screen remote operation capabilities, thus allowing the user to operate the Pyrolaser from the computer just as it would be operated manually. The Pyrolaser Simple Operation program also allows the use of "quick starts". Quick starts provide an easy way to permit routines to be used as setup macros for specific applications or tests. The specific procedures required for a test may be ordered in a sequence structure and then the sequence structure can be started with a simple button in the cluster structure provided. One quick start macro is provided for continuous Pyrolaser operation. A subprogram, Display Continuous Pyr Data, is used to display and store the resulting data output. Using this macro, the system is set up for continuous operation and the subprogram is called to display the data in real time on strip charts. The data is simultaneously stored in a spreadsheet format. The resulting spreadsheet file can be opened in any one of a number of commercially available spreadsheet programs. The Read Continuous Pyrometer program is provided as a continuously run subprogram for incorporation of the Pyrolaser software into a process control or feedback control scheme in a multi-component system. The program requires the Pyrolaser to be set up using the Pyrometer String Transfer macro. It requires no inputs and provides temperature and emissivity as outputs. The Read Continuous Pyrometer program can be run continuously and the data can be sampled as often or as seldom as updates of temperature and emissivity are required. PYROLASER is written using the Labview software for use on Macintosh series computers running System 6.0.3 or later, Sun Sparc series computers running OpenWindows 3.0 or MIT's X Window System (X11R4 or X11R5), and IBM PC or compatibles running Microsoft Windows 3.1 or later. Labview requires a minimum of 5Mb of RAM on a Macintosh, 24Mb of RAM on a Sun, and 8Mb of RAM on an IBM PC or compatible. The Labview software is a product of National Instruments (Austin,TX; 800-433-3488), and is not included with this program. The standard distribution medium for PYROLASER is a 3.5 inch 800K Macintosh format diskette. It is also available on a 3.5 inch 720K MS-DOS format diskette, a 3.5 inch diskette in UNIX tar format, and a .25 inch streaming magnetic tape cartridge in UNIX tar format. An electronic copy of the documentation in Macintosh WordPerfect version 2.0.4 format is included on the distribution medium. Printed documentation is included in the price of the program. PYROLASER was developed in 1992.
STRIPE: Remote Driving Using Limited Image Data
NASA Technical Reports Server (NTRS)
Kay, Jennifer S.
1997-01-01
Driving a vehicle, either directly or remotely, is an inherently visual task. When heavy fog limits visibility, we reduce our car's speed to a slow crawl, even along very familiar roads. In teleoperation systems, an operator's view is limited to images provided by one or more cameras mounted on the remote vehicle. Traditional methods of vehicle teleoperation require that a real time stream of images is transmitted from the vehicle camera to the operator control station, and the operator steers the vehicle accordingly. For this type of teleoperation, the transmission link between the vehicle and operator workstation must be very high bandwidth (because of the high volume of images required) and very low latency (because delayed images can cause operators to steer incorrectly). In many situations, such a high-bandwidth, low-latency communication link is unavailable or even technically impossible to provide. Supervised TeleRobotics using Incremental Polyhedral Earth geometry, or STRIPE, is a teleoperation system for a robot vehicle that allows a human operator to accurately control the remote vehicle across very low bandwidth communication links, and communication links with large delays. In STRIPE, a single image from a camera mounted on the vehicle is transmitted to the operator workstation. The operator uses a mouse to pick a series of 'waypoints' in the image that define a path that the vehicle should follow. These 2D waypoints are then transmitted back to the vehicle, where they are used to compute the appropriate steering commands while the next image is being transmitted. STRIPE requires no advance knowledge of the terrain to be traversed, and can be used by novice operators with only minimal training. STRIPE is a unique combination of computer and human control. The computer must determine the 3D world path designated by the 2D waypoints and then accurately control the vehicle over rugged terrain. The human issues involve accurate path selection, and the prevention of disorientation, a common problem across all types of teleoperation systems. STRIPE is the only semi-autonomous teleoperation system that can accurately follow paths designated in monocular images on varying terrain. The thesis describes the STRIPE algorithm for tracking points using the incremental geometry model, insight into the design and redesign of the interface, an analysis of the effects of potential errors, details of the user studies, and hints on how to improve both the algorithm and interface for future designs.
High-Performance Computing Systems and Operations | Computational Science | NREL
NREL operates high-performance computing (HPC) systems dedicated to advancing energy efficiency and renewable energy technologies.
Software Method for Computed Tomography Cylinder Data Unwrapping, Re-slicing, and Analysis
NASA Technical Reports Server (NTRS)
Roth, Don J.
2013-01-01
A software method has been developed that is applicable for analyzing cylindrical and partially cylindrical objects inspected using computed tomography (CT). This method involves unwrapping and re-slicing data so that the CT data from the cylindrical object can be viewed as a series of 2D sheets (or "flattened onion skins") in addition to a series of top-view slices and a 3D volume rendering. The advantages of viewing the data in this fashion are as follows: (1) the use of standard and specialized image processing and analysis methods is facilitated by having 2D array data versus a volume rendering; (2) accurate lateral dimensional analysis of flaws is possible in the unwrapped sheets versus volume rendering; (3) flaws in the part jump out at the inspector with the proper contrast expansion settings in the unwrapped sheets; and (4) it is much easier for the inspector to locate flaws in the unwrapped sheets versus top-view slices for very thin cylinders. The method is fully automated and requires no input from the user except the proper voxel dimension from the CT experiment and the wall thickness of the part. The software is available in 32-bit and 64-bit versions, and can be used with binary data (8- and 16-bit) and BMP-type CT image sets. The software has memory (RAM) and hard-drive based modes. The advantage of the (64-bit) RAM-based mode is speed (and it is very practical for users of 64-bit Windows operating systems and computers having 16 GB or more RAM). The advantage of the hard-drive based analysis is that one can work with essentially unlimited-sized data sets. Separate windows are spawned for the unwrapped/re-sliced data view and any interactive image processing capability. Individual unwrapped images and unwrapped image series can be saved in common image formats. More information is available at http://www.grc.nasa.gov/WWW/OptInstr/NDE_CT_CylinderUnwrapper.html.
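The core unwrapping step can be sketched compactly: resample each axial slice from Cartesian to polar coordinates so that the cylinder wall flattens into one 2D sheet per radial depth. The Python below is our own illustration of that idea, with illustrative names and parameters, not the tool's actual code.

# Unwrap a cylindrical CT volume (z, y, x) into sheets (radius, height, angle).
import numpy as np
from scipy.ndimage import map_coordinates

def unwrap_cylinder(volume, r_inner, r_outer, n_theta=720):
    nz, ny, nx = volume.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    radii = np.arange(r_inner, r_outer)             # one sheet per radius
    theta = np.linspace(0, 2*np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, theta, indexing='ij')
    ys, xs = cy + rr*np.sin(tt), cx + rr*np.cos(tt)
    sheets = np.empty((len(radii), nz, n_theta))
    for z in range(nz):                              # unwrap slice by slice
        sheets[:, z, :] = map_coordinates(volume[z], [ys, xs], order=1)
    return sheets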
A class of renormalised meshless Laplacians for boundary value problems
NASA Astrophysics Data System (ADS)
Basic, Josip; Degiuli, Nastia; Ban, Dario
2018-02-01
A meshless approach to approximating spatial derivatives on scattered point arrangements is presented in this paper. Three different derivations of approximate discrete Laplace operator formulations are produced using the Taylor series expansion and a renormalised least-squares correction of the first spatial derivatives. Numerical analyses are performed for the introduced Laplacian formulations, and their convergence rate and computational efficiency are examined. The tests are conducted on regular and highly irregular scattered point arrangements. The results are compared to those obtained by the smoothed particle hydrodynamics method and the finite difference method on a regular grid. Finally, the strong form of various Poisson and diffusion equations with Dirichlet or Robin boundary conditions is solved in two and three dimensions by making use of the introduced operators, in order to examine their stability and accuracy for boundary value problems. The introduced Laplacian operators perform well for highly irregular point distributions and offer adequate accuracy for mesh and mesh-free numerical methods that require frequent movement of the grid or point cloud.
Flight results from a study of aided inertial navigation applied to landing operations
NASA Technical Reports Server (NTRS)
Mcgee, L. A.; Smith, G. L.; Hegarty, D. M.; Carson, T. M.; Merrick, R. B.; Schmidt, S. F.; Conrad, B.
1973-01-01
An evaluation is presented of the approach and landing performance of a Kalman filter aided inertial navigation system using flight data obtained from a series of approaches and landings of the CV-340 aircraft at an instrumented test area. A description of the flight test is given, in which data recorded included: (1) accelerometer signals from the platform of an INS; (2) three ranges from the Ames-Cubic Precision Ranging System; and (3) radar and barometric altimeter signals. The method of system evaluation employed was postflight processing of the recorded data using a Kalman filter which was designed for use on the XDS920 computer onboard the CV-340 aircraft. Results shown include comparisons between the trajectories as estimated by the Kalman filter aided system and as determined from cinetheodolite data. Data start initialization of the Kalman filter, operation at a practical data rate, postflight modeling of sensor errors and operation under the adverse condition of bad data are illustrated.
The Orlando TDWR testbed and airborne wind shear data comparison results
NASA Technical Reports Server (NTRS)
Campbell, Steven; Berke, Anthony; Matthews, Michael
1992-01-01
The focus of this talk is on comparing terminal Doppler Weather Radar (TDWR) and airborne wind shear data in computing a microburst hazard index called the F factor. The TDWR is a ground-based system for detecting wind shear hazards to aviation in the terminal area. The Federal Aviation Administration will begin deploying TDWR units near 45 airports in late 1992. As part of this development effort, M.I.T. Lincoln Laboratory operates, under F.A.A. support, a TDWR testbed radar in Orlando, FL. During the past two years, a series of flight tests has been conducted with instrumented aircraft penetrating microburst events while under testbed radar surveillance. These tests were carried out with a Cessna Citation II aircraft operated by the University of North Dakota (UND) Center for Aerospace Sciences in 1990, and a Boeing 737 operated by NASA Langley Research Center in 1991. A large database of approximately 60 instrumented microburst penetrations has been obtained from these flights.
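For background, the F factor has a standard definition in the wind-shear literature (stated here from general knowledge, not taken from the talk itself):

F = \frac{\dot{W}_x}{g} - \frac{w_h}{V}

where \dot{W}_x is the rate of change of the horizontal wind component along the flight path, w_h is the vertical wind, g is gravitational acceleration, and V is the aircraft airspeed; positive F indicates performance-decreasing shear.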
NASA Astrophysics Data System (ADS)
Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci
2013-04-01
This study aims to compare several imputation methods for completing the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria, including accuracy, robustness, precision, and efficiency, for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, the simple arithmetic average, the normal ratio (NR), and the NR weighted with correlations comprise the simple ones, whereas a multilayer-perceptron neural network and a multiple imputation strategy based on expectation-maximization with Markov chain Monte Carlo (EM-MCMC) are the computationally intensive ones. In addition, we propose a modification of the EM-MCMC method. Besides using a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique of nonlinear dynamic time series analysis, which takes spatio-temporal dependencies into account, for evaluating imputation performance. Based on detailed graphical and quantitative analysis, it can be said that although the computational methods, particularly the EM-MCMC method, are computationally expensive, they appear preferable for imputing meteorological time series across the different missingness periods considered, for both measures and both series studied. To conclude, using the EM-MCMC algorithm to impute missing values before conducting any statistical analyses of meteorological data will markedly decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be recommended for the performance evaluation of missing data imputation, particularly with computational methods, since it gives more precise results for meteorological time series.
Computer algebra and operators
NASA Technical Reports Server (NTRS)
Fateman, Richard; Grossman, Robert
1989-01-01
The symbolic computation of operator expansions is discussed. Some of the capabilities that prove useful when performing computer algebra computations involving operators are considered. These capabilities may be broadly divided into three areas: the algebraic manipulation of expressions from the algebra generated by operators; the algebraic manipulation of the actions of the operators upon other mathematical objects; and the development of appropriate normal forms and simplification algorithms for operators and their actions. Brief descriptions are given of the computer algebra computations that arise when working with various operators and their actions.
Computing singularities of perturbation series
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kvaal, Simen; Jarlebring, Elias; Michiels, Wim
2011-03-15
Many properties of current ab initio approaches to the quantum many-body problem, both perturbational and otherwise, are related to the singularity structure of the Rayleigh-Schrödinger perturbation series. A numerical procedure is presented that in principle computes the complete set of singularities, including the dominant singularity which limits the radius of convergence. The method approximates the singularities as eigenvalues of a certain generalized eigenvalue equation which is solved using iterative techniques. It relies on computation of the action of the Hamiltonian matrix on a vector and does not rely on the terms in the perturbation series. The method can be useful for studying perturbation series of typical systems of moderate size, for fundamental development of resummation schemes, and for understanding the structure of singularities for typical systems. Some illustrative model problems are studied, including a helium-like model with δ-function interactions for which Møller-Plesset perturbation theory is considered and the radius of convergence found.
A study of sound generation in subsonic rotors, volume 2
NASA Technical Reports Server (NTRS)
Chalupnik, J. D.; Clark, L. T.
1975-01-01
Computer programs were developed for use in the analysis of sound generation by subsonic rotors. Program AIRFOIL computes the spectrum of radiated sound from a single airfoil immersed in a laminar flow field. Program ROTOR extends this to a rotating frame and provides a model for sound generation in subsonic rotors; it also computes tone sound generation due to steady-state forces on the blades. Program TONE uses a moving-source analysis to generate a time series for an array of forces moving in a circular path. The resultant time series are then Fourier transformed to render the results in spectral form. Program SDATA is a standard time series analysis package. It reads in two discrete time series, forms auto- and cross-covariances, and normalizes these to form correlations. The program then transforms the covariances to yield auto- and cross-power spectra by means of a Fourier transformation.
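The SDATA processing chain (covariances first, then a Fourier transform to spectra) is the classical Wiener-Khinchin route and can be sketched in a few lines of Python; the lag window and normalization choices below are our assumptions, not SDATA's.

# Cross-power spectrum via the covariance route, for two equal-length series.
import numpy as np

def cross_spectrum(x, y, nlags=256):
    # remove means, then estimate cross-covariance at lags -nlags..+nlags
    x = x - x.mean()
    y = y - y.mean()
    lags = np.arange(-nlags, nlags + 1)
    cov = np.array([np.mean(x[max(0, -k):len(x) - max(0, k)] *
                            y[max(0, k):len(y) - max(0, -k)])
                    for k in lags])
    # Fourier transform of the covariance gives the cross-power spectrum;
    # ifftshift moves the zero-lag sample to index 0 first
    return np.fft.rfft(np.fft.ifftshift(cov))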
Early phase drug discovery: cheminformatics and computational techniques in identifying lead series.
Duffy, Bryan C; Zhu, Lei; Decornez, Hélène; Kitchen, Douglas B
2012-09-15
Early drug discovery processes rely on hit finding procedures followed by extensive experimental confirmation in order to select high priority hit series which then undergo further scrutiny in hit-to-lead studies. The experimental cost and the risk associated with poor selection of lead series can be greatly reduced by the use of many different computational and cheminformatic techniques to sort and prioritize compounds. We describe the steps in typical hit identification and hit-to-lead programs and then describe how cheminformatic analysis assists this process. In particular, scaffold analysis, clustering and property calculations assist in the design of high-throughput screening libraries, the early analysis of hits and then organizing compounds into series for their progression from hits to leads. Additionally, these computational tools can be used in virtual screening to design hit-finding libraries and as procedures to help with early SAR exploration. Copyright © 2012 Elsevier Ltd. All rights reserved.
Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.
NASA Astrophysics Data System (ADS)
Elliott, William Dewey
1995-01-01
A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime as O(N^2), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime less than O(N^2). One algorithm, which scales as O(N) for a uniform distribution of particles, is the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst-case error bounds. The worst-case error bounds for MDMA are derived in this work as well; these bounds also apply to other multipole algorithms. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions is converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region, which allows efficient pre-computation of redundant information and pre-storage of much of the cell-to-cell interaction. Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over several simulation timesteps. One MD application described here highlights the utility of including long-range contributions to the Lennard-Jones potential in constant pressure simulations. Another application shows the time dependence of long-range forces in a multiple time step MD simulation.
Computer Programs for Technical Communicators: The Compelling Curriculum. Working Draft.
ERIC Educational Resources Information Center
Selfe, Cynthia L.; Wahlstrom, Billie J.
A series of computer programs have been developed at Michigan Technological University for use with technical writing and technical communications classes. The first type of program in the series, CURIE II, includes process-based modules, each of which corresponds to one of the following assignments: memoranda, resumes, feasibility reports,…
Development and use of a computer system in a radiotherapy department: SISGRAD.
Costa, A; Lalanne, C M; Marcié, S; Leca, M; Rameau, P; Chauvel, P; Héry, M; Lagrange, J L; Verschoore, J
1987-12-01
SISGRAD, the interactive computer system of the Antoine-Lacassagne Cancer Center Radiotherapy Department, has been operational since January 1982. It completes the computerized dosimetry system installed several years earlier and is fully integrated with the institution's central network. SISGRAD is in charge of surveillance of the radiotherapy treatments given by the Center's three radiotherapy units (1400 patients per year); it is also used for administrative purposes in the Department and physically connects all of the Department's operating stations. SISGRAD consists of a series of microcomputers connected to a common mass memory; each microcomputer is used as an intelligent console. SISGRAD was developed to guarantee that the treatments comply with prescriptions, to supply extemporaneous dosimetric data, to improve administrative work, and to supply banks with data for statistical analysis and research. SISGRAD actively intervenes to guarantee treatment quality and helps to improve therapy-related security factors. The present text describes the results of clinical use over a 4-year period. The consequences of integration of the system within the Department are analyzed, with special emphasis being placed on SISGRAD's role in the prevention and detection of errors in treatment prescription and delivery.
Fulcher, Ben D; Jones, Nick S
2017-11-22
Phenotype measurements frequently take the form of time series, but we currently lack a systematic method for relating these complex data streams to scientifically meaningful outcomes, such as relating the movement dynamics of organisms to their genotype or measurements of brain dynamics of a patient to their disease diagnosis. Previous work addressed this problem by comparing implementations of thousands of diverse scientific time-series analysis methods in an approach termed highly comparative time-series analysis. Here, we introduce hctsa, a software tool for applying this methodological approach to data. hctsa includes an architecture for computing over 7,700 time-series features and a suite of analysis and visualization algorithms to automatically select useful and interpretable time-series features for a given application. Using exemplar applications to high-throughput phenotyping experiments, we show how hctsa allows researchers to leverage decades of time-series research to quantify and understand informative structure in time-series data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Antonik, Piotr; Haelterman, Marc; Massar, Serge
2017-05-01
Reservoir computing is a bioinspired computing paradigm for processing time-dependent signals. Its hardware implementations have received much attention because of their simplicity and remarkable performance on a series of benchmark tasks. In previous experiments, the output was uncoupled from the system and, in most cases, simply computed off-line on a postprocessing computer. However, numerical investigations have shown that feeding the output back into the reservoir opens the possibility of long-horizon time-series forecasting. Here, we present a photonic reservoir computer with output feedback, and we demonstrate its capacity to generate periodic time series and to emulate chaotic systems. We study in detail the effect of experimental noise on system performance. In the case of chaotic systems, we introduce several metrics, based on standard signal-processing techniques, to evaluate the quality of the emulation. Our work significantly enlarges the range of tasks that can be solved by hardware reservoir computers and, therefore, the range of applications they could potentially tackle. It also raises interesting questions in nonlinear dynamics and chaos theory.
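The output-feedback idea is easy to state in software terms: once the linear readout is trained, the reservoir is driven by its own prediction instead of an external input, so the system free-runs and generates a time series. The sketch below is a generic discrete-time echo-state analogue with assumed dimensions and a tanh nonlinearity, not a model of the photonic hardware.

# Free-running reservoir: the readout output is fed back as the input.
import numpy as np

rng = np.random.default_rng(0)
N = 100                                    # reservoir size (assumption)
W = rng.normal(0, 1/np.sqrt(N), (N, N))    # internal weights
w_fb = rng.uniform(-1, 1, N)               # feedback (input) weights
w_out = rng.uniform(-1, 1, N)              # readout, assumed already trained

x, y = np.zeros(N), 0.0
outputs = []
for _ in range(1000):
    x = np.tanh(W @ x + w_fb * y)          # feed the output back in
    y = w_out @ x                          # linear readout
    outputs.append(y)                      # generated time series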
AMRITA -- A computational facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shepherd, J.E.; Quirk, J.J.
1998-02-23
Amrita is a software system for automating numerical investigations. The system is driven using its own powerful scripting language, Amrita, which facilitates both the composition and archiving of complete numerical investigations, as distinct from isolated computations. Once archived, an Amrita investigation can later be reproduced by any interested party, and not just the original investigator, for no cost other than the raw CPU time needed to parse the archived script. In fact, this entire lecture can be reconstructed in such a fashion. To do this, the script: constructs a number of shock-capturing schemes; runs a series of test problems; generates the plots shown; outputs the LaTeX to typeset the notes; and performs a myriad of behind-the-scenes tasks to glue everything together. Thus Amrita has all the characteristics of an operating system and should not be mistaken for a common-or-garden code.
Small target detection using objectness and saliency
NASA Astrophysics Data System (ADS)
Zhang, Naiwen; Xiao, Yang; Fang, Zhiwen; Yang, Jian; Wang, Li; Li, Tao
2017-10-01
We are motivated by the need for a generic object detection algorithm that achieves high recall for small targets in complex scenes with acceptable computational efficiency. We propose a novel object detection algorithm which has high localization quality with acceptable computational cost. First, we obtain the objectness map as in BING[1] and use NMS to get the top N points. Then, the k-means algorithm is used to cluster them into K classes according to their location, and we set the center points of the K classes as seed points. For each seed point, an object potential region is extracted. Finally, a fast salient object detection algorithm[2] is applied to the object potential regions to highlight object-like pixels, and a series of efficient post-processing operations is proposed to locate the targets. Our method runs at 5 FPS on 1000×1000 images and significantly outperforms previous methods on small targets in cluttered backgrounds.
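The seed-point stage lends itself to a brief sketch. Below, the objectness and saliency models are treated as external components (BING[1] and the fast saliency method[2]); only the clustering of the top-N objectness peaks into K seed points and the extraction of fixed-size potential regions are shown, with the window size as our assumption.

# Cluster objectness peaks into K seed points and cut candidate regions.
import numpy as np
from sklearn.cluster import KMeans

def candidate_regions(points, K=8, half=64, img_shape=(1000, 1000)):
    # points: (N, 2) array of (y, x) locations of the top-N objectness peaks
    seeds = KMeans(n_clusters=K, n_init=10).fit(points).cluster_centers_
    regions = []
    for cy, cx in seeds:
        y0, x0 = max(0, int(cy) - half), max(0, int(cx) - half)
        y1 = min(img_shape[0], int(cy) + half)
        x1 = min(img_shape[1], int(cx) + half)
        regions.append((y0, x0, y1, x1))  # potential region for saliency
    return regions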
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamazoe, Kenji; Mochi, Iacopo; Goldberg, Kenneth A.
2014-12-01
The wavefront retrieval by gradient descent algorithm that is typically applied to coherent or incoherent imaging is extended to retrieve a wavefront from a series of through-focus images under partially coherent illumination. For accurate retrieval, we modeled partial coherence as well as object transmittance in the gradient descent algorithm. However, this modeling increases the computation time due to the complexity of the partially coherent imaging simulation that is repeatedly used in the optimization loop. To accelerate the computation, we incorporate not only the Fourier transform but also an eigenfunction decomposition of the image. As a demonstration, the extended algorithm is applied to retrieve a field-dependent wavefront of a microscope operated at extreme ultraviolet wavelength (13.4 nm). The retrieved wavefront qualitatively matches the expected characteristics of the lens design.
Ball bearing heat analysis program (BABHAP)
NASA Technical Reports Server (NTRS)
1978-01-01
The Ball Bearing Heat Analysis Program (BABHAP) assembles a series of equations, some of which form nonlinear algebraic systems, in a logical order which, when solved, provides a comprehensive analysis of the load distribution among the balls, ball velocities, heat generation resulting from friction, applied load, and ball spinning, minimum lubricant film thickness, and many additional characteristics of ball bearing systems. Although the initial design requirements for BABHAP were dictated by the core limitations of the PDP 11/45 computer (approximately 8K of real words with a limited number of instructions), the program dimensions can easily be expanded for large-core computers such as the UNIVAC 1108. The PDP version of BABHAP is also operational on the UNIVAC system, with the exception that the PDP uses the 029 punch code and the UNIVAC uses the 026. A conversion program was written to allow transfer between machines.
A Series of Computational Neuroscience Labs Increases Comfort with MATLAB.
Nichols, David F
2015-01-01
Computational simulations allow for a low-cost, reliable means to demonstrate complex and often times inaccessible concepts to undergraduates. However, students without prior computer programming training may find working with code-based simulations to be intimidating and distracting. A series of computational neuroscience labs involving the Hodgkin-Huxley equations, an Integrate-and-Fire model, and a Hopfield Memory network were used in an undergraduate neuroscience laboratory component of an introductory level course. Using short focused surveys before and after each lab, student comfort levels were shown to increase drastically from a majority of students being uncomfortable or with neutral feelings about working in the MATLAB environment to a vast majority of students being comfortable working in the environment. Though change was reported within each lab, a series of labs was necessary in order to establish a lasting high level of comfort. Comfort working with code is important as a first step in acquiring computational skills that are required to address many questions within neuroscience.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Eric D.
1999-06-17
In the world of computer-based data acquisition and control, the graphical interface program LabVIEW from National Instruments is so ubiquitous that in many ways it has almost become the laboratory standard. To date, there have been approximately fifteen books concerning LabVIEW, but Professor Essick's treatise takes a completely different tack from all of the previous discussions. In the more standard treatments of the ways and wherefores of LabVIEW, such as LabVIEW Graphical Programming: Practical Applications in Instrumentation and Control by Gary W. Johnson (McGraw Hill, NY 1997), the emphasis has been on instructing the reader how to program LabVIEW to create a Virtual Instrument (VI) on the computer for interfacing to particular instruments. LabVIEW is written in "G", a graphical programming language developed by National Instruments. In the past the emphasis has been on training the experimenter to learn "G". Without going into details here, "G" incorporates the usual loops, arithmetic expressions, etc., found in many programming languages, but in an icon (graphical) environment. The net result is that LabVIEW contains all of the standard methods needed for interfacing to instruments, data acquisition, data analysis, and graphics, and also methodology to incorporate programs written in other languages into LabVIEW. Historically, according to Professor Essick, he developed a series of experiments for an upper-division laboratory course on computer-based instrumentation. His observation was that while many students had the necessary background in computer programming languages, there were students who had virtually no concept of writing a computer program, let alone a computer-based interfacing program. Thus began a concept for not only teaching computer-based instrumentation techniques, but also a method for the beginner to experience writing a computer program. Professor Essick saw LabVIEW as the "perfect environment in which to teach computer-based research skills." With this goal in mind, he has succeeded admirably. Advanced LabVIEW Labs presents a series of chapters devoted to not only introducing the reader to LabVIEW, but also to the concepts necessary for writing a successful computer program. Each chapter is an assignment for the student and is suitable for a ten-week course. The first topic introduces the while loop and waveform chart VIs. After learning how to launch LabVIEW, the student then learns how to use LabVIEW functions such as sine and cosine. The beauty of this and subsequent chapters is that the student is introduced immediately to computer-based instruction by learning how to display the results in graph form on the screen. At each point along the way, the student is not only introduced to another LabVIEW operation, but also to such subjects as spreadsheets for data storage, numerical integration, Fourier transformations, curve fitting algorithms, etc. The last few chapters conclude with the purpose of the learning module: computer-based instrumentation. Computer-based laboratory projects such as analog-to-digital conversion and digitizing oscilloscopes are treated. Advanced LabVIEW Labs finishes with a treatment of GPIB interfacing and, finally, the student is asked to create an operating VI for temperature control. This is an excellent text, not only as a treatise on LabVIEW but also as an introduction to computer programming logic.
All programmers who are struggling not only to learn how to interface computers to instruments, but also to understand top-down programming and other programming-language techniques, should add Advanced LabVIEW Labs to their computer library.
Testing for nonlinearity in time series: The method of surrogate data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Theiler, J.; Galdrikian, B.; Longtin, A.
1991-01-01
We describe a statistical approach for identifying nonlinearity in time series; in particular, we want to avoid claims of chaos when simpler models (such as linearly correlated noise) can explain the data. The method requires a careful statement of the null hypothesis which characterizes a candidate linear process, the generation of an ensemble of "surrogate" data sets which are similar to the original time series but consistent with the null hypothesis, and the computation of a discriminating statistic for the original and for each of the surrogate data sets. The idea is to test the original time series against the null hypothesis by checking whether the discriminating statistic computed for the original time series differs significantly from the statistics computed for each of the surrogate sets. We present algorithms for generating surrogate data under various null hypotheses, and we show the results of numerical experiments on artificial data using correlation dimension, Lyapunov exponent, and forecasting error as discriminating statistics. Finally, we consider a number of experimental time series -- including sunspots, electroencephalogram (EEG) signals, and fluid convection -- and evaluate the statistical significance of the evidence for nonlinear structure in each case. 56 refs., 8 figs.
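A minimal example of the most common surrogate construction, phase randomization, is sketched below in Python: it preserves the power spectrum (and hence the linear correlations) of the series while destroying any nonlinear structure. This is one standard realization of the approach, not necessarily the paper's exact algorithm.

# Phase-randomized surrogate consistent with a linearly-correlated-noise null.
import numpy as np

def phase_surrogate(x, rng=np.random.default_rng()):
    # randomize Fourier phases while keeping the spectral amplitudes
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, spec.size)
    phases[0] = np.angle(spec[0])          # keep the DC term unchanged
    if x.size % 2 == 0:
        phases[-1] = np.angle(spec[-1])    # keep the Nyquist term unchanged
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=x.size)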
2016-09-01
(BET) Method Scientific Operating Procedure Series: SOP-C. Jonathon Brame and Chris Griggs, Environmental Laboratory, U.S. Army Engineer Research and Development Center, September 2016.
Operation of the J-series thruster using inert gas
NASA Technical Reports Server (NTRS)
Rawlin, V. K.
1982-01-01
Electron bombardment ion thrusters using inert gases are candidates for large space systems. The J-Series 30-cm-diameter thruster, designed for operation up to 3 kW with mercury, is at a state of technology readiness. The characteristics of operation with xenon, krypton, and argon propellants in a J-Series thruster are compared with those obtained with mercury. The performance of the discharge chamber, ion optics, and neutralizer, the overall efficiency as functions of input power and specific impulse, and the thruster lifetime were evaluated. As expected, the discharge chamber performance with inert gases decreased with decreasing atomic mass. Aspects of the J-Series thruster design that would require modification to provide operation at high power with inert gases were identified.
Towards pattern generation and chaotic series prediction with photonic reservoir computers
NASA Astrophysics Data System (ADS)
Antonik, Piotr; Hermans, Michiel; Duport, François; Haelterman, Marc; Massar, Serge
2016-03-01
Reservoir Computing is a bio-inspired computing paradigm for processing time-dependent signals that is particularly well suited for analog implementations. Our team has demonstrated several photonic reservoir computers with performance comparable to digital algorithms on a series of benchmark tasks such as channel equalisation and speech recognition. Recently, we showed that our opto-electronic reservoir computer could be trained online with a simple gradient descent algorithm programmed on an FPGA chip. This setup makes it in principle possible to feed the output signal back into the reservoir, and thus highly enrich the dynamics of the system. This will allow complex prediction tasks to be tackled in hardware, such as pattern generation and chaotic and financial series prediction, which have so far only been studied in digital implementations. Here we report simulation results of our opto-electronic setup with an FPGA chip and output feedback applied to pattern generation and Mackey-Glass chaotic series prediction. The simulations take into account the major aspects of our experimental setup. We find that pattern generation can be easily implemented on the current setup with very good results. The Mackey-Glass series prediction task is more complex and requires a large reservoir and a more elaborate training algorithm. With these adjustments promising results are obtained, and we now know what improvements are needed to match previously reported numerical results. These simulation results will serve as a basis of comparison for the experiments we will carry out in the coming months.
Systolic array IC for genetic computation
NASA Technical Reports Server (NTRS)
Anderson, D.
1991-01-01
Measuring similarities between large sequences of genetic information is a formidable task requiring enormous amounts of computer time. Geneticists claim that nearly two months of CRAY-2 time are required to run a single comparison of the known database against the new bases that will be found this year, more than a CRAY-2 year for next year's genetic discoveries, and so on. The DNA IC, designed at HP-ICBD in cooperation with the California Institute of Technology and the Jet Propulsion Laboratory, is being implemented in order to move the task of genetic comparison onto workstations and personal computers, while vastly improving performance. The chip is a systolic (pumped) array comprising 16 processors, control logic, and global RAM, totaling 400,000 FETs. At 12 MHz, each chip performs 2.7 billion 16-bit operations per second. Using 35 of these chips in series on one PC board (performing nearly 100 billion operations per second), a sequence of 560 bases can be compared against the eventual total genome of 3 billion bases in minutes -- on a personal computer. While the designed purpose of the DNA chip is genetic research, other disciplines requiring similarity measurements between strings of 7-bit encoded data, such as cryptography and speech recognition, could make use of this chip as well. A mix of full custom design and standard cells, in CMOS34, was used to achieve these goals. Innovative test methods were developed to enhance controllability and observability in the array. This paper describes these techniques as well as the chip's functionality. The chip was designed in the 1989-90 timeframe.
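The chip's workload -- scoring the similarity of base strings -- is the kind of dynamic-programming recurrence that a systolic array evaluates in parallel along anti-diagonals. A sequential sketch of such a score (a Smith-Waterman-style local alignment; the scoring values are placeholders, not the chip's actual scheme):

```python
def local_alignment_score(a, b, match=2, mismatch=-1, gap=-1):
    """Best local-alignment score between sequences a and b.
    O(len(a)*len(b)) sequentially; a systolic array computes each
    anti-diagonal of the score table in a single pumped step."""
    prev = [0] * (len(b) + 1)
    best = 0
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, start=1):
            s = match if ca == cb else mismatch
            curr.append(max(0, prev[j - 1] + s, prev[j] + gap, curr[j - 1] + gap))
            best = max(best, curr[j])
        prev = curr
    return best

print(local_alignment_score("GATTACA", "GCATGCT"))
```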
SAR processing in the cloud for oil detection in the Arctic
NASA Astrophysics Data System (ADS)
Garron, J.; Stoner, C.; Meyer, F. J.
2016-12-01
A new world of opportunity is being thawed from the ice of the Arctic, driven by decreased persistent Arctic sea-ice cover and increases in shipping, tourism, and natural resource development. Tools that can automatically monitor key sea-ice characteristics and potential oil spills are essential for safe passage in these changing waters. Synthetic aperture radar (SAR) data can be used to discriminate sea-ice types and oil on the ocean surface, as well as for feature tracking. Additionally, SAR can image the Earth through the night and in most weather conditions. SAR data are volumetrically large and require significant computing power to manipulate. Algorithms designed to identify key environmental features, like oil spills, in SAR imagery require secondary processing and are computationally intensive, which can functionally limit their application in a real-time setting. Cloud processing is designed to manage big data and big data processing jobs by means of small cycles of off-site computation, eliminating up-front hardware costs. Pairing SAR data with cloud processing has allowed us to create and solidify a processing pipeline for SAR data products in the cloud to compare operational algorithms' efficiency and effectiveness when run using an Alaska Satellite Facility (ASF) defined Amazon Machine Image (AMI). The products created from this secondary processing were compared to determine which algorithm was most accurate in Arctic feature identification and what operational conditions were required to produce the results on the ASF-defined AMI. Results will be used to inform a series of recommendations to oil-spill response data managers and SAR users interested in expanding their analytical computing power.
Program For Generating Interactive Displays
NASA Technical Reports Server (NTRS)
Costenbader, Jay; Moleski, Walt; Szczur, Martha; Howell, David; Engelberg, Norm; Li, Tin P.; Misra, Dharitri; Miller, Philip; Neve, Leif; Wolf, Karl;
1991-01-01
Sun/Unix version of Transportable Applications Environment Plus (TAE+) computer program provides integrated, portable software environment for developing and running interactive window, text, and graphical-object-based application software systems. Enables programmer or nonprogrammer to construct easily custom software interface between user and application program and to move resulting interface program and its application program to different computers. TAE+ viewed as productivity tool for application developers and application end users, who benefit from resultant consistent and well-designed user interface sheltering them from intricacies of computer. Available in form suitable for following six different groups of computers: DEC VAXstation and other VMS VAX computers, Macintosh II computers running A/UX, Apollo Domain Series 3000, DEC VAX and reduced-instruction-set-computer workstations running Ultrix, Sun 3- and 4-series workstations running Sun OS, IBM RT/PC and PS/2 computers running AIX, and HP 9000 S...
Another Program For Generating Interactive Graphics
NASA Technical Reports Server (NTRS)
Costenbader, Jay; Moleski, Walt; Szczur, Martha; Howell, David; Engelberg, Norm; Li, Tin P.; Misra, Dharitri; Miller, Philip; Neve, Leif; Wolf, Karl;
1991-01-01
VAX/Ultrix version of Transportable Applications Environment Plus (TAE+) computer program provides integrated, portable software environment for developing and running interactive window, text, and graphical-object-based application software systems. Enables programmer or nonprogrammer to construct easily custom software interface between user and application program and to move resulting interface program and its application program to different computers. When used throughout company for wide range of applications, makes both application program and computer seem transparent, with noticeable improvements in learning curve. Available in form suitable for following six different groups of computers: DEC VAXstation and other VMS VAX computers, Macintosh II computers running A/UX, Apollo Domain Series 3000, DEC VAX and reduced-instruction-set-computer workstations running Ultrix, Sun 3- and 4-series workstations running Sun OS, IBM RT/PCs and PS/2 computers running AIX, and HP 9000 S...
High-Speed Recording of Test Data on Hard Disks
NASA Technical Reports Server (NTRS)
Lagarde, Paul M., Jr.; Newnan, Bruce
2003-01-01
Disk Recording System (DRS) is a systems-integration computer program for a direct-to-disk (DTD) high-speed data acquisition system (HDAS) that records rocket-engine test data. The HDAS consists partly of equipment originally designed for recording the data on tapes. The tape recorders were replaced with hard-disk drives, necessitating the development of DRS to provide an operating environment that ties two computers, a set of five DTD recorders, and signal-processing circuits from the original tape-recording version of the HDAS into one working system. DRS includes three subsystems: (1) one that generates a graphical user interface (GUI), on one of the computers, that serves as a main control panel; (2) one that generates a GUI, on the other computer, that serves as a remote control panel; and (3) a data-processing subsystem that performs tasks on the DTD recorders according to instructions sent from the main control panel. The software affords capabilities for dynamic configuration to record single or multiple channels from a remote source, remote starting and stopping of the recorders, indexing to prevent overwriting of data, and production of filtered frequency data from an original time-series data file.
NASA Technical Reports Server (NTRS)
Cohen, Jarrett
1999-01-01
Parallel computers built out of mass-market parts are cost-effectively performing data processing and simulation tasks. The Supercomputing (now known as "SC") series of conferences celebrated its 10th anniversary last November. While vendors have come and gone, the dominant paradigm for tackling big problems is still a shared-resource, commercial supercomputer. Growing numbers of users needing a cheaper or dedicated-access alternative are building their own supercomputers out of mass-market parts. Such machines are generally called Beowulf-class systems after the 11th century epic. This modern-day Beowulf story began in 1994 at NASA's Goddard Space Flight Center. At this laboratory for the Earth and space sciences, computing managers threw down a gauntlet: develop a $50,000 gigaFLOPS workstation for processing satellite data sets. Soon, Thomas Sterling and Don Becker were working on the Beowulf concept at the Universities Space Research Association (USRA)-run Center of Excellence in Space Data and Information Sciences (CESDIS). Beowulf clusters mix three primary ingredients: commodity personal computers or workstations, low-cost Ethernet networks, and the open-source Linux operating system. One of the larger Beowulfs is Goddard's Highly-parallel Integrated Virtual Environment, or HIVE for short.
VTGRAPH - GRAPHIC SOFTWARE TOOL FOR VT TERMINALS
NASA Technical Reports Server (NTRS)
Wang, C.
1994-01-01
VTGRAPH is a graphics software tool for DEC/VT or VT-compatible terminals, which are widely used by government and industry. It is a FORTRAN- or C-callable library designed to allow the user to deal with many computer environments which use VT terminals for window management and graphics systems. It also provides a PLOT10-like package plus color or shade capability for VT240, VT241, and VT300 terminals. The program is transportable to many different computers which use VT terminals. With this graphics package, the user can easily design more user-friendly interface programs and write PLOT10 programs on VT terminals with different computer systems. VTGRAPH was developed using the ReGIS graphics set, which provides a full range of graphics capabilities. The basic VTGRAPH capabilities are as follows: window management, PLOT10-compatible drawing, generic program routines for two- and three-dimensional plotting, and color or shaded graphics capability. The program was developed in VAX FORTRAN in 1988. VTGRAPH requires a ReGIS graphics terminal and a FORTRAN compiler. The program has been run on a DEC MicroVAX 3600 series computer operating under VMS 5.0, and has a virtual memory requirement of 5KB.
Space Software for Automotive Design
NASA Technical Reports Server (NTRS)
1988-01-01
John Thousand of Wolverine Western Corp. put his aerospace group to work on an unfamiliar job: designing a brake drum using computer design techniques. Computer design involves creation of a mathematical model of a product and analysis of its effectiveness in simulated operation. The technique enables study of performance and structural behavior of a number of different designs before settling on a final configuration. Wolverine employees attacked a traditional brake drum problem, the sudden buildup of heat during fast and repeated braking. The part of the brake drum not confined tends to change its shape under a combination of heat, physical pressure, and rotational forces, a condition known as bellmouthing. Since bellmouthing is a major factor in braking effectiveness, a solution of the problem would be a major advance in automotive engineering. A former NASA employee, now with Wolverine, knew of a series of NASA computer programs ideally suited to confronting bellmouthing. Originally developed as aids to rocket engine nozzle design, they are capable of analyzing problems generated in a rocket engine nozzle or an automotive brake drum by heat, expansion, pressure, and rotational forces. Use of these computer programs led to a new brake drum concept featuring a more durable axle and heat-transfer ribs, or fins, on the hub of the drum.
HydroClimATe: hydrologic and climatic analysis toolkit
Dickinson, Jesse; Hanson, Randall T.; Predmore, Steven K.
2014-01-01
The potential consequences of climate variability and climate change have been identified as major issues for the sustainability and availability of the worldwide water resources. Unlike global climate change, climate variability represents deviations from the long-term state of the climate over periods of a few years to several decades. Currently, rich hydrologic time-series data are available, but the combination of data preparation and statistical methods developed by the U.S. Geological Survey as part of the Groundwater Resources Program is relatively unavailable to hydrologists and engineers who could benefit from estimates of climate variability and its effects on periodic recharge and water-resource availability. This report documents HydroClimATe, a computer program for assessing the relations between variable climatic and hydrologic time-series data. HydroClimATe was developed for a Windows operating system. The software includes statistical tools for (1) time-series preprocessing, (2) spectral analysis, (3) spatial and temporal analysis, (4) correlation analysis, and (5) projections. The time-series preprocessing tools include spline fitting, standardization using a normal or gamma distribution, and transformation by a cumulative departure. The spectral analysis tools include discrete Fourier transform, maximum entropy method, and singular spectrum analysis. The spatial and temporal analysis tool is empirical orthogonal function analysis. The correlation analysis tools are linear regression and lag correlation. The projection tools include autoregressive time-series modeling and generation of many realizations. These tools are demonstrated in four examples that use stream-flow discharge data, groundwater-level records, gridded time series of precipitation data, and the Multivariate ENSO Index.
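Two of the listed preprocessing tools are simple enough to sketch directly. The following illustrative NumPy snippet (not HydroClimATe's code) shows normal-distribution standardization and the cumulative-departure transform:

```python
import numpy as np

def standardize(x):
    """Standardized anomalies: zero mean, unit variance."""
    return (x - np.mean(x)) / np.std(x, ddof=1)

def cumulative_departure(x):
    """Running sum of departures from the long-term mean; sustained
    wet or dry regimes appear as rising or falling limbs."""
    return np.cumsum(x - np.mean(x))
```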
Algorithms for computing the geopotential using a simple density layer
NASA Technical Reports Server (NTRS)
Morrison, F.
1976-01-01
Several algorithms have been developed for computing the potential and attraction of a simple density layer. These are numerical cubature, Taylor series, and a mixed analytic and numerical integration using a singularity-matching technique. A computer program has been written to combine these techniques for computing the disturbing acceleration on an artificial earth satellite. A total of 1640 equal-area, constant surface density blocks on an oblate spheroid are used. The singularity-matching algorithm is used in the subsatellite region, Taylor series in the surrounding zone, and numerical cubature on the rest of the earth.
MacDoctor: The Macintosh diagnoser
NASA Technical Reports Server (NTRS)
Lavery, David B.; Brooks, William D.
1990-01-01
When the Macintosh computer was first released, the primary user was a computer hobbyist who typically had a significant technical background and was highly motivated to understand the internal structure and operational intricacies of the computer. In recent years the Macintosh has become a widely accepted general-purpose computer used by an ever-increasing non-technical audience. This has led to a large base of users who have neither the interest nor the background to understand what is happening 'behind the scenes' when the Macintosh is put to use - or what should be happening when something goes wrong. Additionally, the Macintosh itself has evolved from a simple closed design to a complete family of processor platforms and peripherals with a tremendous number of possible configurations. With the increasing popularity of the Macintosh series, software and hardware developers are producing a product for every user's need. As the complexity of configuration possibilities grows, experienced or even expert knowledge is required to diagnose problems. This presents a problem to inexperienced or casual users, and it indicates a new Macintosh consumer need: a diagnostic tool able to determine the problem for the user. As the volume of Macintosh products has increased, this need has also increased.
Challenges in reducing the computational time of QSTS simulations for distribution system analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deboever, Jeremiah; Zhang, Xiaochen; Reno, Matthew J.
The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: the number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
Reducing power consumption while performing collective operations on a plurality of compute nodes
Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN
2011-10-18
Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.
Broadcasting collective operation contributions throughout a parallel computer
Faraj, Ahmad [Rochester, MN
2012-02-21
Methods, systems, and products are disclosed for broadcasting collective operation contributions throughout a parallel computer. The parallel computer includes a plurality of compute nodes connected together through a data communications network. Each compute node has a plurality of processors for use in collective parallel operations on the parallel computer. Broadcasting collective operation contributions throughout a parallel computer according to embodiments of the present invention includes: transmitting, by each processor on each compute node, that processor's collective operation contribution to the other processors on that compute node using intra-node communications; and transmitting on a designated network link, by each processor on each compute node according to a serial processor transmission sequence, that processor's collective operation contribution to the other processors on the other compute nodes using inter-node communications.
Computation of solar perturbations with Poisson series
NASA Technical Reports Server (NTRS)
Broucke, R.
1974-01-01
Description of a project for computing first-order perturbations of natural or artificial satellites by integrating the equations of motion on a computer with automatic Poisson series expansions. A basic feature of the method of solution is that the classical variation-of-parameters formulation is used rather than rectangular coordinates. However, the variation-of-parameters formulation uses the three rectangular components of the disturbing force rather than the classical disturbing function, so that there is no problem in expanding the disturbing function in series. Another characteristic of the variation-of-parameters formulation employed is that six rather unusual variables are used in order to avoid singularities at zero eccentricity and zero (or 90 deg) inclination. The integration process starts by assuming that all the orbit elements present on the right-hand sides of the equations of motion are constants. These right-hand sides are then simple Poisson series which can be obtained with the use of the Bessel expansions of the two-body problem in conjunction with certain iteration methods. These Poisson series can then be integrated term by term, and a first-order solution is obtained.
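The term-by-term integration step can be shown compactly. This sketch assumes a stripped-down Poisson series whose only angular argument is the mean anomaly M = n*t, stored as a mapping from (trigonometric kind, integer multiple of M) to a numeric coefficient; the paper's six-variable formulation is far richer:

```python
from collections import defaultdict

def integrate_poisson_series(series, n):
    """Term-by-term time integration for M = n*t:
        integral of a*cos(k*M) dt =  (a/(k*n)) * sin(k*M)
        integral of b*sin(k*M) dt = -(b/(k*n)) * cos(k*M)
    Constant (k = 0) terms would integrate to secular terms and are
    assumed absent here."""
    result = defaultdict(float)
    for (kind, k), coeff in series.items():
        if k == 0:
            raise ValueError("k = 0 term would produce a secular term")
        if kind == 'cos':
            result[('sin', k)] += coeff / (k * n)
        else:
            result[('cos', k)] -= coeff / (k * n)
    return dict(result)

# integral of 0.5*cos(2M) dt with mean motion n = 0.01:
print(integrate_poisson_series({('cos', 2): 0.5}, n=0.01))
```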
Care 3 model overview and user's guide, first revision
NASA Technical Reports Server (NTRS)
Bavuso, S. J.; Petersen, P. L.
1985-01-01
A manual was written to introduce the CARE III (Computer-Aided Reliability Estimation) capability to reliability and design engineers who are interested in predicting the reliability of highly reliable fault-tolerant systems. It was also structured to serve as a quick-look reference manual for more experienced users. The guide covers CARE III modeling and reliability predictions for execution on the CDC Cyber 170 series computers, the DEC VAX-11/700 series computers, and most machines that compile ANSI Standard FORTRAN 77.
Stimec, Bojan V; Andersen, Bjarte T; Benz, Stefan R; Fasel, Jean H D; Augestad, Knut M; Ignjatovic, Dejan
2018-06-01
The middle colic artery (MCA) is of crucial importance in abdominal surgery, for laparoscopic or open right and transverse colectomies. Against this background, a large number of reports concerning anatomical variations of the MCA have been published, intended to contribute to the improvement of operative techniques for the treatment of colon cancer. Despite this extensive literature, briefly reviewed in the present paper, a course of the MCA posterior to the superior mesenteric vein, called a retromesenteric trajectory, has been reported only once, to the best of our knowledge. A total series of 507 patients included in two prospective trials concerning laparoscopic or open right colectomy for cancer between 2011 and 2017 is reported. The investigation included preoperative or postoperative multidetector computed tomography angiography. We found four (0.79%) cases of retromesenteric MCA. They all underwent meticulous image analysis with mesenteric vessel road mapping, detailed morphometry, and surgical validation, which revealed that, apart from their course, those cases did not differ significantly from the rest of the series. This paper therefore documents a worth-knowing variant that can cause considerable confusion for an operating surgeon unaware of the abnormality, and shows its concrete impact on patient-tailored surgical practice, in particular for laparoscopic D3 colectomy (including the "uncinate process first" approach).
On the estimation of brain signal entropy from sparse neuroimaging data
Grandy, Thomas H.; Garrett, Douglas D.; Schmiedek, Florian; Werkle-Bergner, Markus
2016-01-01
Multi-scale entropy (MSE) has been recently established as a promising tool for the analysis of the moment-to-moment variability of neural signals. Appealingly, MSE provides a measure of the predictability of neural operations across the multiple time scales on which the brain operates. An important limitation in the application of MSE to some classes of neural signals is MSE's apparent reliance on long time series. However, this sparse-data limitation in MSE computation could potentially be overcome via MSE estimation across shorter time series that are not necessarily acquired continuously (e.g., in fMRI block designs). In the present study, using simulated, EEG, and fMRI data, we examined the dependence of the accuracy and precision of MSE estimates on the number of data points per segment and the total number of data segments. As hypothesized, MSE estimation across discontinuous segments was comparably accurate and precise, regardless of segment length. A key advance of our approach is that it allows the calculation of MSE scales not previously accessible from the native segment lengths. Consequently, our results may permit a far broader range of applications of MSE when gauging moment-to-moment dynamics in the sparse and/or discontinuous neurophysiological data typical of many modern cognitive neuroscience study designs. PMID:27020961
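The core trick -- counting template matches within each segment but pooling the counts before taking the log-ratio -- can be sketched for single-scale sample entropy as follows (illustrative NumPy code, not the authors' implementation):

```python
import numpy as np

def _match_count(seg, m, r):
    """Template pairs of length m within Chebyshev tolerance r,
    self-matches excluded."""
    templ = np.array([seg[i:i + m] for i in range(len(seg) - m + 1)])
    count = 0
    for i in range(len(templ) - 1):
        d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
        count += int(np.sum(d <= r))
    return count

def pooled_sample_entropy(segments, m=2, r_factor=0.5):
    """Sample entropy over discontinuous segments: templates never span
    a segment boundary, but match counts are aggregated across segments
    before forming the ratio, which is what makes short recordings usable."""
    segments = [np.asarray(s, dtype=float) for s in segments]
    r = r_factor * np.std(np.concatenate(segments))
    b = sum(_match_count(s, m, r) for s in segments)      # length-m matches
    a = sum(_match_count(s, m + 1, r) for s in segments)  # length-(m+1) matches
    return -np.log(a / b)
```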
Management of natural resources through automatic cartographic inventory. [France
NASA Technical Reports Server (NTRS)
Rey, P.; Gourinard, Y.; Cambou, F. (Principal Investigator)
1974-01-01
The author has identified the following significant results. (1) Accurate recognition of previously known ground features from ERTS-1 imagery has been confirmed and a probable detection range for the major signatures can be given. (2) Unidentified elements, however, must be decoded by means of the equal densitometric value zone method. (3) Determination of these zonings involves an analogical treatment of images using the color equidensity methods (pseudo-color), color composites and especially temporal color composite (repetitive superposition). (4) After this analogical preparation, the digital equidensities can be processed by computer in the four MSS bands, according to a series of transfer operations from imagery and automatic cartography.
All-optical animation projection system with rotating fieldstone.
Ishii, Yuko; Takayama, Yoshihisa; Kodate, Kashiko
2007-06-11
A simple and compact rewritable holographic memory system using a fieldstone of Ulexite is proposed. The role of the fieldstone is to impose random patterns on the reference beam to record plural images with the random-reference multiplexing scheme. The operations for writing and reading holograms are carried out by simply rotating the fieldstone in one direction. One of the features of this approach is found in a way to generate random patterns without computer drawings. The experimental study confirms that our system enables the smooth readout of the stored images one after another so that the series of reproduced images are projected as an animation.
All-optical animation projection system with rotating fieldstone
NASA Astrophysics Data System (ADS)
Ishii, Yuko; Takayama, Yoshihisa; Kodate, Kashiko
2007-06-01
A simple and compact rewritable holographic memory system using a fieldstone of Ulexite is proposed. The role of the fieldstone is to impose random patterns on the reference beam to record plural images with the random-reference multiplexing scheme. The operations for writing and reading holograms are carried out by simply rotating the fieldstone in one direction. One of the features of this approach is found in a way to generate random patterns without computer drawings. The experimental study confirms that our system enables the smooth readout of the stored images one after another so that the series of reproduced images are projected as an animation.
Small scale sequence automation pays big dividends
NASA Technical Reports Server (NTRS)
Nelson, Bill
1994-01-01
Galileo sequence design and integration are supported by a suite of formal software tools. Sequence review, however, is largely a manual process with reviewers scanning hundreds of pages of cryptic computer printouts to verify sequence correctness. Beginning in 1990, a series of small, PC based sequence review tools evolved. Each tool performs a specific task but all have a common 'look and feel'. The narrow focus of each tool means simpler operation, and easier creation, testing, and maintenance. Benefits from these tools are (1) decreased review time by factors of 5 to 20 or more with a concomitant reduction in staffing, (2) increased review accuracy, and (3) excellent returns on time invested.
31 CFR 359.51 - What book-entry Series I savings bonds are included in the computation?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false What book-entry Series I savings bonds are included in the computation? 359.51 Section 359.51 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY BUREAU OF THE PUBLIC...
31 CFR 359.32 - What definitive Series I savings bonds are excluded from the computation?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false What definitive Series I savings bonds are excluded from the computation? 359.32 Section 359.32 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY BUREAU OF THE PUBLIC...
Technology Tools in the Information Age Classroom. Using Technology in the Classroom Series.
ERIC Educational Resources Information Center
Finkel, LeRoy
This book is designed for use in an introductory, college level course on educational technology, and no prior experience with computers or computing is assumed. The first of a series on technology in the classroom, the text provides a foundation for exploring more specific topics in greater depth. The book is divided into three…
Cross-bispectrum computation and variance estimation
NASA Technical Reports Server (NTRS)
Lii, K. S.; Helland, K. N.
1981-01-01
A method for the estimation of cross-bispectra of discrete real time series is developed. The asymptotic variance properties of the bispectrum are reviewed, and a method for the direct estimation of bispectral variance is given. The symmetry properties are described which minimize the computations necessary to obtain a complete estimate of the cross-bispectrum in the right-half-plane. A procedure is given for computing the cross-bispectrum by subdividing the domain into rectangular averaging regions which help reduce the variance of the estimates and allow easy application of the symmetry relationships to minimize the computational effort. As an example of the procedure, the cross-bispectrum of a numerically generated, exponentially distributed time series is computed and compared with theory.
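A minimal direct estimator in the spirit described, with segment averaging standing in for the rectangular averaging regions, might look like this (illustrative NumPy sketch; windowing and normalization omitted):

```python
import numpy as np

def cross_bispectrum(x, y, z, nseg=16):
    """Segment-averaged direct estimate of
    B(f1, f2) = E[X(f1) * Y(f2) * conj(Z(f1 + f2))].
    Averaging over segments reduces variance at the cost of
    frequency resolution."""
    n = min(len(x), len(y), len(z))
    seg_len = n // nseg
    f1 = np.arange(seg_len)[:, None]
    f2 = np.arange(seg_len)[None, :]
    acc = np.zeros((seg_len, seg_len), dtype=complex)
    for s in range(nseg):
        sl = slice(s * seg_len, (s + 1) * seg_len)
        X, Y, Z = (np.fft.fft(v[sl]) for v in (x, y, z))
        acc += X[f1] * Y[f2] * np.conj(Z[(f1 + f2) % seg_len])
    return acc / nseg
```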
NASA Astrophysics Data System (ADS)
Philip, S.; Martin, R. V.; Keller, C. A.
2015-11-01
Chemical transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemical transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to temporal resolution. Subsequently, we compare the tracers simulated with operator durations from 10 to 60 min, as typically used by global chemical transport models, and identify the timesteps that optimize both computational expense and simulation accuracy. We found that longer transport timesteps increase concentrations of emitted species such as nitrogen oxides and carbon monoxide, since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production at longer transport timesteps. Longer chemical timesteps decrease sulfate and ammonium but increase nitrate, due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by an order of magnitude from fine (5 min) to coarse (60 min) temporal resolution. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, ozone, carbon monoxide, and secondary inorganic aerosols with a finer temporal or spatial resolution taken as truth. Simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to the longest (60 min) temporal resolution. Chemical timesteps twice that of the transport timestep offer more simulation accuracy per unit computation. However, simulation error from coarser spatial resolution generally exceeds that from longer timesteps; e.g., degrading from 2° × 2.5° to 4° × 5° increases error by an order of magnitude. We recommend prioritizing fine spatial resolution before considering different temporal resolutions in offline chemical transport models. We encourage chemical transport model users to specify in publications the durations of operators, due to their effects on simulation accuracy.
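The experiment varies the durations of the split transport and chemistry operators; the structure is easiest to see as a toy split-step loop with nested timesteps (not GEOS-Chem code; the operator functions are user-supplied placeholders):

```python
import numpy as np

def split_step_run(c0, t_end, dt_transport, dt_chem, transport_op, chemistry_op):
    """Operator-split integration with transport applied every
    dt_transport minutes and chemistry every dt_chem minutes (integer
    minutes assumed); the paper recommends a chemical timestep twice
    the transport timestep.  transport_op and chemistry_op are
    functions (c, dt) -> c."""
    assert dt_chem % dt_transport == 0, "timesteps must nest"
    ratio = dt_chem // dt_transport
    c = np.asarray(c0, dtype=float).copy()
    for k in range(t_end // dt_transport):
        c = transport_op(c, dt_transport)
        if (k + 1) % ratio == 0:        # chemistry on the coarser cadence
            c = chemistry_op(c, dt_chem)
    return c
```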
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-26
... Maintenance Manual (AMM) includes chapters 05-10 "Time Limits", 05-15 "Critical Design Configuration... 05, "Time Limits/Maintenance Checks," of BAe 146 Series/AVRO 146-RJ Series Aircraft Maintenance... Chapter 05, "Time Limits/Maintenance Checks," of the BAE SYSTEMS (Operations) Limited BAe 146 Series...
A Power Series Expansion and Its Applications
ERIC Educational Resources Information Center
Chen, Hongwei
2006-01-01
Using the power series solution of a differential equation and the computation of a parametric integral, two elementary proofs are given for the power series expansion of (arcsin x)^2, as well as some applications of this expansion.
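For reference, the expansion in question is the classical identity

```latex
(\arcsin x)^2 \;=\; \frac{1}{2}\sum_{n=1}^{\infty}\frac{(2x)^{2n}}{n^{2}\binom{2n}{n}},
\qquad |x| \le 1,
```

whose first term, x^2, matches the Taylor behavior of (arcsin x)^2 near the origin.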
Time-Of-Flight Camera, Optical Tracker and Computed Tomography in Pairwise Data Registration
Badura, Pawel; Juszczyk, Jan; Pietka, Ewa
2016-01-01
Purpose: A growing number of medical applications, including minimally invasive surgery, depend on multi-modal or multi-sensor data processing. Fast and accurate 3D scene analysis, comprising data registration, seems to be crucial for the development of computer-aided diagnosis and therapy. The advancement of surface tracking systems based on optical trackers already plays an important role in surgical procedure planning. However, new modalities, like time-of-flight (ToF) sensors, widely explored in non-medical fields, are powerful and have the potential to become a part of computer-aided surgery set-ups. Connecting different acquisition systems promises to provide valuable support for operating room procedures. Therefore, a detailed analysis of the accuracy of such multi-sensor positioning systems is needed. Methods: We present a system combining pre-operative CT series with intra-operative ToF-sensor and optical tracker point clouds. The methodology contains: optical sensor set-up and ToF-camera calibration procedures, data pre-processing algorithms, and a registration technique. The data pre-processing yields a surface in the case of CT, and point clouds for the ToF-sensor and marker-driven optical tracker representations of an object of interest. The applied registration technique is based on the Iterative Closest Point algorithm. Results: The experiments validate the registration of each pair of modalities/sensors involving phantoms of four various human organs in terms of Hausdorff distance and mean absolute distance metrics. The best surface alignment was obtained for the CT and optical tracker combination, whereas the worst for experiments involving the ToF-camera. Conclusion: The obtained accuracies encourage further development of multi-sensor systems. The presented substantive discussion concerning the system limitations and possible improvements, mainly related to the depth information produced by the ToF-sensor, is useful for computer-aided surgery developers. PMID:27434396
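The registration step is standard point-to-point Iterative Closest Point, which alternates nearest-neighbour matching with a closed-form rigid fit. A compact sketch (illustrative NumPy/SciPy code, not the authors' pipeline):

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping paired points
    src onto dst (Kabsch/SVD solution)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def icp(source, target, n_iter=50, tol=1e-6):
    """Point-to-point ICP: match each source point to its nearest
    target point, fit the rigid transform, apply, repeat until the
    mean residual stops improving."""
    tree = cKDTree(target)
    src, prev_err = source.copy(), np.inf
    for _ in range(n_iter):
        dist, idx = tree.query(src)
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src, err
```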
Visual Analysis of Cloud Computing Performance Using Behavioral Lines.
Muelder, Chris; Zhu, Biao; Chen, Wei; Zhang, Hongxin; Ma, Kwan-Liu
2016-02-29
Cloud computing is an essential technology to Big Data analytics and services. A cloud computing system is often comprised of a large number of parallel computing and storage devices. Monitoring the usage and performance of such a system is important for efficient operations, maintenance, and security. Tracing every application on a large cloud system is untenable due to scale and privacy issues. But profile data can be collected relatively efficiently by regularly sampling the state of the system, including properties such as CPU load, memory usage, network usage, and others, creating a set of multivariate time series for each system. Adequate tools for studying such large-scale, multidimensional data are lacking. In this paper, we present a visual based analysis approach to understanding and analyzing the performance and behavior of cloud computing systems. Our design is based on similarity measures and a layout method to portray the behavior of each compute node over time. When visualizing a large number of behavioral lines together, distinct patterns often appear suggesting particular types of performance bottleneck. The resulting system provides multiple linked views, which allow the user to interactively explore the data by examining the data or a selected subset at different levels of detail. Our case studies, which use datasets collected from two different cloud systems, show that this visual based approach is effective in identifying trends and anomalies of the systems.
Attitude Accuracy Study for the Earth Observing System (EOS) AM-1 Spacecraft
NASA Technical Reports Server (NTRS)
Lesikar, James D., II; Garrick, Joseph C.
1996-01-01
Earth Observing System (EOS) spacecraft will take measurements of the Earth's clouds, oceans, atmosphere, land, and radiation balance. These EOS spacecraft are part of the National Aeronautics and Space Administration's Mission to Planet Earth and consist of several series of satellites, with each series specializing in a particular class of observations. This paper focuses on the EOS AM-1 spacecraft, which is the first of three satellites constituting the EOS AM series (morning equatorial crossing) and the initial spacecraft of the EOS program. EOS AM-1 has a stringent onboard attitude knowledge requirement of 36/41/44 arc seconds (3 sigma) in yaw/roll/pitch, respectively. During normal mission operations, attitude is determined onboard using an extended Kalman sequential filter via measurements from two charge-coupled device (CCD) star trackers, one Fine Sun Sensor, and an Inertial Rate Unit. The attitude determination error analysis system (ADEAS) was used to model the spacecraft and mission profile, and in a worst-case scenario with only one star tracker in operation, the attitude uncertainty was 9.7/11.5/12.2 arc seconds (3 sigma) in yaw/roll/pitch. The quoted result assumed the spacecraft was in nominal attitude, using only the 1-rotation-per-orbit motion of the spacecraft about the pitch axis for calibration of the gyro biases. Deviations from the nominal attitude would show greater attitude uncertainties, unless calibration maneuvers which roll and/or yaw the spacecraft have been performed. Such maneuvers permit computation of the gyro misalignments, and the attitude knowledge requirement would remain satisfied.
Performing an allreduce operation on a plurality of compute nodes of a parallel computer
Faraj, Ahmad [Rochester, MN
2012-04-17
Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
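A logical-ring allreduce of this kind is commonly realized as a reduce-scatter phase followed by an allgather phase, each taking P-1 steps for P ring members. The following simulates one ring sequentially (illustrative Python; a real implementation exchanges messages concurrently):

```python
import numpy as np

def ring_allreduce(chunks_per_node):
    """chunks_per_node[i] is node i's contribution, pre-split into P
    chunks for P nodes.  After the two phases every node holds the
    elementwise sum of all contributions."""
    P = len(chunks_per_node)
    buf = [[c.astype(float).copy() for c in node] for node in chunks_per_node]
    # Reduce-scatter: at step s, node i adds its chunk (i - s) into node
    # i+1's copy; after P-1 steps node i holds completed chunk (i+1) mod P.
    for s in range(P - 1):
        for i in range(P):
            k = (i - s) % P
            buf[(i + 1) % P][k] += buf[i][k]
    # Allgather: circulate the completed chunks once around the ring.
    for s in range(P - 1):
        for i in range(P):
            k = (i + 1 - s) % P
            buf[(i + 1) % P][k] = buf[i][k].copy()
    return buf

# Two nodes, two chunks each: every node ends with the elementwise sums.
data = [[np.arange(4.0) + i, np.ones(4) * i] for i in range(2)]
print(ring_allreduce(data))
```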
Electric Grid Expansion Planning with High Levels of Variable Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadley, Stanton W.; You, Shutang; Shankar, Mallikarjun
2016-02-01
Renewables are taking a large proportion of generation capacity in U.S. power grids. As their randomness has increasing influence on power system operation, it is necessary to consider their impact on system expansion planning. To this end, this project studies the generation and transmission expansion co-optimization problem of the U.S. Eastern Interconnection (EI) power grid with a high wind power penetration rate. In this project, the generation and transmission expansion problem for the EI system is modeled as a mixed-integer programming (MIP) problem. This study analyzed a time series creation method to capture the diversity of load and wind power across balancing regions in the EI system. The obtained time series can be easily introduced into the MIP co-optimization problem and then solved robustly through available MIP solvers. Simulation results show that the proposed time series generation method and the expansion co-optimization model can improve the expansion result significantly after considering the diversity of wind and load across EI regions. The improved expansion plan that combines generation and transmission will aid system planners and policy makers in maximizing social welfare. This study shows that modeling load and wind variations and diversities across balancing regions produces significantly different expansion results compared with former studies. For example, if wind is modeled in more detail (by increasing the number of wind output levels) so that more wind blocks are considered in expansion planning, transmission expansion will be larger and the expansion timing will be earlier. Regarding generation expansion, more wind scenarios will slightly reduce wind generation expansion in the EI system and increase the expansion of other generation such as gas. Also, adopting detailed wind scenarios reveals that it may be uneconomic to expand transmission networks for transmitting a large amount of wind power over a long distance in the EI system. Incorporating more details of renewables in expansion planning will inevitably increase the computational burden. Therefore, high-performance computing (HPC) techniques are urgently needed for power system operation and planning optimization. As a scoping study task, this project tested some preliminary parallel computation techniques, such as breaking down the simulation task into several sub-tasks based on chronology splitting or sample splitting, and then assigning these sub-tasks to different cores. Testing results show significant time reduction when a simulation task is split into several sub-tasks for parallel execution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rest, J; Gehl, S M
1979-01-01
GRASS-SST and FASTGRASS are mechanistic computer codes for predicting fission-gas behavior in UO2-base fuels during steady-state and transient conditions. FASTGRASS was developed in order to satisfy the need for a fast-running alternative to GRASS-SST. Although based on GRASS-SST, FASTGRASS is approximately an order of magnitude quicker in execution. The GRASS-SST transient analysis has evolved through comparisons of code predictions with the fission-gas release and physical phenomena that occur during reactor operation and transient direct-electrical-heating (DEH) testing of irradiated light-water reactor fuel. The FASTGRASS calculational procedure is described in this paper, along with models of key physical processes included in both FASTGRASS and GRASS-SST. Predictions of fission-gas release obtained from GRASS-SST and FASTGRASS analyses are compared with experimental observations from a series of DEH tests. The major conclusion is that the computer codes should include an improved model for the evolution of the grain-edge porosity.
NASA Technical Reports Server (NTRS)
Darzi, Michael; Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor)
1992-01-01
Methods for detecting and screening cloud contamination from satellite-derived visible and infrared data are reviewed in this document. The methods are applicable to past, present, and future polar-orbiting satellite radiometers. Such instruments include the Coastal Zone Color Scanner (CZCS), operational from 1978 through 1986; the Advanced Very High Resolution Radiometer (AVHRR); the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), scheduled for launch in August 1993; and the Moderate Resolution Imaging Spectrometer (MODIS). Constant-threshold methods are the least demanding computationally, and often provide adequate results. An improvement to these methods is to determine the thresholds dynamically by adjusting them according to the areal and temporal distributions of the surrounding pixels. Spatial coherence methods set thresholds based on the expected spatial variability of the data. Other statistically derived methods and various combinations of basic methods are also reviewed. The complexity of the methods is ultimately limited by the computing resources. Finally, some criteria for evaluating cloud screening methods are discussed.
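A dynamic threshold of the sort described can be sketched in a few lines: estimate the local background, then flag pixels that stand out by a robust margin (illustrative NumPy/SciPy code; the window size and multiplier are placeholders):

```python
import numpy as np
from scipy.ndimage import median_filter

def dynamic_cloud_mask(radiance, window=25, k=2.5):
    """Adaptive-threshold cloud screen: flag pixels whose visible-band
    radiance exceeds the local background, estimated with a sliding
    median, by k robust standard deviations.  A constant-threshold
    screen is the special case of a fixed global cutoff."""
    background = median_filter(radiance, size=window)
    resid = radiance - background
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # MAD estimate
    return resid > k * sigma
```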
48 CFR 52.227-14 - Rights in Data-General.
Code of Federal Regulations, 2010 CFR
2010-10-01
... software. Computer software—(1) Means (i) Computer programs that comprise a series of instructions, rules... or computer software documentation. Computer software documentation means owner's manuals, user's... medium, that explain the capabilities of the computer software or provide instructions for using the...
Computer Simulated Visual and Tactile Feedback as an Aid to Manipulator and Vehicle Control,
1981-05-08
Artificial Intelligence Versus Supervisory Control: The use of computers to aid human operators can be divided into two categories: artificial intelligence and supervisory control. Artificial intelligence (A.I.) attempts to give the computer maximum intelligence and to replace all operator functions by the computer.
Robotics On-Board Trainer (ROBoT)
NASA Technical Reports Server (NTRS)
Johnson, Genevieve; Alexander, Greg
2013-01-01
ROBoT is an on-orbit version of the ground-based Dynamics Skills Trainer (DST) that astronauts use for training on a frequent basis. This software consists of two primary software groups. The first series of components is responsible for displaying the graphical scenes. The remaining components are responsible for simulating the Mobile Servicing System (MSS), the Japanese Experiment Module Remote Manipulator System (JEMRMS), and the H-II Transfer Vehicle (HTV) Free Flyer Robotics Operations. The MSS simulation software includes: Robotic Workstation (RWS) simulation, a simulation of the Space Station Remote Manipulator System (SSRMS), a simulation of the ISS Command and Control System (CCS), and a portion of the Portable Computer System (PCS) software necessary for MSS operations. These components all run under the CentOS4.5 Linux operating system. The JEMRMS simulation software includes real-time, hardware-in-the-loop (HIL) dynamics, manipulator multi-body dynamics, and a moving-object contact model with Trick's discrete-time scheduling. The JEMRMS DST will be used as a functional proficiency and skills trainer for flight crews. The HTV Free Flyer Robotics Operations simulation software adds a functional simulation of HTV vehicle controllers, sensors, and data to the MSS simulation software. These components are intended to support HTV ISS visiting vehicle analysis and training. The scene generation software uses DOUG (Dynamic On-orbit Ubiquitous Graphics) to render the graphical scenes. DOUG runs on a laptop under the CentOS4.5 Linux operating system. DOUG is an OpenGL-based 3D computer graphics rendering package. It uses pre-built three-dimensional models of on-orbit ISS and space shuttle systems elements, and provides real-time views of various station and shuttle configurations.
Quantifying Nanoparticle Release from Nanotechnology: Scientific Operating Procedure Series: SOP C 3
2017-02-01
Quantifying Nanoparticle Release from Nanotechnology, Scientific Operating Procedure Series: SOP-C-3. David P. Martin, Aimee R. Poda, and Anthony J. Bednar, Environmental Laboratory, U.S. Army Engineer Research and Development Center, February 2017.
NASA Technical Reports Server (NTRS)
Wagner, Michael Broderick
1987-01-01
The modeled cascade cells offer an alternative to conventional series cascade designs that require a monolithic intercell ohmic contact. Selective electrodes provide a simple means of fabricating three-terminal devices, which can be configured in complementary pairs to circumvent the attendant losses and fabrication complexities of intercell ohmic contacts. Moreover, selective electrodes allow incorporation of additional layers in the upper subcell, which can improve spectral response and increase radiation tolerance. Realistic simulations of such cells operating under one-sun AM0 conditions show that the seven-layer structure is optimum from the standpoint of beginning-of-life efficiency and radiation tolerance. Projected efficiencies exceed 26 percent. Under higher concentration factors, it should be possible to achieve efficiencies beyond 30 percent. However, simulating operation at high concentration will require a model for resistive losses. Overall, these devices appear to be a promising contender for future space applications.
StrateGene: object-oriented programming in molecular biology.
Carhart, R E; Cash, H D; Moore, J F
1988-03-01
This paper describes some of the ways that object-oriented programming methodologies have been used to represent and manipulate biological information in a working application. When running on a Xerox 1100 series computer, StrateGene functions as a genetic engineering workstation for the management of information about cloning experiments. It represents biological molecules, enzymes, fragments, and methods as classes, subclasses, and members in a hierarchy of objects. These objects may have various attributes, which themselves can be defined and classified. The attributes and their values can be passed from the classes of objects down to the subclasses and members. The user can modify the objects and their attributes while using them. New knowledge and changes to the system can be incorporated relatively easily. The operations on the biological objects are associated with the objects themselves. This makes it easier to invoke them correctly and allows generic operations to be customized for the particular object.
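The attribute-passing behaviour described -- values defined on a class flowing down to subclasses and members unless overridden -- is straightforward to mimic in modern Python (an illustrative analogue; StrateGene itself ran on Xerox 1100 series Lisp machines):

```python
class BioObject:
    """Base of the hierarchy; per-class `attributes` dictionaries are
    searched from the most specific class upward, so subclass values
    shadow inherited ones."""
    attributes = {}

    def attribute(self, name):
        for cls in type(self).__mro__:                 # walk class hierarchy
            attrs = getattr(cls, 'attributes', {})
            if name in attrs:
                return attrs[name]
        raise KeyError(name)

class Enzyme(BioObject):
    attributes = {'kind': 'protein'}

class RestrictionEnzyme(Enzyme):
    attributes = {'cuts_dna': True}

ecori = RestrictionEnzyme()
print(ecori.attribute('cuts_dna'), ecori.attribute('kind'))  # True protein
```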
A kinetic model for the transport of electrons in a graphene layer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fermanian Kammerer, Clotilde, E-mail: Clotilde.Fermanian@u-pec.fr; Méhats, Florian, E-mail: florian.mehats@univ-rennes1.fr
In this article, we propose a new numerical scheme for the computation of the transport of electrons in a graphene device. The underlying quantum model for graphene is a massless Dirac equation, whose eigenvalues display a conical singularity responsible for non-adiabatic transitions between the two modes. We first derive a kinetic model which takes the form of two Boltzmann equations coupled by a collision operator modeling the non-adiabatic transitions. This collision term includes a Landau–Zener transfer term and a jump operator whose presence is essential in order to ensure good energy conservation during the transitions. We propose an algorithmic realization of the semi-group solving the kinetic model, by a particle method. We give an analytic justification of the model and propose a series of numerical experiments studying the influences of the various sources of errors between the quantum and the kinetic models.
NASA Technical Reports Server (NTRS)
Chen, J. C.
1995-01-01
A disk-on-rod inside a corrugated horn is one of the horn configurations for dual-frequency or wide-band operation. A mode-matching analysis method is described. A disk-on-rod inside a corrugated horn is represented as a series of coaxial waveguide sections and circular waveguide sections connected to each other. Three kinds of junctions need to be considered: coaxial-to-coaxial, coaxial-to-circular, and circular-to-circular. A computer program was developed to calculate the scattering matrix and the radiation pattern of a disk-on-rod inside a corrugated horn. The software was verified by experiment, and good agreement between calculation and measurement was obtained. The disk-on-rod inside a corrugated horn design gives an option to the Deep Space Network dual-frequency operation system, which currently is a two-horn/one-dichroic-plate system.
Performance, Agility and Cost of Cloud Computing Services for NASA GES DISC Giovanni Application
NASA Astrophysics Data System (ADS)
Pham, L.; Chen, A.; Wharton, S.; Winter, E. L.; Lynnes, C.
2013-12-01
The NASA Goddard Earth Science Data and Information Services Center (GES DISC) is investigating the performance, agility, and cost of Cloud computing for GES DISC applications. Giovanni (Geospatial Interactive Online Visualization ANd aNalysis Infrastructure), one of the core applications at the GES DISC for online climate-related Earth science data access, subsetting, analysis, visualization, and downloading, was used to evaluate the feasibility and effort of porting an application to the Amazon Cloud Services platform. The performance and the cost of running Giovanni on the Amazon Cloud were compared to similar parameters for the GES DISC local operational system. A Giovanni time-series analysis of aerosol absorption optical depth (388 nm) from OMI (Ozone Monitoring Instrument)/Aura was selected for these comparisons. All required data were pre-cached in both the Cloud and the local system to avoid data transfer delays. Time series of 3, 6, 12, and 24 months were analyzed on both systems, and the processing times for the analyses were used to evaluate system performance. To investigate application agility, Giovanni was installed and tested on multiple Cloud platforms. The cost of using a Cloud computing platform mainly consists of computing, storage, data requests, and data transfer in/out. The Cloud computing cost is calculated based on an hourly rate, and the storage cost is calculated based on a rate of gigabytes per month. Incoming data transfer is free; for data transfer out, the cost is based on a per-gigabyte rate. The costs for a local server system consist of buying hardware/software, system maintenance/updating, and operating costs. The results showed that the Cloud platform had 38% better performance and cost 36% less than the local system. This investigation shows the potential of cloud computing to increase system performance and lower the overall cost of system management.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayashi, Kenta; Department of Chemistry, Biology, and Biotechnology, University of Perugia, 06123 Perugia; Gotoda, Hiroshi
2016-05-15
The convective motions within a solution of a photochromic spiro-oxazine being irradiated by UV light only on the bottom part of its volume give rise to aperiodic spectrophotometric dynamics. In this paper, we study three nonlinear properties of the aperiodic time series: permutation entropy, short-term predictability and long-term unpredictability, and the degree distribution of the visibility graph networks. After ascertaining the extracted chaotic features, we show how the aperiodic time series can be exploited to implement all the fundamental two-input binary logic functions (AND, OR, NAND, NOR, XOR, and XNOR) and some basic arithmetic operations (half-adder, full-adder, half-subtractor). This is possible due to the wide range of states a nonlinear system accesses in the course of its evolution. Therefore, the solution of the convective photochemical oscillator results in hardware for chaos computing alternative to conventional complementary metal-oxide semiconductor-based integrated circuits.
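The visibility-graph statistic used here has a simple construction: every sample is a node, and two samples are linked when the straight line between them clears all intermediate samples. A direct O(n^3) sketch of the degree sequence (illustrative NumPy code):

```python
import numpy as np

def visibility_degrees(y):
    """Degree of each node in the natural visibility graph of series y:
    samples i and j see each other when every intermediate sample k
    lies strictly below the chord from (i, y[i]) to (j, y[j])."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    deg = np.zeros(n, dtype=int)
    for i in range(n - 1):
        for j in range(i + 1, n):
            k = np.arange(i + 1, j)
            chord = y[j] + (y[i] - y[j]) * (j - k) / (j - i)
            if np.all(y[k] < chord):
                deg[i] += 1
                deg[j] += 1
    return deg
```

The degree distribution of this graph is the third chaotic signature listed in the abstract.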
Comparative case study between D3 and highcharts on lustre data visualization
NASA Astrophysics Data System (ADS)
ElTayeby, Omar; John, Dwayne; Patel, Pragnesh; Simmerman, Scott
2013-12-01
One of the challenging tasks in visual analytics is to target clustered time-series data sets, since it is important for data analysts to discover patterns changing over time while keeping their focus on particular subsets. In order to leverage humans' ability to quickly perceive these patterns visually, multivariate features should be implemented according to the attributes available. A comparative case study has been done using JavaScript libraries to demonstrate the differences in their capabilities. A web-based application to monitor the Lustre file system for systems administrators and operations teams has been developed using D3 and Highcharts. Lustre file systems are responsible for managing Remote Procedure Calls (RPCs), which include input/output (I/O) requests between clients and Object Storage Targets (OSTs). The objective of this application is to provide time-series visuals of these calls and the storage patterns of users on Kraken, a University of Tennessee High Performance Computing (HPC) resource at Oak Ridge National Laboratory (ORNL).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lieu, Richard
A hierarchy of statistics of increasing sophistication and accuracy is proposed to exploit an interesting and fundamental arithmetic structure in the photon bunching noise of incoherent light of large photon occupation number, with the purpose of suppressing the noise and rendering a more reliable and unbiased measurement of the light intensity. The method does not require any new hardware; rather, it operates at the software level with the help of high-precision computers to reprocess the intensity time series of the incident light to create a new series with smaller bunching noise coherence length. The ultimate accuracy improvement of this method of flux measurement is limited by the timing resolution of the detector and the photon occupation number of the beam (the higher the photon number, the better the performance). The principal application is accuracy improvement in the signal-limited bolometric flux measurement of a radio source.
A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations
Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel; ...
2017-06-01
As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMM^T by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.
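The kernels named in this abstract are easy to see in miniature. The sketch below forms the SpMM product of a sparse symmetric matrix with a block of vectors, plus the two tall-skinny LOBPCG kernels (block inner product and linear combination), using scipy; it illustrates the operations only, not the paper's CSB implementation.

```python
import numpy as np
import scipy.sparse as sp

n, nev = 10_000, 8                        # matrix dimension, block width
A = sp.random(n, n, density=1e-3, format="csr", random_state=0)
A = A + A.T                               # symmetrize, as for a CI Hamiltonian

rng = np.random.default_rng(0)
X = rng.standard_normal((n, nev))         # tall-skinny block of vectors

Y = A @ X                                 # SpMM: one sweep over A updates all vectors
G = X.T @ Y                               # block inner product (nev-by-nev Gram matrix)
Z = X @ rng.standard_normal((nev, nev))   # linear combination of block vectors
```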
Analysis of experimental characteristics of multistage steam-jet electors of steam turbines
NASA Astrophysics Data System (ADS)
Aronson, K. E.; Ryabchikov, A. Yu.; Brodov, Yu. M.; Brezgin, D. V.; Zhelonkin, N. V.; Murmanskii, I. B.
2017-02-01
Analysis of the experimental characteristics of multistage steam-jet ejectors of steam turbines led to a series of questions concerning the physical gas-dynamics model of the flow path of a steam-jet unit and the ejector computation methodology, as well as the operating peculiarities of intercoolers. It was established that the coefficient defining the position of the critical cross-section of the injected flow depends on the characteristics of the "sound tube" zone. The speed of the injected flow within this tube may exceed the speed of sound, while the pressure jumps in the working steam decrease. The characteristics of the "sound tube" define the optimal axial dimensions of the ejector. According to measurement results, the fraction of steam condensing in the first-stage cooler constitutes 70-80% of the steam supplied to the cooler and is almost independent of the air content in the steam. Cooler efficiency depends on the steam pressure, which is set by the operation of the steam-jet unit of the next-stage ejector downstream of the cooler, as well as on the temperature and the condensing water flow. As a rule, the steam in the steam-air mixture supplied to the cooler is superheated with respect to the saturation temperature of the steam in the mixture; this should be taken into account in cooler computations. Long-term operation changes the roughness of the walls of the ejector's mixing chamber, and the influence of this change on the ejector characteristic is similar to that of the back pressure of the steam-jet stage. Up to some roughness value, the injection coefficient of an ejector stage operating in the superlimiting regime hardly changes; after a critical roughness is reached, the ejector switches to the prelimiting operating regime.
Archer, Charles J.; Faraj, Ahmad A.; Inglett, Todd A.; Ratterman, Joseph D.
2012-10-23
Methods, apparatus, and products are disclosed for providing nearest neighbor point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: identifying each link in the global combining network for each compute node of the operational group; designating one of a plurality of point-to-point class routing identifiers for each link such that no compute node in the operational group is connected to two adjacent compute nodes in the operational group with links designated for the same class routing identifiers; and configuring each compute node of the operational group for point-to-point communications with each adjacent compute node in the global combining network through the link between that compute node and that adjacent compute node using that link's designated class routing identifier.
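The constraint described in this patent abstract (no compute node may have two incident links designated with the same class routing identifier) is a proper edge coloring of the tree network. A greedy sketch under illustrative assumptions (node labels, edge ordering, and integer IDs are hypothetical):

```python
from collections import defaultdict

def assign_class_ids(edges):
    """Greedily assign a routing class ID to each link of a tree so that
    no node has two incident links with the same ID (proper edge coloring)."""
    used = defaultdict(set)    # node -> IDs already used at that node
    assignment = {}
    for u, v in edges:
        cid = 0
        while cid in used[u] or cid in used[v]:
            cid += 1
        assignment[(u, v)] = cid
        used[u].add(cid)
        used[v].add(cid)
    return assignment

# A small binary tree rooted at node 0.
tree = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]
print(assign_class_ids(tree))
```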
A Smoothing Technique for the Multifractal Analysis of a Medium Voltage Feeders Electric Current
NASA Astrophysics Data System (ADS)
de Santis, Enrico; Sadeghian, Alireza; Rizzi, Antonello
2017-12-01
This paper presents a data-driven detrending technique for smoothing complex sinusoidal trends out of a real-world electric load time series before applying Multifractal Detrended Fluctuation Analysis (MFDFA). The algorithm, which we call Smoothed Sort and Cut Fourier Detrending (SSC-FD), is based on a suitable smoothing of high-power periodicities operating directly on the Fourier spectrum through a polynomial fitting technique applied to the DFT. The main aim is to disambiguate the characteristic slowly varying periodicities, which can impair the MFDFA analysis, from the residual signal in order to study its correlation properties. The algorithm's performance is evaluated on a simple benchmark consisting of a persistent series with known Hurst exponent and ten superimposed sinusoidal harmonics. Moreover, the behavior of the algorithm parameters is assessed by computing the MFDFA on the well-known sunspot data, whose correlation characteristics are reported in the literature. In both cases, the SSC-FD method eliminates the apparent crossover induced by the synthetic and natural periodicities. Results are compared with some existing detrending methods within the MFDFA paradigm. Finally, a study of the multifractal characteristics of the electric load time series detrended by the SSC-FD algorithm is provided, showing strongly persistent behavior and an appreciable width of the multifractal spectrum, which supports the conclusion that the series has multifractal characteristics.
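A minimal stand-in for this kind of Fourier-domain detrending is sketched below: fit a smooth polynomial baseline to the log-magnitude DFT, clip bins that tower above it, and invert. The polynomial degree, clipping factor, and test signal are illustrative assumptions; the actual SSC-FD procedure differs in detail.

```python
import numpy as np

def fourier_detrend(x, poly_deg=5, clip_factor=4.0):
    """Clip dominant periodicities down to a smooth spectral baseline."""
    X = np.fft.rfft(x)
    mag = np.abs(X)
    k = np.arange(1, len(mag))                    # skip the DC bin
    coef = np.polyfit(np.log(k), np.log(mag[1:] + 1e-12), poly_deg)
    baseline = np.exp(np.polyval(coef, np.log(k)))
    peaks = mag[1:] > clip_factor * baseline
    X[1:][peaks] *= (clip_factor * baseline[peaks]) / mag[1:][peaks]
    return np.fft.irfft(X, n=len(x))

# Example: correlated noise plus two strong sinusoidal trends.
rng = np.random.default_rng(0)
t = np.arange(4096)
x = np.cumsum(rng.standard_normal(4096)) * 0.05 + np.sin(2 * np.pi * t / 512)
residual = fourier_detrend(x)
```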
ERIC Educational Resources Information Center
Weisgerber, Robert A.
This monograph, first in a series of six, provides the theoretical background and premises underlying the efforts of the research team and two collaborating California school districts to explore ways in which the computer and related technologies can be more fully and effectively used in the instruction of learning disabled students. Contents…
Computing the multifractal spectrum from time series: an algorithmic approach.
Harikrishnan, K P; Misra, R; Ambika, G; Amritkar, R E
2009-12-01
We show that the existing methods for computing the f(alpha) spectrum from a time series can be improved by using a new algorithmic scheme. The scheme relies on the basic idea that the smooth convex profile of a typical f(alpha) spectrum can be fitted with an analytic function involving a set of four independent parameters. While the standard existing schemes [P. Grassberger et al., J. Stat. Phys. 51, 135 (1988); A. Chhabra and R. V. Jensen, Phys. Rev. Lett. 62, 1327 (1989)] generally compute only an incomplete f(alpha) spectrum (usually the top portion), we show that this can be overcome by an algorithmic approach, which is automated to compute the D(q) and f(alpha) spectra from a time series for any embedding dimension. The scheme is first tested with the logistic attractor with known f(alpha) curve and subsequently applied to higher-dimensional cases. We also show that the scheme can be effectively adapted for analyzing practical time series involving noise, with examples from two widely different real world systems. Moreover, some preliminary results indicating that the set of four independent parameters may be used as diagnostic measures are also included.
Kuntzelman, Karl; Jack Rhodes, L; Harrington, Lillian N; Miskovic, Vladimir
2018-06-01
There is a broad family of statistical methods for capturing time series regularity, with increasingly widespread adoption by the neuroscientific community. A common feature of these methods is that they permit investigators to quantify the entropy of brain signals - an index of unpredictability/complexity. Despite the proliferation of algorithms for computing entropy from neural time series data there is scant evidence concerning their relative stability and efficiency. Here we evaluated several different algorithmic implementations (sample, fuzzy, dispersion and permutation) of multiscale entropy in terms of their stability across sessions, internal consistency and computational speed, accuracy and precision using a combination of electroencephalogram (EEG) and synthetic 1/ƒ noise signals. Overall, we report fair to excellent internal consistency and longitudinal stability over a one-week period for the majority of entropy estimates, with several caveats. Computational timing estimates suggest distinct advantages for dispersion and permutation entropy over other entropy estimates. Considered alongside the psychometric evidence, we suggest several ways in which researchers can maximize computational resources (without sacrificing reliability), especially when working with high-density M/EEG data or multivoxel BOLD time series signals.
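Of the estimators compared above, permutation entropy is the simplest to state: the Shannon entropy of the distribution of ordinal patterns in the series. A minimal single-scale sketch (the multiscale variants coarse-grain the series first; parameter choices here are illustrative):

```python
import math
from itertools import permutations
import numpy as np

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D series in [0, 1]."""
    patterns = {p: 0 for p in permutations(range(order))}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i : i + order * delay : delay]
        patterns[tuple(np.argsort(window))] += 1   # ordinal pattern of the window
    probs = np.array([c for c in patterns.values() if c > 0], dtype=float) / n
    return -(probs * np.log(probs)).sum() / math.log(math.factorial(order))

rng = np.random.default_rng(0)
print(permutation_entropy(rng.standard_normal(10_000)))        # near 1 for white noise
print(permutation_entropy(np.sin(np.linspace(0, 100, 10_000))))  # much lower for a sine
```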
Spektor, Sergey; Valarezo, Javier; Fliss, Dan M; Gil, Ziv; Cohen, Jose; Goldman, Jose; Umansky, Felix
2005-10-01
To review the surgical approaches, techniques, outcomes, and recurrence rates in a series of 80 olfactory groove meningioma (OGM) patients operated on between 1990 and 2003. Eighty patients underwent 81 OGM surgeries. Tumor diameter varied from 2 to 9 cm (average, 4.6 cm). In 35 surgeries (43.2%), the tumor was removed through bifrontal craniotomy; nine operations (11.1%) were performed through a unilateral subfrontal approach; 18 surgeries (22.2%) were performed through a pterional approach; seven surgeries (8.6%) were carried out using a fronto-orbital craniotomy; and 12 procedures (14.8%) were accomplished via a subcranial approach. Nine patients (11.3%) had undergone surgery previously and had recurrent tumor. Total removal was obtained in 72 patients (90.0%); subtotal removal was achieved in 8 patients (10.0%). Two patients, one with total and one with subtotal removal, had atypical (World Health Organization Grade II) meningiomas, whereas 78 patients had World Health Organization Grade I tumors. There was no operative mortality and no new permanent focal neurological deficit besides anosmia. Twenty-five patients (31.3%) experienced surgery-related complications. There were no recurrences in 75 patients (93.8%) 6 to 164 months (mean, 70.8 mo) after surgery. Three patients (3.8%) were lost to follow-up. In two patients (2.5%) with subtotal removal, the residual evidenced growth on computed tomography and/or magnetic resonance imaging 1 year after surgery. One of them had an atypical meningioma. The second, a multiple meningiomata patient, was operated on twice in this series. A variety of surgical approaches are used for OGM resection. An approach tailored to the tumor's size, location, and extension, combined with modern microsurgical cranial base techniques, allows full OGM removal with minimal permanent morbidity, excellent neurological outcome, and very low recurrence rates.
In aquatic systems, time series of dissolved oxygen (DO) have been used to compute estimates of ecosystem metabolism. Central to this open-water method is the assumption that the DO time series is a Lagrangian specification of the flow field. However, most DO time series are coll...
ERIC Educational Resources Information Center
Moore, John W., Ed.
1987-01-01
Included are two articles related to the use of computers. One activity is a computer exercise in chemical reaction engineering and applied kinetics for undergraduate college students. The second article shows how computer-assisted analysis can be used with reaction rate data. (RH)
IDSP- INTERACTIVE DIGITAL SIGNAL PROCESSOR
NASA Technical Reports Server (NTRS)
Mish, W. H.
1994-01-01
The Interactive Digital Signal Processor, IDSP, consists of a set of time series analysis "operators" based on the various algorithms commonly used for digital signal analysis work. The processing of a digital time series to extract information is usually achieved by the application of a number of fairly standard operations. However, it is often desirable to "experiment" with various operations and combinations of operations to explore their effect on the results. IDSP is designed to provide an interactive and easy-to-use system for this type of digital time series analysis. The IDSP operators can be applied in any sensible order (even recursively), and can be applied to single time series or to simultaneous time series. IDSP is being used extensively to process data obtained from scientific instruments onboard spacecraft. It is also an excellent teaching tool for demonstrating the application of time series operators to artificially-generated signals. IDSP currently includes over 43 standard operators. Processing operators provide for Fourier transformation operations, design and application of digital filters, and Eigenvalue analysis. Additional support operators provide for data editing, display of information, graphical output, and batch operation. User-developed operators can be easily interfaced with the system to provide for expansion and experimentation. Each operator application generates one or more output files from an input file. The processing of a file can involve many operators in a complex application. IDSP maintains historical information as an integral part of each file so that the user can display the operator history of the file at any time during an interactive analysis. IDSP is written in VAX FORTRAN 77 for interactive or batch execution and has been implemented on a DEC VAX-11/780 operating under VMS. The IDSP system generates graphics output for a variety of graphics systems. The program requires the use of Versaplot and Template plotting routines and IMSL Math/Library routines. These software packages are not included in IDSP. The virtual memory requirement for the program is approximately 2.36 MB. The IDSP system was developed in 1982 and was last updated in 1986. Versaplot is a registered trademark of Versatec Inc. Template is a registered trademark of Template Graphics Software Inc. IMSL Math/Library is a registered trademark of IMSL Inc.
User's manual for SEDCALC, a computer program for computation of suspended-sediment discharge
Koltun, G.F.; Gray, John R.; McElhone, T.J.
1994-01-01
Sediment-Record Calculations (SEDCALC), a menu-driven set of interactive computer programs, was developed to facilitate computation of suspended-sediment records. The programs comprising SEDCALC were developed independently in several District offices of the U.S. Geological Survey (USGS) to minimize the intensive labor associated with various aspects of sediment-record computations. SEDCALC operates on suspended-sediment-concentration data stored in American Standard Code for Information Interchange (ASCII) files in a predefined card-image format. Program options within SEDCALC can be used to assist in creating and editing the card-image files, as well as to reformat card-image files to and from formats used by the USGS Water-Quality System. SEDCALC provides options for creating card-image files containing time series of equal-interval suspended-sediment concentrations from 1. digitized suspended-sediment-concentration traces, 2. linear interpolation between log-transformed instantaneous suspended-sediment-concentration data stored at unequal time intervals, and 3. nonlinear interpolation between log-transformed instantaneous suspended-sediment-concentration data stored at unequal time intervals. Suspended-sediment discharge can be computed from the streamflow and suspended-sediment-concentration data or by application of transport relations derived by regressing log-transformed instantaneous streamflows on log-transformed instantaneous suspended-sediment concentrations or discharges. The computed suspended-sediment discharge data are stored in card-image files that can be either directly imported to the USGS Automated Data Processing System or used to generate plots by means of other SEDCALC options.
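Option 2 above, linear interpolation between log-transformed instantaneous concentrations, is compact enough to sketch; the names and sample values below are illustrative, not SEDCALC's interface or formats.

```python
import numpy as np

def log_interp_concentration(t_samples, c_samples, t_grid):
    """Equal-interval concentrations via linear interpolation of
    log-transformed instantaneous values."""
    return np.exp(np.interp(t_grid, t_samples, np.log(c_samples)))

t_obs = np.array([0.0, 3.0, 7.5, 12.0])        # hours, unequal intervals
c_obs = np.array([40.0, 220.0, 90.0, 55.0])    # suspended-sediment concentration, mg/L
hourly = log_interp_concentration(t_obs, c_obs, np.arange(0.0, 12.1, 1.0))
```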
Numerical solution methods for viscoelastic orthotropic materials
NASA Technical Reports Server (NTRS)
Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.
1988-01-01
Numerical solution methods for viscoelastic orthotropic materials, specifically fiber-reinforced composite materials, are examined. The methods include classical lamination theory using time increments, direct solution of the Volterra integral, Zienkiewicz's linear Prony series method, and a new method called the Nonlinear Differential Equation Method (NDEM), which uses a nonlinear Prony series. The criteria used for comparison of the various methods include the stability of the solution technique, time step size stability, computer solution time, and computer memory storage. The Volterra integral allowed the implementation of higher-order solution techniques but had difficulties solving singular and weakly singular compliance functions. The Zienkiewicz solution technique, which requires the viscoelastic response to be modeled by a Prony series, works well for linear viscoelastic isotropic materials and small time steps. The new method, NDEM, uses a modified Prony series which allows nonlinear stress effects to be included and can be used with orthotropic nonlinear viscoelastic materials. The NDEM technique is shown to be accurate and stable for both linear and nonlinear conditions with minimal computer time.
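For reference, a Prony series represents the relaxation modulus as a sum of decaying exponentials. The sketch below evaluates E(t) = E_inf + sum_i E_i exp(-t/tau_i); the three-term coefficients are hypothetical, chosen only to illustrate the form.

```python
import numpy as np

def prony_modulus(t, e_inf, e_i, tau_i):
    """Relaxation modulus E(t) = E_inf + sum_i E_i * exp(-t / tau_i)."""
    t = np.asarray(t, dtype=float)[:, None]
    return e_inf + (np.asarray(e_i) * np.exp(-t / np.asarray(tau_i))).sum(axis=1)

t = np.logspace(-2, 4, 50)                                   # time, s
E = prony_modulus(t, e_inf=2.0, e_i=[1.5, 1.0, 0.5], tau_i=[1.0, 1e2, 1e4])
```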
NASA Astrophysics Data System (ADS)
Gladwin, D.; Stewart, P.; Stewart, J.
2011-02-01
This article addresses the problem of maintaining a stable rectified DC output from the three-phase AC generator in a series-hybrid vehicle powertrain. The series-hybrid prime power source generally comprises an internal combustion (IC) engine driving a three-phase permanent magnet generator whose output is rectified to DC. A recent development has been to control the engine/generator combination by an electronically actuated throttle. This system can be represented as a nonlinear system with significant time delay. Previously, voltage control of the generator output has been achieved by model predictive methods such as the Smith Predictor. These methods rely on the incorporation of an accurate system model and time delay into the control algorithm, with a consequent increase in computational complexity in the real-time controller, and necessarily rely to some extent on the accuracy of the models. Two complementary performance objectives exist for the control system: firstly, to maintain the IC engine at its optimal operating point, and secondly, to supply a stable DC supply to the traction drive inverters. Achievement of these goals minimises the transient energy storage requirements at the DC link, with a consequent reduction in both weight and cost. These objectives imply constant-velocity operation of the IC engine under external load disturbances and changes in both operating conditions and vehicle speed set-points. In order to achieve these objectives, and reduce the complexity of implementation, in this article a controller is designed by the use of Genetic Programming methods in the Simulink modelling environment, with the aim of obtaining a relatively simple controller for the time-delay system which does not rely on the implementation of real-time system models or time-delay approximations in the controller. A methodology is presented to utilise the myriad of existing control blocks in the Simulink libraries to automatically evolve optimal control structures.
Simulation of time series by distorted Gaussian processes
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1977-01-01
A distorted stationary Gaussian process can be used to provide computer-generated imitations of experimental time series. A method of analyzing a source time series and synthesizing an imitation is shown, and an example using X-band radiometer data is given.
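A toy version of the idea, assuming a simple AR(1) Gaussian core and a memoryless tanh distortion; both are illustrative choices, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def distorted_gaussian_series(n, phi=0.9, distort=np.tanh):
    """Stationary Gaussian AR(1) series passed through a memoryless distortion."""
    z = np.empty(n)
    z[0] = rng.standard_normal() / np.sqrt(1.0 - phi**2)   # stationary start
    for i in range(1, n):
        z[i] = phi * z[i - 1] + rng.standard_normal()
    return distort(z / z.std())

y = distorted_gaussian_series(100_000)   # non-Gaussian marginal, Gaussian "core"
```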
Combining neural networks and genetic algorithms for hydrological flow forecasting
NASA Astrophysics Data System (ADS)
Neruda, Roman; Srejber, Jan; Neruda, Martin; Pascenko, Petr
2010-05-01
We present a neural network approach to rainfall-runoff modeling for small river basins based on several time series of hourly measured data. Different neural networks are considered for short-term runoff predictions (one- to six-hour lead times) based on runoff and rainfall data observed in previous time steps. Correlation analysis shows that runoff data, short-term rainfall history, and aggregated API values are the most significant data for the prediction. Neural models of multilayer perceptron and radial basis function networks with different numbers of units are used and compared with more traditional linear time series predictors. Out of a possible 48 hours of relevant history of all the input variables, the most important ones are selected by means of input filters created by a genetic algorithm; a sketch follows this abstract. The genetic algorithm works with a population of binary encoded vectors defining input selection patterns. Standard genetic operators of two-point crossover, random bit-flipping mutation, and tournament selection were used. The evaluation of the objective function of each individual consists of several rounds of building and testing a particular neural network model. The whole procedure is computationally demanding (taking hours to days on a desktop PC), so a high-performance mainframe computer was used for our experiments. Results based on two years' worth of data from the Ploucnice river in Northern Bohemia suggest that the main problems with this approach to modeling are overtraining, which can lead to poor generalization, and the relatively small number of extreme events, which makes it difficult for a model to predict the amplitude of an event. Thus, experiments with both absolute and relative runoff predictions were carried out. In general, the neural models show about a 5 per cent improvement in terms of the efficiency coefficient over linear models. Multilayer perceptrons with one hidden layer trained by the backpropagation algorithm and predicting relative runoff show the best behavior so far. Utilizing the genetically evolved input filter improves performance by another 5 per cent. In the future we would like to continue with experiments in on-line prediction using real-time data from the Smeda River with a 6-hour lead time forecast. Following operational reality, we will focus on classification of the runoffs into flood alert levels, and reformulation of the time series prediction task as a classification problem. The main goal of all this work is to improve the flood warning system operated by the Czech Hydrometeorological Institute.
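The genetic input-filter component described above reduces to a standard bit-mask GA. The sketch below uses the same operators (two-point crossover, bit-flip mutation, tournament selection) with a synthetic stand-in objective; in the paper the objective is the trained network's test performance, and the "relevant lags" here are purely hypothetical.

```python
import random

random.seed(0)
N_INPUTS = 48                                  # hours of candidate input history
RELEVANT = set(range(0, 6)) | {12, 24}         # hypothetical truly useful lags

def fitness(mask):
    """Stand-in objective: reward relevant lags, penalize superfluous inputs."""
    chosen = {i for i, b in enumerate(mask) if b}
    return len(chosen & RELEVANT) - 0.1 * len(chosen - RELEVANT)

def crossover(a, b):
    i, j = sorted(random.sample(range(N_INPUTS), 2))   # two-point crossover
    return a[:i] + b[i:j] + a[j:]

def mutate(mask, p=0.02):
    return [b ^ (random.random() < p) for b in mask]   # random bit flips

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)

pop = [[random.randint(0, 1) for _ in range(N_INPUTS)] for _ in range(40)]
for _ in range(60):                                    # generations
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in pop]
best = max(pop, key=fitness)
print(sorted(i for i, b in enumerate(best) if b))      # selected input lags
```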
NASA Astrophysics Data System (ADS)
Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.
2017-12-01
This paper describes the details of the parallel implementation of the SIMPLE algorithm for numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. The iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors under the condition of the grid model decomposition using fictitious cells. We discuss the specific features for the storage of distributed matrices and implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of matrix storage reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of the multigrid SLAE solver tuning on the general efficiency of the algorithm; the tuning involves the types of the cycles used (V, W, and F), the number of iterations of a smoothing operator, and the number of cells for coarsening. Two ways (direct and indirect) of efficiency evaluation for parallelization of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems with the evaluation of parallelization efficiency by two algorithms. It is shown that the proposed parallel implementation enables efficient computations for the problems on a thousand processors. Based on the results obtained, some general recommendations are made for the optimal tuning of the multigrid solver, as well as for selecting the optimal number of cells per processor.
Synchronizing compute node time bases in a parallel computer
Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip
2015-01-27
Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.
Institute for scientific computing research;fiscal year 1999 annual report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keyes, D
2000-03-28
Large-scale scientific computation, and all of the disciplines that support it and help to validate it, have been placed at the focus of Lawrence Livermore National Laboratory by the Accelerated Strategic Computing Initiative (ASCI). The Laboratory operates the computer with the highest peak performance in the world and has undertaken some of the largest and most compute-intensive simulations ever performed. Computers at the architectural extremes, however, are notoriously difficult to use efficiently. Even such successes as the Laboratory's two Bell Prizes awarded in November 1999 only emphasize the need for much better ways of interacting with the results of large-scale simulations. Advances in scientific computing research have, therefore, never been more vital to the core missions of the Laboratory than at present. Computational science is evolving so rapidly along every one of its research fronts that to remain on the leading edge, the Laboratory must engage researchers at many academic centers of excellence. In FY 1999, the Institute for Scientific Computing Research (ISCR) expanded the Laboratory's bridge to the academic community in the form of collaborative subcontracts, visiting faculty, student internships, a workshop, and a very active seminar series. ISCR research participants are integrated almost seamlessly with the Laboratory's Center for Applied Scientific Computing (CASC), which, in turn, addresses computational challenges arising throughout the Laboratory. Administratively, the ISCR flourishes under the Laboratory's University Relations Program (URP). Together with the other four Institutes of the URP, it must navigate a course that allows the Laboratory to benefit from academic exchanges while preserving national security. Although FY 1999 brought more than its share of challenges to the operation of an academic-like research enterprise within the context of a national security laboratory, the results declare the challenges well met and well worth the continued effort. A change of administration for the ISCR occurred during FY 1999. Acting Director John Fitzgerald retired from LLNL in August after 35 years of service, including the last two at the helm of the ISCR. David Keyes, who has been a regular visitor in conjunction with ASCI scalable algorithms research since October 1997, overlapped with John for three months and serves half-time as the new Acting Director.
Legendre modified moments for Euler's constant
NASA Astrophysics Data System (ADS)
Prévost, Marc
2008-10-01
Polynomial moments are often used for the computation of Gauss quadrature to stabilize the numerical calculation of the orthogonal polynomials, see [W. Gautschi, Computational aspects of orthogonal polynomials, in: P. Nevai (Ed.), Orthogonal Polynomials-Theory and Practice, NATO ASI Series, Series C: Mathematical and Physical Sciences, vol. 294. Kluwer, Dordrecht, 1990, pp. 181-216 [6]; W. Gautschi, On the sensitivity of orthogonal polynomials to perturbations in the moments, Numer. Math. 48(4) (1986) 369-382 [5]; W. Gautschi, On generating orthogonal polynomials, SIAM J. Sci. Statist. Comput. 3(3) (1982) 289-317 [4
Models of Computer Use in School Settings. Technical Report Series, Report No. 84.2.2.
ERIC Educational Resources Information Center
Sherwood, Robert D.
Designed to focus on student learning and to illustrate techniques that might be used with computers to facilitate that process, this paper discusses five types of computer use in educational settings: (1) learning ABOUT computers; (2) learning WITH computers; (3) learning FROM computers; (4) learning ABOUT THINKING with computers; and (5)…
The Convergence Problems of Eigenfunction Expansions of Elliptic Differential Operators
NASA Astrophysics Data System (ADS)
Ahmedov, Anvarjon
2018-03-01
In the present research we investigate problems concerning the almost everywhere convergence of multiple Fourier series summed over the elliptic levels in the Liouville classes. Sufficient conditions for almost everywhere convergence, among the most difficult problems in harmonic analysis, are obtained. Methods of approximation by multiple Fourier series summed over elliptic levels are applied to obtain suitable estimates for the maximal operator of the spectral decompositions. Obtaining such estimates involves very complicated calculations which depend on the functional structure of the classes of functions. The main idea in proving the almost everywhere convergence of the eigenfunction expansions in the interpolation spaces is to estimate the maximal operator of the partial sums in the boundary classes and to apply the interpolation theorem for families of linear operators. In the present work the maximal operator of the elliptic partial sums is estimated in the interpolation classes of Liouville, and the almost everywhere convergence of multiple Fourier series by elliptic summation methods is established. Considering multiple Fourier series as eigenfunction expansions of differential operators helps to translate the functional properties (for example, smoothness) of the Liouville classes into conditions on the Fourier coefficients of the functions being expanded. The sufficient conditions for convergence of the multiple Fourier series of functions from Liouville classes are obtained in terms of smoothness and dimension. Such results are highly effective in solving boundary problems with periodic boundary conditions occurring in the spectral theory of differential operators. The investigation of multiple Fourier series by modern methods of harmonic analysis incorporates wide use of methods from functional analysis, mathematical physics, modern operator theory, and spectral decomposition. A new method for the best approximation of a square-integrable function by multiple Fourier series summed over the elliptic levels is established. Using the best approximation, the Lebesgue constant corresponding to the elliptic partial sums is estimated; the latter is applied to obtain an estimate for the maximal operator in the classes of Liouville.
Computer Series, 13: Bits and Pieces, 11.
ERIC Educational Resources Information Center
Moore, John W., Ed.
1982-01-01
Describes computer programs (with ordering information) on various topics including, among others, modeling of thermodynamics and economics of solar energy, radioactive decay simulation, stoichiometry drill/tutorial (in Spanish), computer-generated safety quiz, medical chemistry computer game, medical biochemistry question bank, generation of…
NASA Astrophysics Data System (ADS)
Fadly Nurullah Rasedee, Ahmad; Ahmedov, Anvarjon; Sathar, Mohammad Hasan Abdul
2017-09-01
The mathematical models of heat and mass transfer processes on ball-type solids can be solved using the theory of convergence of Fourier-Laplace series on the unit sphere. Many interesting models have divergent Fourier-Laplace series, which can be made convergent by introducing the Riesz and Cesaro means of the series. Partial sums of the Fourier-Laplace series summed by the Riesz method are integral operators with a kernel known as the Riesz means of the spectral function. In order to obtain convergence results for the partial sums by Riesz means, we need to know the asymptotic behavior of this kernel. In this work, estimates for the Riesz means of the spectral function of the Laplace-Beltrami operator are obtained which guarantee the convergence of the Fourier-Laplace series by the Riesz method.
Linear Models for Systematics and Nuisances
NASA Astrophysics Data System (ADS)
Luger, Rodrigo; Foreman-Mackey, Daniel; Hogg, David W.
2017-12-01
The target of many astronomical studies is the recovery of tiny astrophysical signals living in a sea of uninteresting (but usually dominant) noise. In many contexts (i.e., stellar time-series, or high-contrast imaging, or stellar spectroscopy), there are structured components in this noise caused by systematic effects in the astronomical source, the atmosphere, the telescope, or the detector. More often than not, evaluation of the true physical model for these nuisances is computationally intractable and dependent on too many (unknown) parameters to allow rigorous probabilistic inference. Sometimes, housekeeping data---and often the science data themselves---can be used as predictors of the systematic noise. Linear combinations of simple functions of these predictors are often used as computationally tractable models that can capture the nuisances. These models can be used to fit and subtract systematics prior to investigation of the signals of interest, or they can be used in a simultaneous fit of the systematics and the signals. In this Note, we show that if a Gaussian prior is placed on the weights of the linear components, the weights can be marginalized out with an operation in pure linear algebra, which can (often) be made fast. We illustrate this model by demonstrating the applicability of a linear model for the non-linear systematics in K2 time-series data, where the dominant noise source for many stars is spacecraft motion and variability.
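The marginalization this Note describes reduces, for the posterior mean of the weights, to generalized least squares with a ridge-like prior term. A sketch under simplifying assumptions (diagonal data covariance C and diagonal prior covariance Lambda; function and variable names are illustrative):

```python
import numpy as np

def marginalized_systematics_fit(A, y, c_diag, lambda_diag):
    """Posterior-mean weights for y = A w + noise, noise ~ N(0, diag(c_diag)),
    prior w ~ N(0, diag(lambda_diag)):
    w_hat = (A^T C^-1 A + Lambda^-1)^-1 A^T C^-1 y."""
    cinv_a = A / c_diag[:, None]
    m = A.T @ cinv_a + np.diag(1.0 / lambda_diag)
    w_hat = np.linalg.solve(m, cinv_a.T @ y)
    return w_hat, y - A @ w_hat          # weights and systematics-subtracted residual

# Toy example: a polynomial basis standing in for housekeeping predictors.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
A = np.vander(t, 4)
y = A @ np.array([0.3, -0.2, 0.1, 1.0]) + 0.01 * rng.standard_normal(500)
w_hat, residual = marginalized_systematics_fit(A, y, np.full(500, 1e-4), np.full(4, 10.0))
```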
Neural mechanisms of planning: A computational analysis using event-related fMRI
Fincham, Jon M.; Carter, Cameron S.; van Veen, Vincent; Stenger, V. Andrew; Anderson, John R.
2002-01-01
To investigate the neural mechanisms of planning, we used a novel adaptation of the Tower of Hanoi (TOH) task and event-related functional MRI. Participants were trained in applying a specific strategy to an isomorph of the five-disk TOH task. After training, participants solved novel problems during event-related functional MRI. A computational cognitive model of the task was used to generate a reference time series representing the expected blood oxygen level-dependent response in brain areas involved in the manipulation and planning of goals. This time series was used as one term within a general linear modeling framework to identify brain areas in which the time course of activity varied as a function of goal-processing events. Two distinct time courses of activation were identified, one in which activation varied parametrically with goal-processing operations, and the other in which activation became pronounced only during goal-processing intensive trials. Regions showing the parametric relationship comprised a frontoparietal system and include right dorsolateral prefrontal cortex [Brodmann's area (BA 9)], bilateral parietal (BA 40/7), and bilateral premotor (BA 6) areas. Regions preferentially engaged only during goal-intensive processing include left inferior frontal gyrus (BA 44). The implications of these results for the current model, as well as for our understanding of the neural mechanisms of planning and functional specialization of the prefrontal cortex, are discussed. PMID:11880658
The InSAR Scientific Computing Environment
NASA Technical Reports Server (NTRS)
Rosen, Paul A.; Gurrola, Eric; Sacco, Gian Franco; Zebker, Howard
2012-01-01
We have developed a flexible and extensible Interferometric SAR (InSAR) Scientific Computing Environment (ISCE) for geodetic image processing. ISCE was designed from the ground up as a geophysics community tool for generating stacks of interferograms that lend themselves to various forms of time-series analysis, with attention paid to accuracy, extensibility, and modularity. The framework is Python-based, with code elements rigorously componentized by separating input/output operations from the processing engines. This allows greater flexibility and extensibility in the data models, and creates algorithmic code that is less susceptible to unnecessary modification when new data types and sensors are available. In addition, the components support provenance and checkpointing to facilitate reprocessing and algorithm exploration. The algorithms, based on legacy processing codes, have been adapted to assume a common reference track approach for all images acquired from nearby orbits, simplifying and systematizing the geometry for time-series analysis. The framework is designed to easily allow user contributions, and is distributed for free use by researchers. ISCE can process data from the ALOS, ERS, EnviSAT, Cosmo-SkyMed, RadarSAT-1, RadarSAT-2, and TerraSAR-X platforms, starting from Level-0 or Level-1 as provided from the data source, and going as far as Level-3 geocoded deformation products. With its flexible design, it can be extended with raw/metadata parsers to enable it to work with radar data from other platforms.
Lalys, Florent; Riffaud, Laurent; Bouget, David; Jannin, Pierre
2012-01-01
The need for better integration of the new generation of Computer-Assisted Surgical (CAS) systems has recently been emphasized. One necessity for achieving this objective is to retrieve data from the Operating Room (OR) with different sensors, then to derive models from these data. Recently, the use of videos from cameras in the OR has demonstrated its efficiency. In this paper, we propose a framework to assist in the development of systems for the automatic recognition of high-level surgical tasks using microscope video analysis, and we validate its use on cataract procedures. The idea is to combine state-of-the-art computer vision techniques with time series analysis. The first step of the framework is the definition of several visual cues for extracting semantic information, thereby characterizing each frame of the video. Five image-based classifiers were implemented for this purpose. A pupil segmentation step was also applied for dedicated visual cue detection. Time series classification algorithms were then applied to model the time-varying data; Dynamic Time Warping (DTW) and Hidden Markov Models (HMM) were tested. This combination draws on the advantages of each method for a better understanding of the problem. The framework was validated through various studies: six binary visual cues were chosen along with 12 phases to detect, obtaining accuracies of 94%. PMID:22203700
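Of the two time series classifiers mentioned, DTW is the more compact to show. A minimal textbook implementation (not the paper's code), illustrating how sequences of different lengths are aligned:

```python
import numpy as np

def dtw_distance(a, b):
    """O(len(a)*len(b)) dynamic time warping distance between 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw_distance([0, 1, 2, 3, 2, 1], [0, 0, 1, 2, 3, 2, 1]))  # small despite the shift
```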
Reference manual for generation and analysis of Habitat Time Series: version II
Milhous, Robert T.; Bartholow, John M.; Updike, Marlys A.; Moos, Alan R.
1990-01-01
The selection of an instream flow requirement for water resource management often requires a review of how the physical habitat changes through time. This review is referred to as "time series analysis." The Time Series Library (TSLIB) is a group of programs to enter, transform, analyze, and display time series data for use in stream habitat assessment. A time series may be defined as a sequence of data recorded or calculated over time. Examples might be historical monthly flow, predicted monthly weighted usable area, daily electrical power generation, annual irrigation diversion, and so forth. The time series can be analyzed, both descriptively and analytically, to understand the importance of the variation in the events over time. This is especially useful in the development of instream flow needs based on habitat availability. The TSLIB group of programs assumes that you have an adequate study plan to guide you in your analysis. You need to already have knowledge about such things as time period and time step, species and life stages to consider, and appropriate comparisons or statistics to be produced and displayed or tabulated. Knowing your destination, you must first evaluate whether TSLIB can get you there. Remember, data are not answers. This publication is a reference manual for TSLIB and is intended to guide the process of using the various programs in TSLIB. This manual is essentially limited to the hands-on use of the various programs. A TSLIB user interface program (called RTSM) has been developed to provide an integrated working environment in which the user has a brief on-line description of each TSLIB program along with the capability to run the TSLIB programs from within the interface. For information on the RTSM program, refer to Appendix F. Before applying the computer models described herein, it is recommended that the user enroll in the short course "Problem Solving with the Instream Flow Incremental Methodology (IFIM)." This course is offered by the Aquatic Systems Branch of the National Ecology Research Center. For more information about the TSLIB software, refer to the Memorandum of Understanding. Chapter 1 provides a brief introduction to the Instream Flow Incremental Methodology and TSLIB. Other chapters in this manual provide information on the different aspects of using the models: (2) acquisition, entry, manipulation, and listing of streamflow data; (3) entry, manipulation, and listing of the habitat-versus-streamflow function; (4) transferring streamflow data; (5) water resources systems analysis; (6) generation and analysis of daily streamflow and habitat values; (7) generation of the time series of monthly habitats; (8) manipulation, analysis, and display of monthly time series data; and (9) generation, analysis, and display of annual time series data. Each chapter includes documentation for the programs therein, with at least one page of information for each program, including a program description, instructions for running the program, and sample output. The appendixes contain the following: (A) sample file formats; (B) descriptions of default filenames; (C) alphabetical summary of batch-procedure files; (D) installing and running TSLIB on a microcomputer; (E) running TSLIB on a CDC Cyber computer; (F) using the TSLIB user interface program (RTSM); and (G) running WATSTORE on the USGS Amdahl mainframe computer.
The number for this version of TSLIB--Version II-- is somewhat arbitrary, as the TSLIB programs were collected into a library some time ago; but operators tended to use and manage them as individual programs. Therefore, we will consider the group of programs from the past that were only on the CDC Cyber computer as Version 0; the programs from the past that were on both the Cyber and the IBM-compatible microcomputer as Version I; and the programs contained in this reference manual as Version II.
ERIC Educational Resources Information Center
Sween, Joyce; Campbell, Donald T.
Computational formulae for the following three tests of significance, useful in the interrupted time series design, are given: (1) a "t" test (Mood, 1950) for the significance of the first post-change observation from a value predicted by a linear fit of the pre-change observations; (2) an "F" test (Walker and Lev, 1953) of the…
ERIC Educational Resources Information Center
Brandenburg, Sara A., Ed.; Vanderheiden, Gregg C., Ed.
One of a series of three resource guides concerned with communication, control, and computer access for disabled and elderly individuals, the directory focuses on switches and environmental controls. The book's three chapters each cover products with the same primary function. Cross reference indexes allow access to listings of products by…
Characteristic research on Hong Kong "I learned" series computer textbooks
NASA Astrophysics Data System (ADS)
Hu, Jinyan; Liu, Zhongxia; Li, Yuanyuan; Lu, Jianheng; Zhang, Lili
2011-06-01
Currently, the construction of information technology textbooks for primary and middle schools is an important part of the information technology curriculum reform. By analyzing and distilling the characteristics of the high-quality Hong Kong textbook series "I Learned: Elementary School Computer Cognitive Curriculum," this article aims to offer inspiration and reference for the construction and development of information technology teaching materials in mainland China.
ERIC Educational Resources Information Center
Brandenburg, Sara A., Ed.; Vanderheiden, Gregg C., Ed.
One of a series of three resource guides concerned with communication, control, and computer access for disabled and elderly individuals, the directory focuses on communication aids. The book's six chapters each cover products with the same primary function. Cross reference indexes allow access to listings of products by function, input/output…
1985-09-01
POR-2037 (WT-2037)(EX), Volume 1, Extracted Version. OPERATION DOMINIC, FISH BOWL SERIES, Project Officer's Report—Project 8A.3, Close-In Thermal and X-ray Vulnerability Measurements—Shots Blue Gill and King Fish. F. D. Adams, Project Officer, Flight Dynamics Laboratory, Wright... This report is an extracted version of POR-2037 (WT-2037), Volume 1, OPERATION DOMINIC, Fish Bowl Series, Project 8A.3. Approved for public release; distribution is unlimited.
Computer Corner: Computer Graphics for the Vibrating String.
ERIC Educational Resources Information Center
Smith, David A.; Cunningham, R. Stephen
1986-01-01
Computer graphics are used to display the sum of the first few terms of the series solution for the problem of the vibrating string frequently discussed in introductory courses on differential equations. (MNS)
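A sketch of the partial sums being plotted, for a string plucked to unit height at its midpoint (unit length and wave speed, with the standard triangular-pluck Fourier coefficients; all parameter choices here are illustrative):

```python
import numpy as np

def plucked_string(x, t, n_terms=5, L=1.0, c=1.0):
    """Partial sum of u(x,t) = sum_n b_n sin(n pi x / L) cos(n pi c t / L)
    for a string plucked to unit height at its midpoint."""
    u = np.zeros_like(x, dtype=float)
    for n in range(1, n_terms + 1):
        b_n = 8.0 / (np.pi**2 * n**2) * np.sin(n * np.pi / 2.0)   # zero for even n
        u += b_n * np.sin(n * np.pi * x / L) * np.cos(n * np.pi * c * t / L)
    return u

x = np.linspace(0.0, 1.0, 201)
snapshot = plucked_string(x, t=0.25)   # one frame; loop over t to animate
```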
The revised solar array synthesis computer program
NASA Technical Reports Server (NTRS)
1970-01-01
The Revised Solar Array Synthesis Computer Program is described. It is a general-purpose program which computes solar array output characteristics while accounting for the effects of temperature, incidence angle, charged-particle irradiation, and other degradation effects on various solar array configurations in either circular or elliptical orbits. Array configurations may consist of up to 75 solar cell panels arranged in any series-parallel combination not exceeding three series-connected panels in a parallel string and no more than 25 parallel strings in an array. Up to 100 separate solar array current-voltage characteristics, corresponding to 100 equal-time increments during the sunlight illuminated portion of an orbit or any 100 user-specified combinations of incidence angle and temperature, can be computed and printed out during one complete computer execution. Individual panel incidence angles may be computed and printed out at the user's option.
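The series-parallel combination rule the program applies can be shown in miniature: series panels share a current and add voltages, parallel strings share a voltage and add currents. The sketch below uses a hypothetical panel I-V curve and the 3-series-by-25-parallel limit case; none of the numbers come from the program itself.

```python
import numpy as np

def panel_v(i, v_oc=20.0, i_sc=3.0):
    """Hypothetical panel I-V curve: voltage falls from v_oc to 0 as I -> i_sc."""
    return v_oc * (1.0 - (i / i_sc) ** 5)

i = np.linspace(0.0, 3.0, 301)          # per-string current grid, A
v_string = 3 * panel_v(i)               # 3 panels in series: voltages add at equal current

# 25 identical strings in parallel: currents add at equal voltage.
v_grid = np.linspace(0.0, v_string.max(), 301)
i_array = 25 * np.interp(v_grid, v_string[::-1], i[::-1])   # invert V(I) to I(V)
p_array = v_grid * i_array              # array power vs. voltage
```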
UDATE1: A computer program for the calculation of uranium-series isotopic ages
Rosenbauer, R.J.
1991-01-01
UDATE1 is a FORTRAN-77 program with an interface for an Apple Macintosh computer that calculates isotope activities from measured count rates to date geologic materials by uranium-series disequilibria. Dates on pure samples can be determined directly by the accumulation of 230Th from 234U and of 231Pa from 235U. Dates for samples contaminated by clays containing abundant natural thorium can be corrected by the program using various mixing models. Input to the program and file management are made simple and user friendly by a series of Macintosh modal dialog boxes. © 1991.
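For the simplest dating case, assuming no initial 230Th and 234U/238U in secular equilibrium, the 230Th/234U activity ratio R grows toward unity and inverts to an age in closed form. The sketch below uses an approximate half-life value; UDATE1 itself handles the general case and the detrital-thorium corrections.

```python
import math

LAMBDA_230 = math.log(2) / 75_690.0   # 230Th decay constant, 1/yr (half-life ~75.69 kyr)

def th230_u234_age(activity_ratio):
    """Age from R = (230Th/234U) activity, R = 1 - exp(-lambda*t), valid when
    initial 230Th is zero and 234U/238U is in secular equilibrium."""
    if not 0.0 <= activity_ratio < 1.0:
        raise ValueError("activity ratio must lie in [0, 1) for a finite age")
    return -math.log(1.0 - activity_ratio) / LAMBDA_230

print(th230_u234_age(0.5))   # one 230Th half-life, ~75,700 years
```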
Adaptive voting computer system
NASA Technical Reports Server (NTRS)
Koczela, L. J.; Wilgus, D. S. (Inventor)
1974-01-01
A computer system is reported that uses adaptive voting to tolerate failures and operates in a fail-operational, fail-safe manner. Each of four computers is individually connected to one of four external input/output (I/O) busses which interface with external subsystems. Each computer is connected to receive input data and commands from the other three computers and to furnish output data commands to the other three computers. An adaptive control apparatus including a voter-comparator-switch (VCS) is provided for each computer to receive signals from each of the computers and permits adaptive voting among the computers to permit the fail-operational, fail-safe operation.
Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Doug; Ziegler, Andrew C.
2010-01-01
Over the last decade, use of a method for computing suspended-sediment concentrations and loads using turbidity sensors—primarily nephelometry, but also optical backscatter—has proliferated. Because an in-situ turbidity sensor is capable of measuring turbidity instantaneously, a turbidity time series can be recorded and related directly to time-varying suspended-sediment concentrations. Depending on the suspended-sediment characteristics of the measurement site, this method can be more reliable and, in many cases, a more accurate means for computing suspended-sediment concentrations and loads than traditional U.S. Geological Survey computational methods. Guidelines and procedures for estimating time series of suspended-sediment concentration and loading as a function of turbidity and streamflow data have been published in a U.S. Geological Survey Techniques and Methods Report, Book 3, Chapter C4. This paper is a summary of these guidelines and discusses some of the concepts, statistical procedures, and techniques used to maintain a multiyear suspended-sediment time series.
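A minimal sketch of the kind of turbidity rating such guidelines describe: ordinary least squares on log-transformed pairs, with hypothetical data. The published method adds steps (for example, bias correction on retransformation) omitted here for brevity.

```python
import numpy as np

def fit_ssc_turbidity(turbidity, ssc):
    """Fit log10(SSC) = b0 + b1*log10(turbidity) by ordinary least squares."""
    X = np.column_stack([np.ones_like(turbidity), np.log10(turbidity)])
    b, *_ = np.linalg.lstsq(X, np.log10(ssc), rcond=None)
    return b                      # intercept and slope in log10 space

# Hypothetical paired samples: turbidity (FNU) and SSC (mg/L).
turb = np.array([12, 25, 40, 80, 150, 300], dtype=float)
ssc = np.array([15, 35, 60, 130, 260, 540], dtype=float)
b0, b1 = fit_ssc_turbidity(turb, ssc)
predicted = 10 ** (b0 + b1 * np.log10(turb))   # retransformation bias correction omitted
```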
Non-unitary probabilistic quantum computing circuit and method
NASA Technical Reports Server (NTRS)
Williams, Colin P. (Inventor); Gingrich, Robert M. (Inventor)
2009-01-01
A quantum circuit performing quantum computation in a quantum computer. A chosen transformation of an initial n-qubit state is probabilistically obtained. The circuit comprises a unitary quantum operator obtained from a non-unitary quantum operator, operating on an n-qubit state and an ancilla state. When operation on the ancilla state provides a success condition, computation is stopped. When operation on the ancilla state provides a failure condition, computation is performed again on the ancilla state and the n-qubit state obtained in the previous computation, until a success condition is obtained.
Blocksome, Michael A [Rochester, MN
2011-12-20
Methods, apparatus, and products are disclosed for determining when a set of compute nodes participating in a barrier operation on a parallel computer are ready to exit the barrier operation that includes, for each compute node in the set: initializing a barrier counter with no counter underflow interrupt; configuring, upon entering the barrier operation, the barrier counter with a value in dependence upon a number of compute nodes in the set; broadcasting, by a DMA engine on the compute node to each of the other compute nodes upon entering the barrier operation, a barrier control packet; receiving, by the DMA engine from each of the other compute nodes, a barrier control packet; modifying, by the DMA engine, the value for the barrier counter in dependence upon each of the received barrier control packets; exiting the barrier operation if the value for the barrier counter matches the exit value.
Runtime optimization of an application executing on a parallel computer
Faraj, Daniel A; Smith, Brian E
2014-11-18
Identifying a collective operation within an application executing on a parallel computer; identifying a call site of the collective operation; determining whether the collective operation is root-based; if the collective operation is not root-based: establishing a tuning session and executing the collective operation in the tuning session; if the collective operation is root-based, determining whether all compute nodes executing the application identified the collective operation at the same call site; if all compute nodes identified the collective operation at the same call site, establishing a tuning session and executing the collective operation in the tuning session; and if all compute nodes executing the application did not identify the collective operation at the same call site, executing the collective operation without establishing a tuning session.
Runtime optimization of an application executing on a parallel computer
Faraj, Daniel A.; Smith, Brian E.
2013-01-29
Identifying a collective operation within an application executing on a parallel computer; identifying a call site of the collective operation; determining whether the collective operation is root-based; if the collective operation is not root-based: establishing a tuning session and executing the collective operation in the tuning session; if the collective operation is root-based, determining whether all compute nodes executing the application identified the collective operation at the same call site; if all compute nodes identified the collective operation at the same call site, establishing a tuning session and executing the collective operation in the tuning session; and if all compute nodes executing the application did not identify the collective operation at the same call site, executing the collective operation without establishing a tuning session.
Ensemble Bayesian forecasting system Part I: Theory and algorithms
NASA Astrophysics Data System (ADS)
Herr, Henry D.; Krzysztofowicz, Roman
2015-05-01
The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of predictand, possesses a Bayesian coherence property, constitutes a random sample of the predictand, and has an acceptable sampling error, which makes it suitable for rational decision making under uncertainty.
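A schematic Monte Carlo skeleton of the three-component pipeline described above, with toy stand-ins for every part (a lognormal sampler for the IEF, a linear reservoir for the hydrologic model, additive Gaussian noise for the HUP); everything here is an assumption for illustration, not the published system:

    // IEF -> deterministic hydrologic model -> HUP, member by member.
    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        std::mt19937 rng(7);
        std::lognormal_distribution<double> precip(1.0, 0.5);  // IEF stand-in
        std::normal_distribution<double> hydUnc(0.0, 0.1);     // HUP stand-in

        const int members = 500, steps = 10;
        std::vector<double> finalStage(members);
        for (int m = 0; m < members; ++m) {
            double storage = 10.0, stage = 0.0;
            for (int t = 0; t < steps; ++t) {
                double p = precip(rng);       // one input ensemble member
                storage += p;                 // toy deterministic model:
                double q = 0.2 * storage;     // linear-reservoir outflow
                storage -= q;
                stage = std::log1p(q);        // toy stage-discharge relation
            }
            // Stochastic transform of model output into the predictand.
            finalStage[m] = stage + hydUnc(rng);
        }
        double mean = 0;
        for (double s : finalStage) mean += s;
        std::printf("ensemble mean final stage: %.3f (n=%d)\n", mean / members, members);
        return 0;
    }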
NASA Astrophysics Data System (ADS)
Pan, Kok-Kwei
We have generalized the linked cluster expansion method to solve a broader range of many-body quantum systems, such as quantum spin systems with crystal-field potentials and the Hubbard model. The technique sums up all connected diagrams to a certain order of the perturbative Hamiltonian. The modified multiple-site Wick reduction theorem and the simple tau dependence of the standard basis operators have been used to facilitate the evaluation of the integration procedures in the perturbation expansion. Computational methods are developed to calculate all terms in the series expansion. As a first example, the perturbation series expansion of thermodynamic quantities of the single-band Hubbard model has been obtained using a linked cluster series expansion technique. We have made corrections to all previous results of several papers (up to fourth order). The behaviors of the three-dimensional simple cubic and body-centered cubic systems have been discussed from the qualitative analysis of the perturbation series up to fourth order. We have also calculated the sixth-order perturbation series of this model. As a second example, we present the magnetic properties of the spin-one Heisenberg model with arbitrary crystal-field potential using a linked cluster series expansion. The calculation of the thermodynamic properties using this method covers the whole range of temperature, in both magnetically ordered and disordered phases. The series for the susceptibility and magnetization have been obtained up to fourth order for this model. The method sums up all perturbation terms to a certain order and estimates the result using a well-developed and highly successful extrapolation method (the standard ratio method). The dependence of the critical temperature on the crystal-field potential and the magnetization as a function of temperature and crystal-field potential are shown. The critical behaviors at zero temperature are also shown. The range of the crystal-field potential for Ni(2+) compounds is roughly estimated based on this model using known experimental results.
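For concreteness, a small sketch of the standard ratio method mentioned above, applied to a test series whose singularity is known exactly so the extrapolation can be checked; the physical series in the work are of course not of this closed form. For f(x) = (1 - x/x_c)^(-gamma), the coefficient ratios obey r_n = a_n/a_{n-1} = (1/x_c)(1 + (gamma-1)/n), so eliminating the 1/n term between successive ratios recovers 1/x_c.

    #include <cstdio>
    #include <vector>

    int main() {
        const double xc = 2.0, gamma = 1.25;   // test-series parameters
        const int N = 12;
        std::vector<double> a(N + 1);
        a[0] = 1.0;
        for (int n = 1; n <= N; ++n)           // coefficients of (1 - x/xc)^(-gamma)
            a[n] = a[n - 1] * (gamma + n - 1) / (n * xc);

        // n*r_n - (n-1)*r_{n-1} removes the O(1/n) correction exactly here.
        for (int n = 2; n <= N; ++n) {
            double rn = a[n] / a[n - 1], rm = a[n - 1] / a[n - 2];
            double inv_xc = n * rn - (n - 1) * rm;
            std::printf("n=%2d  ratio=%.5f  extrapolated 1/xc=%.5f\n", n, rn, inv_xc);
        }
        return 0;  // extrapolated values converge to 1/xc = 0.5
    }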
Castro-Castro, Julián
2014-01-01
The purpose of this study was to assess the value of intraoperative cone-beam CT (O-arm) and stereotactic navigation for the insertion of anterior odontoid screws. This was a retrospective review of patients receiving surgical treatment for traumatic odontoid fractures during a period of 18 months. Procedures were guided with O-arm assistance in all cases. The screw position was verified with an intraoperative CT scan. Intraoperative and clinical parameters were evaluated. Odontoid fracture fusion was assessed on postoperative CT scans obtained at 3 and 6 months' follow-up. Five patients were included in this series; 4 patients (80%) were male. Mean age was 63.6 years (range 35-83 years). All fractures were acute type II odontoid fractures. The mean operative time was 116 minutes (range 60-160 minutes). Successful screw placement, judged by intraoperative computed tomography, was attained in all 5 patients (100%). The average preoperative and postoperative times were 8.6 days (range 2-22 days) and 4.2 days (range 3-7 days), respectively. No neurological deterioration occurred after surgery. The rate of bone fusion was 80% (4/5). Although this initial study evaluated a small number of patients, anterior odontoid screw fixation utilizing the O-arm appears to be safe and accurate. This system allows immediate CT imaging in the operating room to verify screw position. Copyright © 2014 Sociedad Española de Neurocirugía. Published by Elsevier España. All rights reserved.
NASA Astrophysics Data System (ADS)
Oza, D. H.; Jones, T. L.; Feiertag, R.; Samii, M. V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.
The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) commissioned Applied Technology Associates, Incorporated, to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system on a Disk Operating System (DOS)-based personal computer (PC) as a prototype system for sequential orbit determination of spacecraft. This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite (TDRS) System (TDRSS) user spacecraft, Landsat-4, obtained using RTOD/E, operating on a PC, with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the May 18-24, 1992, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. During this period, there were two separate orbit-adjust maneuvers on one of the TDRSS spacecraft (TDRS-East) and one small orbit-adjust maneuver for Landsat-4. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances and the first measurement residuals for the sequential case) of solutions produced by the batch and sequential methods. The forward-filtered RTOD/E orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were generally less than 30 meters after the filter had reached steady state.
NASA Technical Reports Server (NTRS)
Oza, D. H.; Jones, T. L.; Feiertag, R.; Samii, M. V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.
1993-01-01
The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) commissioned Applied Technology Associates, Incorporated, to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system on a Disk Operating System (DOS)-based personal computer (PC) as a prototype system for sequential orbit determination of spacecraft. This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite (TDRS) System (TDRSS) user spacecraft, Landsat-4, obtained using RTOD/E, operating on a PC, with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the May 18-24, 1992, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. During this period, there were two separate orbit-adjust maneuvers on one of the TDRSS spacecraft (TDRS-East) and one small orbit-adjust maneuver for Landsat-4. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances and the first measurement residuals for the sequential case) of solutions produced by the batch and sequential methods. The forward-filtered RTOD/E orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were generally less than 30 meters after the filter had reached steady state.
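To make the batch-versus-sequential contrast in the two records above concrete, a toy scalar example (assumptions: constant true state, white measurement noise; this stands in for neither GTDS nor RTOD/E, which estimate full orbital states from tracking data):

    // Batch least squares uses all data at once; the sequential (Kalman)
    // filter updates its estimate one measurement at a time.
    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        std::mt19937 rng(1);
        std::normal_distribution<double> noise(0.0, 1.0);
        const double truth = 42.0;
        const int n = 50;
        std::vector<double> z(n);
        for (double& zi : z) zi = truth + noise(rng);   // simulated tracking data

        // Batch least squares: one solution from the whole data span.
        double batch = 0;
        for (double zi : z) batch += zi;
        batch /= n;

        // Sequential filter: prior state, prior variance, measurement variance.
        double x = 0.0, P = 1e6, R = 1.0;
        for (double zi : z) {
            double K = P / (P + R);   // Kalman gain
            x += K * (zi - x);        // measurement update
            P *= (1.0 - K);           // variance update (filter reaches steady state)
        }
        std::printf("truth=%.2f  batch=%.3f  sequential=%.3f (P=%.4f)\n",
                    truth, batch, x, P);
        return 0;
    }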
Organics removal from landfill leachate and activated sludge production in SBR reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klimiuk, Ewa; Kulikowska, Dorota
2006-07-01
This study is aimed at estimating organic compounds removal and sludge production in SBR during treatment of landfill leachate. Four series were performed. In each series, experiments were carried out at hydraulic retention times (HRT) of 12, 6, 3 and 2 d. The series varied in SBR filling strategies, duration of the mixing and aeration phases, and the sludge age. In series 1 and 2 (a short filling period, mixing and aeration phases in the operating cycle), the relationship between organics concentration (COD) in the leachate treated and HRT followed pseudo-first-order kinetics. In series 3 (with mixing and aeration phases) and series 4 (only aeration phase), with leachate supplied by means of a peristaltic pump for 4 h of the cycle (filling during the reaction period), this relationship followed zero-order kinetics. Activated sludge production expressed as the observed coefficient of biomass production (Y_obs) decreased correspondingly with increasing HRT. The smallest differences between reactors were observed in series 3, in which Y_obs was almost stable (0.55-0.6 mg VSS/mg COD). The elimination of the mixing phase in the cycle (series 4) caused the Y_obs to decrease significantly from 0.32 mg VSS/mg COD at HRT 2 d to 0.04 mg VSS/mg COD at HRT 12 d. The theoretical yield coefficient Y accounted for 0.534 mg VSS/mg COD (series 1) and 0.583 mg VSS/mg COD (series 2). In series 3 and 4, it was almost stable (0.628 mg VSS/mg COD and 0.616 mg VSS/mg COD, respectively). After the elimination of the mixing phase in the operating cycle, the specific biomass decay rate increased from 0.006 d^-1 (series 3) to 0.032 d^-1 (series 4). The operating conditions employing mixing/aeration or only aeration phases enable regulation of the sludge production. The SBRs operated under aerobic conditions are more favourable at a short hydraulic retention time; at a long hydraulic retention time, biomass concentration in the SBR can decrease as a result of cell decay. Accordingly, at long HRT, a short filling period and an operating cycle with both mixing and aeration phases seem the most favourable.
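A hedged numerical sketch of the yield bookkeeping used in the study, with hypothetical leachate numbers; the exponential form below is one common way to express pseudo-first-order COD removal versus HRT and is assumed here purely for illustration:

    // Observed yield: biomass produced per unit COD removed, plus a
    // pseudo-first-order COD-vs-HRT curve. All values hypothetical.
    #include <cmath>
    #include <cstdio>
    #include <initializer_list>

    int main() {
        double cod_in = 3000.0, cod_out = 600.0;   // mg COD/L
        double vss_produced = 1300.0;              // mg VSS/L over the same volume

        double y_obs = vss_produced / (cod_in - cod_out);
        std::printf("Y_obs = %.2f mg VSS / mg COD\n", y_obs);

        double k = 0.3;                            // 1/d, hypothetical rate constant
        for (double hrt : {2.0, 3.0, 6.0, 12.0})   // C = C0 * exp(-k * HRT)
            std::printf("HRT=%4.1f d -> effluent COD ~= %7.1f mg/L\n",
                        hrt, cod_in * std::exp(-k * hrt));
        return 0;
    }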
Haulage Truck Operator. Open Pit Mining Job Training Series.
ERIC Educational Resources Information Center
British Columbia Dept. of Education, Victoria.
This training outline for haulage truck operators, one in a series of eight outlines, is designed primarily for company training foremen or supervisors and for trainers to use as an industry-wide guideline for heavy equipment operator training in open pit mining in British Columbia. Intended as a guide for preparation of lesson plans both for…
Rotary Drill Operator. Open Pit Mining Job Training Series.
ERIC Educational Resources Information Center
Savilow, Bill
This training outline for rotary drill operators, one in a series of eight outlines, is designed primarily for company training foremen or supervisors and for trainers to use as an industry-wide guideline for heavy equipment operator training in open pit mining in British Columbia. Intended as a guide for preparation of lesson plans both for…
Track Dozer Operator. Open Pit Mining Job Training Series.
ERIC Educational Resources Information Center
British Columbia Dept. of Education, Victoria.
This training outline for track dozer operators, one in a series of eight outlines, is designed primarily for company training foremen or supervisors and for trainers to use as an industry-wide guideline for heavy equipment operator training in open pit mining in British Columbia. Intended as a guide for preparation of lesson plans both for…
Grader Operator. Open Pit Mining Job Training Series.
ERIC Educational Resources Information Center
Savilow, Bill
This training outline for grader operators, one in a series of eight outlines, is designed primarily for company training foremen or supervisors and for trainers to use as an industry-wide guideline for heavy equipment operator training in open pit mining in British Columbia. Intended as a guide for preparation of lesson plans both for classroom…
Rubber Tire Dozer Operator. Open Pit Mining Job Training Series.
ERIC Educational Resources Information Center
British Columbia Dept. of Education, Victoria.
This training outline for rubber tire dozer operators, one in a series of eight outlines, is designed primarily for company training foremen or supervisors and for trainers to use as an industry-wide guideline for heavy equipment operator training in open pit mining in British Columbia. Intended as a guide for preparation of lesson plans both for…
Front End Loader Operator. Open Pit Mining Job Training Series.
ERIC Educational Resources Information Center
Savilow, Bill
This training outline for front end loader operators, one in a series of eight outlines, is designed primarily for company training foremen or supervisors and for trainers to use as an industry-wide guideline for heavy equipment operator training in open pit mining in British Columbia. Intended as a guide for preparation of lesson plans both for…
Shovel Operator. Open Pit Mining Job Training Series.
ERIC Educational Resources Information Center
Hartley, Larry
This training outline for shovel operators, one in a series of eight outlines, is designed primarily for company training foremen or supervisors and for trainers to use as an industry-wide guideline for heavy equipment operator training in open pit mining in British Columbia. Intended as a guide for preparation of lesson plans both for classroom…
NASA Astrophysics Data System (ADS)
Noreen, Amna; Olaussen, Kåre
2012-10-01
A subroutine for a very-high-precision numerical solution of a class of ordinary differential equations is provided. For a given evaluation point and equation parameters the memory requirement scales linearly with precision P, and the number of algebraic operations scales roughly linearly with P when P becomes sufficiently large. We discuss results from extensive tests of the code, and how one, for a given evaluation point and equation parameters, may estimate precision loss and computing time in advance.
Program summary
Program title: seriesSolveOde1
Catalogue identifier: AEMW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMW_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 991
No. of bytes in distributed program, including test data, etc.: 488116
Distribution format: tar.gz
Programming language: C++
Computer: PCs or higher-performance computers.
Operating system: Linux and MacOS
RAM: Few to many megabytes (problem dependent).
Classification: 2.7, 4.3
External routines: CLN — Class Library for Numbers [1] built with the GNU MP library [2], and GSL — GNU Scientific Library [3] (only for time measurements).
Nature of problem: The differential equation
-s^2 ( d^2/dz^2 + ((1 - ν_+ - ν_-)/z) d/dz + ν_+ ν_-/z^2 ) ψ(z) + (1/z) ∑_{n=0}^{N} v_n z^n ψ(z) = 0
is solved numerically to very high precision. The evaluation point z and some or all of the equation parameters may be complex numbers; some or all of them may be represented exactly in terms of rational numbers.
Solution method: The solution ψ(z), and optionally ψ'(z), is evaluated at the point z by executing the recursion
A_{m+1}(z) = [ s^{-2} / ((m+1+ν-ν_+)(m+1+ν-ν_-)) ] ∑_{n=0}^{N} V_n(z) A_{m-n}(z),  ψ_{m+1}(z) = ψ_m(z) + A_{m+1}(z),
to sufficiently large m. Here ν is either ν_+ or ν_-, and V_n(z) = v_n z^{n+1}. The recursion is initialized by A_{-n}(z) = δ_{n,0} z^ν for n = 0, 1, …, N, and ψ_0(z) = A_0(z).
Restrictions: No solution is computed if z = 0, or s = 0, or if ν = ν_- (assuming Re ν_+ ≥ Re ν_-) with ν_+ - ν_- an integer, except when ν_+ - ν_- = 1 and v_0 = 0 (i.e., when z = 0 is an ordinary point for z^{-ν_-} ψ(z)).
Additional comments: The code of the main algorithm is in the file seriesSolveOde1.cc, which "#include"s the file checkForBreakOde1.cc. These routines, and the programs using them, must "#include" the file seriesSolveOde1.cc.
Running time: On a Linux PC that is a few years old, evaluating the ground-state wavefunction of the anharmonic oscillator at y = √10 (with the eigenvalue known in advance; cf. Eq. (6)) to an accuracy of P = 200 decimal digits takes about 2 ms, and about 40 min at an accuracy of P = 100000 decimal digits.
References:
[1] B. Haible and R.B. Kreckel, CLN — Class Library for Numbers, http://www.ginac.de/CLN/
[2] T. Granlund and collaborators, GMP — The GNU Multiple Precision Arithmetic Library, http://gmplib.org/
[3] M. Galassi et al., GNU Scientific Library Reference Manual (3rd Ed.), ISBN 0954612078, http://www.gnu.org/software/gsl/
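A double-precision C++ sketch of the series recursion above; the distributed code uses CLN arbitrary-precision types, which this toy deliberately omits, and the equation parameters below are hypothetical test values.

    // psi(z) = z^nu * sum_m a_m z^m with
    // a_{m+1} = s^{-2}/((m+1+nu-nu_+)(m+1+nu-nu_-)) * sum_n v_n a_{m-n}.
    #include <cmath>
    #include <complex>
    #include <cstdio>
    #include <vector>

    int main() {
        using cd = std::complex<double>;
        const cd nup = 0.0, num = 0.5, s = 1.0;           // nu_+, nu_-, s
        const std::vector<cd> v = {0.0, -1.0, 0.0, 1.0};  // v_0..v_N, N = 3
        const cd nu = nup;                                // expand the nu_+ solution
        const cd z = 0.3;

        std::vector<cd> a = {1.0};
        cd psi = std::pow(z, nu) * a[0];
        for (int m = 0; m < 60; ++m) {
            cd sum = 0;
            for (int n = 0; n <= m && n < (int)v.size(); ++n) sum += v[n] * a[m - n];
            cd am1 = sum / (s * s * (cd(m + 1) + nu - nup) * (cd(m + 1) + nu - num));
            a.push_back(am1);
            cd term = std::pow(z, nu + cd(m + 1)) * am1;
            psi += term;
            if (std::abs(term) < 1e-16 * std::abs(psi)) break;  // converged
        }
        std::printf("psi(%.2f) ~= %.12f%+.12fi  (%zu terms)\n",
                    z.real(), psi.real(), psi.imag(), a.size());
        return 0;
    }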